Decision Rule Induction Based on the Graph Theory

Izabela Kutschenreiter-Praszkiewicz

## Abstract

Graph theory is a well-known and widely used method of supporting the decision-making process. The present chapter presents an application of a decision tree for rule induction from a set of decision examples taken from past experiences. A decision tree is a graph where each internal (non-leaf) node denotes a test on an attribute which characterises a decision problem, each branch (also called an arc or edge) represents the outcome of a test (an attribute value), and each leaf (or terminal) node holds a class label which can be interpreted as a decision type. In the presented approach, the object-attribute-value (OAV) framework is used to characterise decision problems. The chapter presents a method of optimal decision tree induction. It discusses the Iterative Dichotomiser 3 (ID3) algorithm and provides an example of decision tree induction. Rules supporting decision-making in engineering are also developed in this chapter.

Keywords: decision tree, decision rule induction, ID3 algorithm, QFD, dependence network

## 1. Introduction: decision problems in industry

In order to solve decision problems, we need knowledge, information and data. Data are understood as a set of discrete, objective facts and events. Information represents a relationship between some pieces of data. For data to be transformed into information, raw data should be cleaned, corrected and processed while considering the context of a decision. Knowledge can be defined as the application of relevant information to address specific problems. Knowledge in decision problems is developed based on the decision-maker's experiences. Hence, data gathering and analysis is one of the necessary stages of decision support system development with the use of artificial intelligence.

The main issues related to decision support with the use of artificial intelligence are discussed below.


Decision problems in industry may be classified into one of the following decision categories:

• Structured
• Unstructured
• Semi-structured
Structured decisions are routine, and there are usually specific procedures or actions that can be identified to help make the decision. In case of unstructured decisions, decision scenarios often involve new or unique problems, and an individual has little or no programmatic or routine procedure for addressing the problem or making a decision. In semi-structured decisions, decision scenarios involve both some structured and some unstructured components.


Decision Rule Induction Based on the Graph Theory DOI: http://dx.doi.org/10.5772/intechopen.88737

According to the data analysed by Iyer et al. [1], in product design, only 20% of parts are new, while 80% of them are reused or modified [2], so knowledge management focused on knowledge reuse related to product and production process is an important field of analysis and can influence the effectiveness of production process preparation.

Knowledge representation is a field of artificial intelligence that focuses on designing computer representations that capture knowledge which can be used to solve problems.

A knowledge representation should be:

• Rich enough to express the knowledge needed to solve the problem
• As close to the problem as possible
• Compact, natural, and maintainable
• Easy to see the relationship between the representation and the domain being represented
• Able to conduct efficient computation, which usually means that it is able to express problem features that can be exploited for computational gain and able to trade off accuracy and computation time
• Able to be acquired from people, data and past experiences
According to the formalism used for knowledge representation, knowledge can be classified as:

• Procedural knowledge, which defines algorithms that help to achieve given goals
• Declarative knowledge, which gives the solution without analysing the problem structure
Taking into consideration knowledge transfer to another person, knowledge can be divided into:

• Tacit knowledge, which is difficult to express by words or equations
• Explicit knowledge, which is easy to formulate and transfer
Application of Decision Science in Business and Management


The object-attribute-value (OAV) model is a knowledge representation framework. Objects are understood in OAV as entities that are items being described. Attributes are understood as features, properties, characteristics or variables of a given object. Value is understood as a numerically discrete or linguistically defined attribute estimate.

OAV can be used in artificial intelligence methods as a framework of data analysis, useful for preparing data for a chosen knowledge representation method. A data analysis schema is presented in Figure 1.

Figure 1. A data analysis schema.

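As a minimal illustration (the dictionary layout and field names here are mine, not a standard from the chapter), one decision example in the OAV framework could be encoded as:

```python
# One decision example in the object-attribute-value (OAV) framework:
# the object is the described case, each attribute holds one value, and
# the class label (the decision) closes the record.
record = {
    "object": "bearing selection, case 1",
    "attributes": {
        "principal dimensions": "small",
        "basic dynamic load rating": "big",
        "delivery time": "short",
        "price": "high",
    },
    "decision": "w2",
}
```

A training data set is then simply a list of such records, which is the form the ID3 algorithm below consumes.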
Knowledge representation methods include:

• Decision rules—which transform premises (e.g. observations expressed in the OAV framework) into conclusions (e.g. a decision expressed in the OAV framework).
• Decision trees—which are graph representations of the decision process. Inspecting the conditions in the decision path starts from the beginning node, called the root, and proceeds to the leaves, which give the decision.
• Frames—which are used when information units are characterised by many important features. The structure of a simple frame contains three different lines: a heading with the frame name, a pointer to another frame with an appropriate relation and slots defining attribute names and values.
• Semantic networks—which capture knowledge in the form of a graph in which nodes represent pieces of information (objects, concepts or situations in the problem domain) and the arcs represent relations or associations between them.
• Artificial neural networks (ANNs)—which are inspired by neurons in the brain and have become a popular target representation for learning. The idea of ANN usage is to create a learning set which includes data characterised by input and output features. During training, ANNs create a model which is able to transform input features into output features of a data set.
Literature presents different methods of formal knowledge representation which have been successfully applied to support various decision problems. Joshi and Dutta [3] use a rule-based method to recognise and simplify features in freeform surface models for automatic finite element mesh generation. Cakir et al. [4] apply a rule-based expert system for die and mould making. According to Trabelsi et al. [5], decision trees are recognised as effective and efficient machine learning approaches which have been successfully applied to solve real-world problems.

## 2. Data preparation for decision problem solving

One of the phases of knowledge acquisition processes based on a learning data set is the feature selection phase [6].

Obtaining data and selecting features for a given problem can be supported by means of the quality function deployment (QFD) method. QFD was developed to link product users and engineering attributes [7, 8]. QFD can be used for attribute identification not only in product development but also for decision support.

QFD helps to build a partnership between decision-makers and experts who are able to solve a given decision problem.

Figure 2. QFD for DSS attribute identification and training data set development.


The structure of the QFD matrix was presented in Figure 2. The attributes identified in the QFD matrix can be used for obtaining data and developing a training data set.

Decision-making can be supported with the use of an algorithm presented in Figure 3.

Figure 3. Decision-making with the use of chosen knowledge representation methods.

## 3. Decision tree development with the use of the ID3 algorithm

A decision tree is a commonly used decision support tool. It is built with the use of a training data set which contains examples of a decision problem solved in the past, characterised with the use of the OAV framework. A decision tree example is presented in Figure 4 [9].

A proper decision tree uses meaningful attributes in the decision-making process. In the presented approach, decisions are generated by a decision tree built with the use of the ID3 (Iterative Dichotomiser 3) algorithm, which supports feature selection and helps to avoid keeping more or fewer attributes than is appropriate [10–12]. The basic idea of the ID3 family of algorithms is to develop decision trees by growing them from the most important attribute, which is at the root, and selecting the next best attribute for each new decision branch added to the tree [12].

The ID3 algorithm [13, 14] uses entropy, which is understood as a measure of the amount of uncertainty in the training data set and which is calculated according to Eq. (1):

$$I = \sum\_{i=1}^{n} (-p\_i \log\_2 p\_i) \tag{1}$$

where:

p_i is the probability that an element from class i occurs.

Entropy of a given attribute is calculated according to Eq. (2):

$$I(C \mid A\_k) = \sum\_{j=1}^{M\_k} p(a\_k, j) \bullet \left[ -\sum\_{i=1}^{N} (p(c\_i \mid a\_{k,j}) \bullet \log\_2 p(c\_i \mid a\_{k,j})) \right] \tag{2}$$

where M_k is the number of values taken by attribute A_k; N is the number of classes; k is the attribute index; p(a_k, j) is the probability that A_k takes value j; p(c_i | a_k,j) is the probability that class c_i occurs when A_k takes value j.

Figure 4. A decision tree example.

Information gained is calculated according to Eq. (3):

$$\bigtriangleup \text{I}(\text{A}\_k) = \text{I} - \text{I}(\text{C}/\text{A}\_k) \tag{3}$$
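Eqs. (1)–(3) can be sketched in Python as follows (a minimal illustration; the function names and list-based data layout are my assumptions, not the chapter's notation):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Eq. (1): I = sum of -p_i * log2(p_i) over the class frequencies."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def conditional_entropy(values, labels):
    """Eq. (2): entropy of the class labels given one attribute's values."""
    n = len(labels)
    total = 0.0
    for v in set(values):
        subset = [lab for x, lab in zip(values, labels) if x == v]
        total += len(subset) / n * entropy(subset)
    return total

def information_gain(values, labels):
    """Eq. (3): gain = I - I(C|Ak)."""
    return entropy(labels) - conditional_entropy(values, labels)
```

For the training data of the example in Section 5, the entropy of the eight decisions comes out to about 0.954, matching Eq. (4).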

A decision tree starts from the root node which is created with the use of the most meaningful attribute for which information gain is the highest. The next node in a decision tree is created with the use of the attribute for which information gain has a high value.

A pseudocode for the ID3 algorithm includes the following stages:

• Calculating entropy for the training data set (Eq. (1)).
• Calculating entropy for each attribute (Eq. (2)).
• Calculating information gain for each attribute (Eq. (3)).
• Creating the root node in the decision tree with the attribute for which information gain is the highest.
• Creating decision tree branches using the attribute and its values for which information gain is high.
• Classifying examples from the training data set, taking into consideration the attribute which has high information gain.
• If all cases for the given attribute and values lead unambiguously to a decision, a decision leaf should be added at the end of the decision tree branch.
• If it is not possible to classify all cases from the training data set in a given decision tree branch to a decision, it is necessary to add a new node with the given attribute and its values.
The two last steps should be repeated until all branches in the decision tree end with a decision leaf.

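The stages above can be sketched as a recursive function (a simplified illustration under my own naming, not the chapter's implementation; ties between equally good attributes are broken arbitrarily, and the majority-decision fallback for the case when no attributes remain is an assumption):

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gain(examples, attr):
    """Information gain of one attribute over a list of (attributes, decision) pairs."""
    labels = [d for _, d in examples]
    by_value = Counter(e[attr] for e, _ in examples)
    cond = sum(count / len(examples)
               * entropy([d for e, d in examples if e[attr] == v])
               for v, count in by_value.items())
    return entropy(labels) - cond

def id3(examples, attributes):
    """examples: list of (attribute dict, decision). Returns a decision leaf
    (a string) or a nested dict {attribute: {value: subtree}}."""
    decisions = [d for _, d in examples]
    if len(set(decisions)) == 1:      # all cases lead unambiguously to a decision
        return decisions[0]
    if not attributes:                # no attribute left: fall back to the majority
        return Counter(decisions).most_common(1)[0][0]
    best = max(attributes, key=lambda a: gain(examples, a))
    return {best: {v: id3([(e, d) for e, d in examples if e[best] == v],
                          [a for a in attributes if a != best])
                   for v in {e[best] for e, _ in examples}}}
```

Run on the training data of Section 5, this sketch selects basic dynamic load rating as the root, in agreement with the information gain ranking in Table 2.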
## 4. Decision rule formulation

## 4.1 Formulating and transforming rules

Based on a decision tree, it is possible to formulate decision rules which consist of premises and a conclusion. The rule can be formulated as follows:

If premise 1 and/or premise 2 and/or premise… then conclusion.

where, in the OAV framework, a premise is expressed as attribute… = value… and a conclusion is expressed as decision = decision variant….

Each premise consists of an attribute (a decision tree node) and a value (a decision tree branch) which create the given path (branch sequence) in a decision tree. A conclusion is understood as a decision leaf at the end of a given path in the decision tree.

The rules directly coming from a decision tree transform each decision tree branch into a given rule, so the obtained set of rules can include some rules which consist of different premises and the same conclusions. In such a case, it is necessary to transform the rules.

The rules which come directly from a decision tree use the operator "and" between the premises. Rule transformation joins the premise sequences (decision tree paths) that share the same conclusion into one complex rule, using the operator "or" between them. The algorithm for rule formulation is presented in Figure 5.

Figure 5. Rule formulation algorithm.

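Reading the "and" rules off a tree can be sketched as a walk over a nested-dict tree (the dictionary representation is a hypothetical choice of mine, matching the ID3 sketch earlier; each root-to-leaf path yields one rule):

```python
def tree_to_rules(tree, path=()):
    """Each root-to-leaf path of the nested-dict tree becomes one rule:
    the (attribute, value) pairs along the path are the 'and' premises,
    and the leaf is the conclusion."""
    if not isinstance(tree, dict):          # reached a decision leaf
        return [(path, tree)]
    (attr, branches), = tree.items()        # exactly one attribute per node
    rules = []
    for value, subtree in branches.items():
        rules.extend(tree_to_rules(subtree, path + ((attr, value),)))
    return rules
```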

## 4.2 An example of decision rule transformation

In the example, a set of initial rules comes from the decision tree presented in Figure 4 and includes seven rules:

Rule 1: If attribute 1 = value 1.1 and attribute 2 = value 2.1 then decision = decision 3.
Rule 2: If attribute 1 = value 1.1 and attribute 2 = value 2.2 then decision = decision 1.
Rule 3: If attribute 1 = value 1.2 and attribute 2 = value 2.1 and attribute 3 = value 3.1 then decision = decision 2.
Rule 4: If attribute 1 = value 1.2 and attribute 2 = value 2.1 and attribute 3 = value 3.2 then decision = decision 1.
Rule 5: If attribute 1 = value 1.2 and attribute 2 = value 2.2 and attribute 3 = value 3.1 and attribute 4 = value 4.1 then decision = decision 2.
Rule 6: If attribute 1 = value 1.2 and attribute 2 = value 2.2 and attribute 3 = value 3.1 and attribute 4 = value 4.2 then decision = decision 1.
Rule 7: If attribute 1 = value 1.2 and attribute 2 = value 2.2 and attribute 3 = value 3.2 then decision = decision 2.

The set of rules has to be transformed into complex rules with the use of the operator "or". The number of complex rules is equal to the number of decision variants.

Rule 1: If (attribute 1 = value 1.1 and attribute 2 = value 2.2) or (attribute 1 = value 1.2 and attribute 2 = value 2.1 and attribute 3 = value 3.2) or (attribute 1 = value 1.2 and attribute 2 = value 2.2 and attribute 3 = value 3.1 and attribute 4 = value 4.2) then decision = decision 1.
Rule 2: If (attribute 1 = value 1.2 and attribute 2 = value 2.1 and attribute 3 = value 3.1) or (attribute 1 = value 1.2 and attribute 2 = value 2.2 and attribute 3 = value 3.1 and attribute 4 = value 4.1) or (attribute 1 = value 1.2 and attribute 2 = value 2.2 and attribute 3 = value 3.2) then decision = decision 2.
Rule 3: If attribute 1 = value 1.1 and attribute 2 = value 2.1 then decision = decision 3.
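The transformation above amounts to grouping premise sequences by conclusion. A sketch (attribute and value names abbreviated to a1/1.1 etc. purely for brevity):

```python
from collections import defaultdict

# The seven initial rules as (premises, conclusion);
# each premise is an (attribute, value) pair joined by "and".
initial_rules = [
    ((("a1", "1.1"), ("a2", "2.1")), "decision 3"),
    ((("a1", "1.1"), ("a2", "2.2")), "decision 1"),
    ((("a1", "1.2"), ("a2", "2.1"), ("a3", "3.1")), "decision 2"),
    ((("a1", "1.2"), ("a2", "2.1"), ("a3", "3.2")), "decision 1"),
    ((("a1", "1.2"), ("a2", "2.2"), ("a3", "3.1"), ("a4", "4.1")), "decision 2"),
    ((("a1", "1.2"), ("a2", "2.2"), ("a3", "3.1"), ("a4", "4.2")), "decision 1"),
    ((("a1", "1.2"), ("a2", "2.2"), ("a3", "3.2")), "decision 2"),
]

# "or"-join the premise sequences that share a conclusion:
# one complex rule per decision variant.
complex_rules = defaultdict(list)
for premises, conclusion in initial_rules:
    complex_rules[conclusion].append(premises)
```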

## 4.3 Checking the completeness of the rules

Rule completeness can be checked with the use of a dependence network which consists of the components presented in Figure 6 [13].

Figure 6. Components of a dependence network.
An example of a dependence network is presented in Figure 7.

Figure 7. An example of a dependence network.
Formal correctness of the rules can be verified by a dependence network, but substantial correctness can be verified by using the rules in the knowledge-based system (KBS).

KBS is essentially composed of two subsystems:

• The knowledge base
• The inference engine

The knowledge base consists of rules, whereas the inference engine is an automated reasoning system that evaluates the current state of the knowledge base, applies relevant rules and then asserts new knowledge into the knowledge base.

There are primarily two modes for the inference engine:

• Forward chaining
• Backward chaining
In case of backward chaining, the system looks at possible conclusions and works backwards to see if they might be true.

Forward chaining starts with the available data and uses inference rules to extract more data until a goal is reached.
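The forward-chaining loop can be sketched in a few lines (the facts and rules in the usage below are abstract placeholders, not from the chapter's example):

```python
def forward_chain(facts, rules):
    """rules: list of (premise set, conclusion). Keep firing rules whose
    premises are all satisfied until no new fact can be asserted."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # assert new knowledge into the base
                changed = True
    return facts
```

For example, with rules a→b and b→c, starting from the fact a the engine derives both b and c.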

Software tools for KBS include:

• Programming languages (e.g. Pascal, C++, Java, LISP, Prolog, etc.)
• Expert system shells (e.g. Sphinx)
## 5. An example

The decision problem analysed in the example is to select the right bearing for a toothed gear. The QFD matrix for the decision problem is presented in Figure 8.

Figure 8. An example of the QFD matrix.

The data from the matrix presented in Figure 8 creates the first record in the training data set presented in Table 1. The data from the matrix was transformed according to the schema presented in Figure 9.

Figure 9. An example of the OAV framework for a training set.

The training data set was presented in Table 1.

The importance ranking of the attributes was set with the use of the ID3 algorithm (Table 2).

An example of information gain calculation for Ak = principal dimensions (Eqs. 4–6):

$$I = -\frac{3}{8}\log_2\frac{3}{8} - \frac{5}{8}\log_2\frac{5}{8} = 0.954 \tag{4}$$

| No | Principal dimensions (small, medium, big) | Basic dynamic load rating (small, big) | Basic static load rating (small, big) | Delivery time (short, long) | Price (high, low) | Decision |
|---|---|---|---|---|---|---|
| 1 | Small | Big | Small | Short | High | w2 |
| 2 | Medium | Big | Big | Short | High | w1 |
| 3 | Medium | Big | Big | Short | Low | w2 |
| 4 | Small | Small | Big | Long | High | w1 |
| 5 | Medium | Small | Small | Short | Low | w1 |
| 6 | Big | Big | Small | Long | High | w2 |
| 7 | Big | Small | Big | Long | Low | w1 |
| 8 | Small | Big | Big | Long | Low | w1 |

Table 1. A training data set example.


$$I(C \mid A_k) = \frac{3}{8}\left(-\frac{2}{3}\log_2\frac{2}{3} - \frac{1}{3}\log_2\frac{1}{3}\right) + \frac{3}{8}\left(-\frac{2}{3}\log_2\frac{2}{3} - \frac{1}{3}\log_2\frac{1}{3}\right) + \frac{2}{8}\left(-\frac{1}{2}\log_2\frac{1}{2} - \frac{1}{2}\log_2\frac{1}{2}\right) = 0.939 \tag{5}$$

$$\Delta I(A_k) = 0.954 - 0.939 = 0.015 \tag{6}$$

The results for all attributes are summarised in Table 2:

| Attribute | Entropy of the attribute I(C/Ak) | Information gain ΔI(Ak) | Attribute importance (ranking) |
|---|---|---|---|
| Principal dimensions | 0.939 | 0.015 | 5 |
| Basic dynamic load rating (BDLR) | 0.607 | 0.347 | 1 |
| Basic static load rating (BSLR) | 0.796 | 0.158 | 2 |
| Delivery time (DT) | 0.906 | 0.048 | 3 |
| Price | 0.906 | 0.048 | 4 |

Entropy of the system: I = 0.954.

Table 2. Information gain calculation.

The decision tree for the training data set in the example was presented in Figure 10.

Figure 10. An exemplary decision tree.
Decision rule induction based on the decision tree presented in Figure 10 is as follows:

Rule 1:
If basic dynamic load rating = small then decision = w1.

Rule 2:
If basic dynamic load rating = big and basic static load rating = small then decision = w2.

Rule 3:
If basic dynamic load rating = big and basic static load rating = big and delivery time = long then decision = w2.

Rule 4:
If basic dynamic load rating = big and basic static load rating = big and delivery time = short and price = high then decision = w1.


Rule 5:

If basic dynamic load rating = big and basic static load rating = big and delivery time = short and price = low then decision = w2.


The transformed rules:

Rule 1:

If (basic dynamic load rating = small) or (basic dynamic load rating = big and basic static load rating = big and delivery time = short and price = high) then decision = w1.


Rule 2:

If (basic dynamic load rating = big and basic static load rating = small) or (basic dynamic load rating = big and basic static load rating = big and delivery time = long) or (basic dynamic load rating = big and basic static load rating = big and delivery time = short and price = low) then decision = w2.

The dependence network for checking the transformed rules was presented in Figure 11.

Figure 11. Dependence network.

## 6. Conclusions

The decision-making process in enterprises needs knowledge, information and data. A decision-maker's knowledge is developed based on past experiences and can be supported by means of artificial intelligence methods, such as rules, which are among the widely used methods of knowledge representation. Decision trees are an efficient method of rule formulation. This chapter presents the methodology of decision tree induction with the Iterative Dichotomiser 3 algorithm, as well as rule formulation and checking with the use of a dependence network.


A useful method for data analysis in the decision process is Quality Function Deployment, which organises data both from the customer and expert perspectives. The data was analysed with the object-attribute-value framework in which attributes and their values are used for building a training data set.

An example related to bearing selection was presented to illustrate the decision process.

## Author details

Izabela Kutschenreiter-Praszkiewicz University of Bielsko-Biała, Bielsko-Biała, Poland

\*Address all correspondence to: ipraszkiewicz@ath.bielsko.pl

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


## References


Application of Decision Science in Business and Management


[1] Iyer N, Jayanti S, Lou K, Kalyanaraman Y, Ramani K. Three-dimensional shape searching: State-of-the-art review and future trends. Computer-Aided Design. 2005;37(5):509-530

[2] Li X, Zhang S, Huang R, Huang B, Xu C, Zhang Y. A survey of knowledge representation methods and applications in machining process planning. The International Journal of Advanced Manufacturing Technology. 2018;98:3041-3059

[3] Joshi N, Dutta D. Feature simplification techniques for freeform surface models. Journal of Computing and Information Science in Engineering. 2003;3(3):177-186

[4] Cakir MC, Irfan O, Cavdar K. An expert system approach for die and mold making operations. Robotics and Computer-Integrated Manufacturing. 2005;21(2):175-183

[5] Trabelsi A, Elouedi Z, Lefevre E. Decision tree classifiers for evidential attribute values and class labels. Fuzzy Sets and Systems. 2019; 366:46-62

[6] Moczulski W, Szulim R, Tomasik P, Wachla D. Knowledge discovery in databases. In: Korbicz J, Kościelny JM, editors. Modeling, Diagnostics and Process Control. Berlin, Heidelberg: Springer; 2010. pp. 119-152

[7] Chen W, Hoyle C, Wassenaar H. Decision-Based Design. London: Springer-Verlag; 2013. pp. 13-34. DOI: 10.1007/978-1-4471-4036-8\_2

[8] Molloy O, Warman EA, Tilley S. Design for Manufacturing and Assembly, Concepts, Architectures and Implementation. USA: Springer; 1998. DOI: 10.1007/978-1-4615-5785-2

[9] Kutschenreiter-Praszkiewicz I. Graph-based decision making in industry. In: Sirmacek B, editor. Graph-Based Decision Making in Industry. IntechOpen; 2018

[10] Zhao X, Deng W, Shi Y. Feature selection with attributes clustering by maximal information coefficient. Procedia Computer Science. 2013;17: 70-79

[11] Mehrotra L, Saxena PS, Doohan N. Implementation of modified ID3 algorithm. In: Mishra DK et al., editors. Information and Communication Technology for Sustainable Development. Lecture Notes in Networks and Systems 9. Springer Nature Singapore Pte Ltd; 2018. DOI: 10.1007/978-981-10-3932-4\_6

[12] Xu M, Wang J-L, Chen T. Improved decision tree algorithm: ID3+. In: Huang D-S, Li K, Irwin GW, editors. Intelligent Computing in Signal Processing and Pattern Recognition. Vol. 345. Berlin Heidelberg: Springer-Verlag; 2006. pp. 141-149

[13] Mulawka J. Systemy Ekspertowe. Warszawa: WNT; 1997

[14] Quinlan JR. Induction of decision trees. Machine Learning. 1986;1:81-106

## Chapter 7

Inventory Policies for Deteriorating Items with Maximum Lifetime under Downstream Partial Trade Credits to Credit-Risk Customers by Discounted Cash Flow Analysis

Nirmal Kumar Duari, Sorforaj Nowaj and Jobin George Varghese

## Abstract

Getting loans from banks has been almost impossible since the 2008 global financial crisis. As a result, about 80% of companies in the United Kingdom and the United States offer their products to customers on various short-term, interest-free credit arrangements. Numerous researchers and academicians apply discounted cash flow (DCF) analysis merely to compute the interest earned and charged during the credit period, but not to the revenue and the other costs, which are considerably larger than the interest earned and charged. In addition, some products deteriorate continuously and cannot be sold after their expiration date; however, only a small number of researchers have incorporated product lifetime into their models. In this chapter, a supplier-retailer-customer chain model is developed. The supplier provides an upstream full trade credit to the retailer, and the credit-risk customer receives a downstream partial trade credit from the retailer. The deterioration rate is non-decreasing and reaches 100% as the product approaches its expiration date. DCF analysis is applied to compute all relevant costs. The retailer's optimal replenishment cycle time not only exists but is also unique, as demonstrated in this chapter and illustrated by numerical examples.

Keywords: supply chain management, deterioration, expiration dates, trade credit, discount cash flow, credit-risk customer

## 1. Introduction

In traditional business transactions, it was implicitly assumed that the buyer must pay the procurement cost when products are received. However, in today's competitive markets, most companies offer buyers various credit terms to impel sales and hence reduce inventory. In the United Kingdom, estimates suggest that more than 80% of business transactions are made on credit, while about