2. Technical aspects of fuzzy measures

Zadeh [2] introduced fuzzy set theory, which is associated with the vagueness arising in human cognition. In the universe of classical sets, the transition for an element between membership and nonmembership is abrupt, whereas in the universe of fuzzy sets the transition is gradual. Thus, the membership function describes the vagueness and ambiguity of an element and takes values in the interval [0, 1].


Fuzzy Information Measures with Multiple Parameters http://dx.doi.org/10.5772/intechopen.78803 11


Kapur [7] explained the concept of fuzzy entropy by considering the vector $\left(\mu\_A(\mathbf{x}\_1), \mu\_A(\mathbf{x}\_2), \ldots, \mu\_A(\mathbf{x}\_n)\right)$.

If $\mu\_A(\mathbf{x}\_i) = 0$, the ith element does not belong to the set A, and if $\mu\_A(\mathbf{x}\_i) = 1$, the ith element belongs to the set A. If $\mu\_A(\mathbf{x}\_i) = 0.5$, the highest ambiguity arises as to whether the ith element belongs to A or not. Thus, $\left(\mu\_A(\mathbf{x}\_1), \mu\_A(\mathbf{x}\_2), \ldots, \mu\_A(\mathbf{x}\_n)\right)$ is termed a fuzzy vector and the set A is identified as a fuzzy set. Crisp sets are those sets in which the membership of each element is 0 or 1, so no uncertainty arises in them, whereas sets in which some memberships are 0 or 1 and the others lie between 0 and 1 are called fuzzy sets. A fuzzy set A is represented as $A = \left\{ \mathbf{x}\_i / \mu\_A(\mathbf{x}\_i);\; i = 1, 2, \ldots, n \right\}$, where $\mu\_A(\mathbf{x}\_i)$ gives the degree of belongingness of the element $\mathbf{x}\_i$ to A. We explain the concept of the membership function $\mu\_A : X \to [0, 1]$ as follows:

$$\mu\_A(\mathbf{x}\_i) = \left\{ \begin{array}{ll} 0, & \text{if } \mathbf{x}\_i \notin A \text{ and there is no ambiguity} \\\\ 1, & \text{if } \mathbf{x}\_i \in A \text{ and there is no ambiguity} \\\\ 0.5, & \text{if there is maximum ambiguity whether } \mathbf{x}\_i \in A \text{ or } \mathbf{x}\_i \notin A \end{array} \right\} \tag{10}$$

Further, if $\mu\_B(\mathbf{x}\_i)$ equals either $\mu\_A(\mathbf{x}\_i)$ or $1 - \mu\_A(\mathbf{x}\_i)$, then the fuzzy sets A and B are characterised as fuzzy equivalent sets. Two sets can have the same entropy without being fuzzy equivalent, but fuzzy equivalent sets obviously have identical entropy. Now, if all the membership values of a class of fuzzy equivalent sets are less than or equal to 0.5, then that set is defined as a standard fuzzy set.

For any fuzzy set A\* to be a sharpened version of the set A, the following requirements have to be fulfilled:

$$
\mu\_{A^\*} (\mathbf{x}\_i) \le \mu\_A(\mathbf{x}\_i), \text{if } \; \mu\_A(\mathbf{x}\_i) \le 0.5; \forall i \tag{11}
$$

and

$$\mu\_{A^\*} (\mathbf{x}\_i) \ge \mu\_A (\mathbf{x}\_i), \text{if } \mu\_A (\mathbf{x}\_i) \ge 0.5; \forall i \tag{12}$$
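As a quick numerical illustration of conditions (11) and (12), the Python sketch below (the sets `A` and `A_star` are hypothetical examples) checks the sharpening conditions and the classical consequence that sharpening never increases fuzziness, here measured with a binary-entropy-style per-element function as one admissible choice:

```python
import math

def is_sharpened(a_star, a):
    """Eqs. (11)-(12): every membership in a_star moves from a
    toward the nearer crisp value 0 or 1."""
    return all(
        s <= m if m <= 0.5 else s >= m
        for s, m in zip(a_star, a)
    )

def fuzzy_entropy(a):
    """Total fuzziness with f taken as binary entropy (one admissible f)."""
    def f(m):
        if m in (0.0, 1.0):
            return 0.0
        return -(m * math.log(m) + (1 - m) * math.log(1 - m))
    return sum(f(m) for m in a)

A      = [0.2, 0.4, 0.7, 0.9]
A_star = [0.1, 0.3, 0.8, 1.0]   # every value pushed toward its nearer crisp end

assert is_sharpened(A_star, A)
assert fuzzy_entropy(A_star) <= fuzzy_entropy(A)
```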

Thus, when $\mathbf{x}\_1, \mathbf{x}\_2, \ldots, \mathbf{x}\_n$ are components of the universe of discourse, the values $\mu\_A(\mathbf{x}\_1), \mu\_A(\mathbf{x}\_2), \ldots, \mu\_A(\mathbf{x}\_n)$ are positioned between 0 and 1; but since their sum is not unity, they are not considered probabilities. However,

$$\phi\_A(\mathbf{x}\_i) = \frac{\mu\_A(\mathbf{x}\_i)}{\sum\_{i=1}^n \mu\_A(\mathbf{x}\_i)}, \quad i = 1, 2, \dots, n \tag{13}$$

gives a probability distribution.
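A minimal Python sketch of Eq. (13); the vector `mu_A` is an arbitrary illustrative example:

```python
def membership_to_distribution(mu):
    """Eq. (13): normalise a fuzzy vector into a probability distribution."""
    total = sum(mu)
    return [m / total for m in mu]

mu_A = [0.2, 0.5, 0.8, 0.5]               # memberships need not sum to 1
phi = membership_to_distribution(mu_A)

assert abs(sum(phi) - 1.0) < 1e-9         # phi is a genuine distribution
assert abs(phi[0] - 0.1) < 1e-9           # 0.2 / 2.0
```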

For the ith element, fuzzy uncertainty is defined as $f\left(\mu\_A(\mathbf{x}\_i)\right)$ with the following properties:


1. $f(x) = 0$ when $x = 0$ or $x = 1$.

2. $f(x)$ increases as $x$ goes from 0 to 0.5.

3. $f(x)$ decreases as $x$ goes from 0.5 to 1.

4. $f(x) = f(1 - x)$.


10 Fuzzy Logic Based in Optimization Methods and Control Systems and Its Applications


The total fuzzy uncertainty defined as fuzzy entropy for n independent elements is given by,

$$H(A) \quad = \sum\_{i=1}^{n} f\left(\mu\_A(\mathbf{x}\_i)\right) \tag{14}$$

This is called fuzzy entropy.
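A minimal Python sketch of Eq. (14), taking the binary (Shannon) entropy as one admissible choice of f satisfying the four properties above; the membership vectors are illustrative:

```python
import math

def f(m):
    """Binary entropy (base 2): one admissible per-element uncertainty
    satisfying properties 1-4 above."""
    if m in (0.0, 1.0):
        return 0.0
    return -(m * math.log2(m) + (1 - m) * math.log2(1 - m))

def H(a):
    """Eq. (14): total fuzzy entropy of n independent elements."""
    return sum(f(m) for m in a)

crisp      = [0.0, 1.0, 1.0, 0.0]
most_fuzzy = [0.5, 0.5, 0.5, 0.5]

assert H(crisp) == 0.0                    # no fuzziness in a crisp set
assert H(most_fuzzy) == 4.0               # each element contributes 1 bit
assert abs(f(0.3) - f(0.7)) < 1e-9        # property 4: f(x) = f(1 - x)
```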

When there is uncertainty due to fuzziness of information, it is known as a fuzzy entropy measure, whereas when the uncertainty is due to information available in terms of a probability distribution, it is known as probabilistic entropy. The following similarities and dissimilarities exist between fuzzy entropy measures and probabilistic entropy measures:

1. $0 \leq p\_i \leq 1$ for each i. Also $0 \leq \mu\_A(\mathbf{x}\_i) \leq 1$ for each i.

2. $\sum\_{i=1}^{n} p\_i = 1$ for all probability distributions, but $\sum\_{i=1}^{n} \mu\_A(\mathbf{x}\_i)$ need not be equal to unity, and it need not even be the same for all fuzzy sets.

3. The probabilistic uncertainty measure measures how close the probability distribution (p<sub>1</sub>, p<sub>2</sub>, …, p<sub>n</sub>) is to the uniform distribution (1/n, 1/n, …, 1/n) and how far away it is from degenerate distributions. Fuzzy uncertainty measures how close the fuzzy distribution is to the most fuzzy vector (1/2, 1/2, …, 1/2) and how far it is from the distributions of crisp sets.

4. $\mu\_A(\mathbf{x}\_i)$ gives the same degree of fuzziness as $1 - \mu\_A(\mathbf{x}\_i)$ because both are equidistant from 1/2 and from the crisp-set values 0 and 1. However, probabilities p and 1 − p make different contributions to probabilistic uncertainty. As such, while most measures of fuzzy entropy are of the form $\sum\_{i=1}^{n} f\left(\mu\_A(\mathbf{x}\_i)\right) + \sum\_{i=1}^{n} f\left(1 - \mu\_A(\mathbf{x}\_i)\right)$, most measures of probabilistic entropy are of the form $\sum\_{i=1}^{n} f\left(p\_i\right)$.

5. Similarly, while many measures of fuzzy directed divergence are of the form $\sum\_{i=1}^{n} f\left(\mu\_A(\mathbf{x}\_i), \mu\_B(\mathbf{x}\_i)\right) + \sum\_{i=1}^{n} f\left(1 - \mu\_A(\mathbf{x}\_i), 1 - \mu\_B(\mathbf{x}\_i)\right)$, most of the probabilistic measures are of the form $\sum\_{i=1}^{n} f\left(p\_i, q\_i\right)$. For each measure of probabilistic entropy or directed divergence, we have a corresponding measure of fuzzy entropy and fuzzy directed divergence, and vice-versa.


6. The common properties arise from the consideration that both types of measures are based on measures of distance from (1/n, 1/n, …, 1/n) in one case and from (1/2, 1/2, …, 1/2) in the other.

7. The dissimilarity arises because, while $\sum\_{i=1}^{n} p\_i = 1$, $\sum\_{i=1}^{n} \mu\_A(\mathbf{x}\_i)$ is not 1. The probabilities of n − 1 outcomes determine the probability of the nth outcome. However, the fuzziness of the n elements of a fuzzy set is quite independent, and our knowledge of the fuzziness of n − 1 elements gives us no information about the fuzziness of the nth element.

8. Conceptually, the two types of uncertainty are poles apart. One deals with probabilities or relative frequencies and repeated experiments, while the other deals with the estimation of fuzzy values. Probabilities can be determined objectively and experimentally and should naturally be the same for everyone. Fuzziness is one's perception of the membership of an element of a set and can be subjective. However, after finding the fuzzy value for every member of the set, everything else is objective. In probability theory also, after assigning probabilities, everything is objective.

9. Fuzzy and probabilistic entropies are concave functions of $\mu\_A(\mathbf{x}\_1), \mu\_A(\mathbf{x}\_2), \ldots, \mu\_A(\mathbf{x}\_n)$ and $p\_1, p\_2, \ldots, p\_n$ respectively. If we start with any value of $\left(\mu\_A(\mathbf{x}\_1), \ldots, \mu\_A(\mathbf{x}\_n)\right)$ and approach the vector (1/2, 1/2, …, 1/2), the fuzzy entropy will increase. Similarly, if we start with any probability vector (p<sub>1</sub>, p<sub>2</sub>, …, p<sub>n</sub>) and approach the vector (1/n, 1/n, …, 1/n), the probabilistic entropy will increase. Thus, $Z = F\left(\mu\_A(\mathbf{x}\_1), \ldots, \mu\_A(\mathbf{x}\_n)\right)$, where F is a fuzzy entropy, is a concave surface with maximum value at (1/2, 1/2, …, 1/2). Similarly, $Z = G\left(p\_1, p\_2, \ldots, p\_n\right)$, where G is a probabilistic entropy, is a concave surface with maximum value at (1/n, 1/n, …, 1/n).



Fuzziness can be found in the way we process information, make decisions and use language. Many real-world objects and much of human thinking have uncertainty and fuzziness as part of their fundamental nature. Uncertainty and fuzziness are removed by the utilization of information: the amount of information is the quantity of uncertainty eliminated, whereas the quantity of fuzziness is the degree of vagueness and ambiguity of uncertainties.

The theory of fuzziness is related to various areas of research such as statistics, information theory, clustering and decision analysis, medical and socio-economic prediction, image processing, etc. The preparation and analysis of information-processing methods are applications of the mathematical models related to systems research.

The theories and applications that have been developed from the concept of fuzziness form an extremely large field, of which we deal with only a small area here.

We state problems in the form of decision, management and prediction, and we find their solutions by analysing, understanding and utilizing information. Thus, a significant quantity of information, together with a significant quantity of uncertainty, is the ground of many problems.

As we become aware of how much we know and how much we do not know, as information and uncertainty themselves become the focus of our concern, we begin to see our problems as centring on the issue of complexity.

Thus, ambiguity due to fuzziness of information is measured by fuzzy entropy, whereas vagueness due to information available in terms of a probability distribution is measured by probabilistic entropy.

Entropy theory was developed to measure uncertainty of a probability distribution and therefore it was natural for researchers in fuzzy set theory to make use of entropy concepts in measuring fuzziness.

The entropy of a fuzzy set A having n support points was characterised by Kauffman [8] in the following way:

$$H\_k(A) = -\frac{1}{\log n} \sum\_{i=1}^n \phi\_A(\mathbf{x}\_i) \log \phi\_A(\mathbf{x}\_i) \tag{15}$$
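A Python sketch of Kauffman's measure (15), which first normalises the memberships via Eq. (13); the membership vectors are illustrative:

```python
import math

def H_k(a):
    """Eq. (15): Kauffman's entropy of the normalised vector phi from Eq. (13)."""
    total = sum(a)
    phi = [m / total for m in a]
    return -sum(p * math.log(p) for p in phi if p > 0) / math.log(len(a))

assert abs(H_k([0.4, 0.4, 0.4, 0.4]) - 1.0) < 1e-9   # equal memberships: phi is uniform
assert H_k([0.8, 0.1, 0.05, 0.05]) < 1.0             # skewed phi stays below the maximum
```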

Deluca and Termini [9] suggested the measure


$$H\_D(A) = -\frac{1}{n\log 2} \sum\_{i=1}^{n} \left[ \mu\_A(\mathbf{x}\_i) \log \mu\_A(\mathbf{x}\_i) + \left(1 - \mu\_A(\mathbf{x}\_i)\right) \log \left(1 - \mu\_A(\mathbf{x}\_i)\right) \right] \tag{16}$$

Bhandari and Pal [10] suggested the following measure:

$$H\_e(A) = \frac{1}{n\left(\sqrt{e} - 1\right)} \sum\_{i=1}^{n} \left[ \mu\_A(\mathbf{x}\_i) e^{1 - \mu\_A(\mathbf{x}\_i)} + \left(1 - \mu\_A(\mathbf{x}\_i)\right) e^{\mu\_A(\mathbf{x}\_i)} - 1 \right] \tag{17}$$
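Both measures (16) and (17) can be checked numerically in Python; they vanish on crisp sets and are normalised to peak at 1 on the most fuzzy set (the membership vectors below are illustrative):

```python
import math

def H_D(a):
    """Eq. (16): De Luca and Termini's entropy, normalised by n log 2."""
    def term(m):
        if m in (0.0, 1.0):
            return 0.0
        return m * math.log(m) + (1 - m) * math.log(1 - m)
    return -sum(term(m) for m in a) / (len(a) * math.log(2))

def H_e(a):
    """Eq. (17): the exponential fuzzy entropy."""
    s = sum(m * math.exp(1 - m) + (1 - m) * math.exp(m) - 1 for m in a)
    return s / (len(a) * (math.sqrt(math.e) - 1))

crisp      = [0.0, 1.0, 0.0]
most_fuzzy = [0.5, 0.5, 0.5]

assert abs(H_D(crisp)) < 1e-12 and abs(H_e(crisp)) < 1e-12
assert abs(H_D(most_fuzzy) - 1.0) < 1e-9   # both normalised to peak at 1
assert abs(H_e(most_fuzzy) - 1.0) < 1e-9
```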

Some other measures of fuzzy entropy are:

1. Corresponding to Sharma and Taneja's [11] measure of entropy of degree α, β

$$H\_{\alpha}^{\beta}(P) = \frac{1}{\beta - \alpha} \left[ \sum\_{i=1}^{n} p\_i^{\alpha} - \sum\_{i=1}^{n} p\_i^{\beta} \right], \quad \alpha \neq \beta \tag{18}$$

we get the measure

$$H\_{\alpha}^{\beta}(A) = \frac{1}{\beta - \alpha} \sum\_{i=1}^{n} \left[ \mu\_A^{\alpha}(\mathbf{x}\_i) + \left(1 - \mu\_A(\mathbf{x}\_i)\right)^{\alpha} - \mu\_A^{\beta}(\mathbf{x}\_i) - \left(1 - \mu\_A(\mathbf{x}\_i)\right)^{\beta} \right] \tag{19}$$

where either α ≥ 1, β ≤ 1 or α ≤ 1, β ≥ 1 and α = β only if both are unity.

2. Corresponding to Kapur's measure of entropy of degree α, β

$$H\_{\alpha}^{\beta}(P) = \frac{1}{\alpha + \beta - 2} \left[ \sum\_{i=1}^{n} p\_i^{\alpha} + \sum\_{i=1}^{n} p\_i^{\beta} - 2 \right] \tag{20}$$

we get the measure

$$H\_{\alpha}^{\beta}(A) = \frac{1}{\alpha + \beta - 2} \sum\_{i=1}^{n} \left[ \mu\_A^{\alpha}(\mathbf{x}\_i) + \left(1 - \mu\_A(\mathbf{x}\_i)\right)^{\alpha} + \mu\_A^{\beta}(\mathbf{x}\_i) + \left(1 - \mu\_A(\mathbf{x}\_i)\right)^{\beta} - 2 \right] \tag{21}$$
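A quick numerical sketch in Python of the fuzzy measures (19) and (21); the parameter values and membership vectors are illustrative only:

```python
def H_st(a, alpha, beta):
    """Eq. (19): fuzzy analogue of Sharma and Taneja's entropy of degree (alpha, beta)."""
    return sum(
        m**alpha + (1 - m)**alpha - m**beta - (1 - m)**beta
        for m in a
    ) / (beta - alpha)

def H_kp(a, alpha, beta):
    """Eq. (21): fuzzy analogue of Kapur's entropy of degree (alpha, beta)."""
    return sum(
        m**alpha + (1 - m)**alpha + m**beta + (1 - m)**beta - 2
        for m in a
    ) / (alpha + beta - 2)

crisp = [0.0, 1.0, 1.0]
A     = [0.2, 0.6, 0.9]
comp  = [1 - m for m in A]           # the complement set

# Crisp sets carry no fuzziness under either measure.
assert H_st(crisp, 2.0, 0.5) == 0.0
assert H_kp(crisp, 2.0, 0.5) == 0.0
# A set and its complement are equally fuzzy.
assert abs(H_st(A, 2.0, 0.5) - H_st(comp, 2.0, 0.5)) < 1e-9
assert abs(H_kp(A, 2.0, 0.5) - H_kp(comp, 2.0, 0.5)) < 1e-9
```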

3. Corresponding to Kapur's [12] measure of entropy

$$H\_a(P) = -\sum\_{i=1}^{n} p\_i \log p\_i + \frac{1}{a} \sum\_{i=1}^{n} \left(1 + a p\_i\right) \log \left(1 + a p\_i\right) - \frac{1}{a} (1 + a) \log (1 + a), \quad a > 0 \tag{22}$$

we get the measure

$$\begin{aligned} H\_a(A) = & -\sum\_{i=1}^{n} \left[ \mu\_A(\mathbf{x}\_i) \log \mu\_A(\mathbf{x}\_i) + \left(1 - \mu\_A(\mathbf{x}\_i)\right) \log \left(1 - \mu\_A(\mathbf{x}\_i)\right) \right] \\\\ & + \frac{1}{a} \sum\_{i=1}^{n} \left[ \left(1 + a\mu\_A(\mathbf{x}\_i)\right) \log \left(1 + a\mu\_A(\mathbf{x}\_i)\right) + \left(1 + a - a\mu\_A(\mathbf{x}\_i)\right) \log \left(1 + a - a\mu\_A(\mathbf{x}\_i)\right) \right] \\\\ & - \frac{n}{a} (1 + a) \log (1 + a) \end{aligned} \tag{23}$$

4. Corresponding to Kapur's [4] measure of entropy of degree α and type β

$$H^{\alpha,\beta}(P) = \frac{1}{\beta - \alpha} \log \frac{\sum\_{i=1}^{n} p\_i^{\alpha}}{\sum\_{i=1}^{n} p\_i^{\beta}}, \quad \alpha \neq \beta \tag{24}$$

we get the measure

$$H^{\alpha,\beta}(A) = \frac{1}{\beta - \alpha} \log \frac{\sum\_{i=1}^{n} \left[ \mu\_A^{\alpha}(\mathbf{x}\_i) + \left(1 - \mu\_A(\mathbf{x}\_i)\right)^{\alpha} \right]}{\sum\_{i=1}^{n} \left[ \mu\_A^{\beta}(\mathbf{x}\_i) + \left(1 - \mu\_A(\mathbf{x}\_i)\right)^{\beta} \right]}, \quad \alpha \geq 1, \beta \leq 1 \text{ or } \alpha \leq 1, \beta \geq 1 \tag{25}$$

Kosko [13] introduced fuzzy entropy and conditioning. Pal and Pal [14] gave object background segmentation using a new definition of entropy. Parkash [15] proposed new measures of weighted fuzzy entropy and their applications for the study of the maximum weighted fuzzy entropy principle. Parkash and Gandhi [16] suggested new generalised measures of fuzzy entropy and their properties. Parkash and Singh [17] gave a characterization of useful information-theoretic measures. Taneja [18] introduced generalised information measures and their applications. Taneja and Tuteja [19] gave a characterization of the quantitative-qualitative measure of relative information. Tuteja [20] introduced a characterization of nonadditive measures of relative information and accuracy. Tuteja and Hooda [21] proposed a generalised useful information measure of type α and degree β. Tuteja and Jain [22, 23] gave a characterization of relative useful information having utilities as monotone functions and an axiomatic characterization of relative useful information. Tahayori [24] presented a universal methodology for generating an interval type 2 fuzzy set membership function from a collection of type 1 fuzzy sets. Kumar and Bajaj [25] introduced NTV metric based entropies of interval-valued intuitionistic fuzzy sets and their applications in decision making.

Mishra [26] introduced two exponential fuzzy information measures and characterised them axiomatically. To show the effectiveness of the proposed measures, they are compared with the existing measures. Further, two fuzzy discrimination and symmetric discrimination measures are defined and their validity is checked. Important properties of the new measures are studied, and their applications in pattern recognition and in the diagnosis problem of crop disease are discussed. Hooda and Mishra [27] gave two sine and cosine trigonometric measures of fuzzy information and obtained new measures of trigonometric fuzzy information.

3. A new parametric measure of fuzzy information involving two parameters α and β

A new generalised fuzzy information measure of order α and type β has been suggested and its necessary and required properties are examined. Thereafter, its validity is also verified. Also, the monotonic behaviour of the fuzzy information measure of order α and type β has been conferred.

The generalised measure of fuzzy information of order α and type β is given by,

$$H\_{\alpha}^{\beta}(A) = \frac{1}{(1 - \alpha)\beta} \sum\_{i=1}^{n} \left[ \left( \mu\_A(\mathbf{x}\_i)^{\alpha \mu\_A(\mathbf{x}\_i)} + \left(1 - \mu\_A(\mathbf{x}\_i)\right)^{\alpha \left(1 - \mu\_A(\mathbf{x}\_i)\right)} \right)^{\beta} - 2^{\beta} \right] \tag{26}$$

where α > 0, α ≠ 1, β ≠ 0. We have supposed that $0^{0 \cdot \alpha} = 1$, and we study the following properties:

3.1. Properties of $H\_{\alpha}^{\beta}(A)$

Property 1: $H\_{\alpha}^{\beta}(A)$ is minimum if A is a non-fuzzy set.

For $\mu\_A(\mathbf{x}\_i) = 0$ we have $H\_{\alpha}^{\beta}(A) = 0$, and for $\mu\_A(\mathbf{x}\_i) = 1$ we also have $H\_{\alpha}^{\beta}(A) = 0$.

Property 2: $H\_{\alpha}^{\beta}(A)$ is maximum if A is the most fuzzy set.

We have,

$$\frac{\partial H\_{\alpha}^{\beta}(A)}{\partial \mu\_A(\mathbf{x}\_i)} = \frac{\alpha}{1 - \alpha} \left[ \mu\_A(\mathbf{x}\_i)^{\alpha \mu\_A(\mathbf{x}\_i)} + \left(1 - \mu\_A(\mathbf{x}\_i)\right)^{\alpha \left(1 - \mu\_A(\mathbf{x}\_i)\right)} \right]^{\beta - 1} \left[ \mu\_A(\mathbf{x}\_i)^{\alpha \mu\_A(\mathbf{x}\_i)} \left(1 + \log \mu\_A(\mathbf{x}\_i)\right) - \left(1 - \mu\_A(\mathbf{x}\_i)\right)^{\alpha \left(1 - \mu\_A(\mathbf{x}\_i)\right)} \left(1 + \log \left(1 - \mu\_A(\mathbf{x}\_i)\right)\right) \right]$$

Taking $\partial H\_{\alpha}^{\beta}(A) / \partial \mu\_A(\mathbf{x}\_i) = 0$, which is possible if $\mu\_A(\mathbf{x}\_i) = 1 - \mu\_A(\mathbf{x}\_i)$, that is, if $\mu\_A(\mathbf{x}\_i) = \frac{1}{2}$. Now we have $\partial^2 H\_{\alpha}^{\beta}(A) / \partial \mu\_A^2(\mathbf{x}\_i) < 0$ at $\mu\_A(\mathbf{x}\_i) = \frac{1}{2}$. Thus, $H\_{\alpha}^{\beta}(A)$ is maximum at $\mu\_A(\mathbf{x}\_i) = \frac{1}{2}$.

Property 3: $H\_{\alpha}^{\beta}(A) \geq 0$, i.e., $H\_{\alpha}^{\beta}(A)$ is nonnegative.
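A Python sketch of the generalised measure (26); the convention $0^{0 \cdot \alpha} = 1$ is supplied automatically, since Python evaluates `0.0 ** 0.0` as `1.0`, and the parameter values are illustrative only:

```python
def H_ab(a, alpha, beta):
    """Eq. (26): generalised fuzzy information of order alpha and type beta
    (alpha > 0, alpha != 1, beta != 0)."""
    def core(m):
        t1 = m ** (alpha * m)                  # 0.0 ** 0.0 == 1.0 in Python
        t2 = (1 - m) ** (alpha * (1 - m))
        return (t1 + t2) ** beta - 2 ** beta
    return sum(core(m) for m in a) / ((1 - alpha) * beta)

crisp      = [0.0, 1.0, 1.0, 0.0]
most_fuzzy = [0.5, 0.5]

# Property 1: minimum (zero) on crisp sets.
assert H_ab(crisp, 2.0, 1.0) == 0.0
# Properties 2 and 3: positive, and largest at membership 1/2.
assert H_ab(most_fuzzy, 2.0, 1.0) > H_ab([0.3, 0.7], 2.0, 1.0) > 0.0
```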
