**1. Introduction**

#### **1.1 Background**

Communication is not possible without some channel. Communication between humans is characterized in two ways: multiplicity and modality. Multiplicity refers to the availability of more than one channel for communication, while modality refers to the way human senses are used to perceive signals from the outer world [1]. Speech and vocal information are communicated through the auditory channel, whereas facial expression is communicated via the visual channel. Organs such as the nose, ears, and skin provide additional modalities for communication. Multi-channel communication is highly robust; the failure of one channel can be compensated for by another.

Facial expression provides an important behavioral measure for studies of emotion, cognitive processes, and social interaction [2]. For a human being, recognizing faces and facial expressions is a trivial task: we discriminate between faces with almost no effort, in a fraction of a second. Teaching a machine to perform the same task, however, is considerably more challenging.

Expressions are not mere changes in muscle position; rather, they are the result of a complex psychophysiological process. The psychological process of thoughts emerging in the mind is followed by a physiological process in which those thoughts are rendered as expressions on the face through muscle deformation. The muscle movement lasts for only a brief period, about 250 ms to 5 s. Hence, recognizing expressions from spontaneous images is harder than from posed still images [3].

Recognition of a pure expression is difficult owing to the wide range of possible expressions, and the same expression may also occur at different intensities. Schmidt and Cohn [4] identified 18 distinct classes of the expression smile, and the intensity of an expression can vary from subtle to peak.
