**1. Introduction**

During the last decade, information about the emotional state of users has become increasingly important in computer-based technologies. Several emotion recognition methods and their applications have been addressed, including facial expression and microexpression recognition, vocal feature recognition, and electrophysiology-based systems [1]. More recently,

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

the integration of emotion forecasting systems in ambient-assisted living paradigms has been considered [2]. Concerning the origin of the signal sources, the signals used can be divided into two categories: those originating from the peripheral nervous system (e.g., heart rate, electromyogram, galvanic skin resistance) and those originating from the central nervous system (e.g., the electroencephalogram (EEG)). Traditionally, EEG-based technology has been used in medical applications, but nowadays it is spreading to other areas such as entertainment [3] and brain-computer interfaces (BCI) [4]. With the emergence of wearable and portable devices, a vast amount of digital data is produced, and there is increasing interest in the development of machine-learning software applications using EEG signals. For the efficient manipulation of these high-dimensional data, various soft computing paradigms have been introduced for either feature extraction or pattern recognition tasks. Nevertheless, up to now, as far as the authors are aware, few research works have focused on the criteria for selecting the most relevant features linked to emotions, with most studies relying on basic statistics.

It is not easy to compare different emotion recognition systems, since they differ in how emotions are elicited and in the underlying model of emotions (e.g., a discrete or dimensional model) [5]. According to the dimensional model of emotions, psychologists represent emotions in a 2D valence/arousal space [6]. While valence refers to the pleasure or displeasure that a stimulus causes, arousal refers to the level of alertness elicited by the stimulus (see **Figure 1**). Sometimes an additional category labeled *neutral* is included, represented in the region close to the origin of the 2D valence/arousal space. Some studies concentrate on one dimension of the space, such as identifying the arousal intensity or the valence (low/negative versus high/positive), possibly with a third, neutral class. Recently, it was pointed out that data analysis competitions, similar to those in the brain-computer interface community, could encourage researchers to disseminate and compare their methodologies [7].
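As a concrete illustration, discrete class labels can be derived from positions in the 2D valence/arousal space. The sketch below assumes a 1-9 self-assessment rating scale and an illustrative radius for the neutral region near the origin; neither value is prescribed by this chapter.

```python
# Sketch: mapping a (valence, arousal) rating pair to a discrete class.
# The 1-9 scale midpoint and the neutral radius are illustrative
# assumptions, not values taken from this chapter.

def label_emotion(valence, arousal, midpoint=5.0, neutral_radius=1.0):
    """Map a (valence, arousal) rating pair to a discrete emotion class."""
    dv, da = valence - midpoint, arousal - midpoint
    # Stimuli close to the origin of the 2D space are treated as neutral.
    if (dv ** 2 + da ** 2) ** 0.5 <= neutral_radius:
        return "neutral"
    v = "high_valence" if dv > 0 else "low_valence"
    a = "high_arousal" if da > 0 else "low_arousal"
    return f"{v}/{a}"
```

A study focusing on a single dimension would simply keep only the valence (or arousal) half of the returned label.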

Normally, emotions can be elicited by different procedures, for instance by presenting an external stimulus (picture, sound, word, or video), by confronting a concrete interaction or situation [8], or simply by asking subjects to imagine different kinds of emotions. Concerning external visual stimuli, one may resort to standard databases such as the widely used international affective picture system (IAPS) collection [7, 9] or the DEAP database [10], which also includes physiological signals recorded during multimedia stimulus presentation. As in any other classification system, in physiology-based recognition systems one must establish which signals will be used, extract relevant features from these input signals, and finally use them to train a classifier. However, as often occurs in biomedical data applications, the initial feature vector dimension can be very large compared to the number of examples available to train (and evaluate) the classifier.
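The feature-extraction step described above often amounts to computing spectral power in the classical EEG bands for each trial. The following minimal sketch shows one way to do this with a plain FFT periodogram; the sampling rate and band boundaries are common choices in the literature, not values prescribed by this chapter.

```python
import numpy as np

# Illustrative EEG frequency bands (Hz); boundaries vary across studies.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(trial, fs=128.0):
    """Return mean spectral power per band for one single-channel EEG trial.

    `trial` is a 1D array of samples; `fs` is the sampling rate in Hz.
    """
    freqs = np.fft.rfftfreq(len(trial), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(trial)) ** 2 / len(trial)  # simple periodogram
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}
```

Applied to every channel of a multi-electrode recording, such a scheme quickly yields feature vectors whose dimension far exceeds the number of trials, which is exactly the imbalance discussed above.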

In this work, we demonstrate the suitability of incorporating a wrapper strategy for feature elimination to improve classification accuracy and to identify the most relevant EEG features (according to the standard 10/20 system). We do so using spectral features related to EEG synchronization, which have never been applied before for similar purposes. Two learning algorithms integrating the classification block are compared: random forest and support vector machine (SVM). In addition, our automatic valence recognition system has been tested in both intrasubject and intersubject modalities, whose inputs are, respectively, single trials (signal segments following stimulus presentation) from a single participant and ensemble-averaged signals computed for each stimulus category and participant.

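The general shape of such a wrapper strategy can be sketched with scikit-learn's recursive feature elimination, which repeatedly refits a classifier and discards the weakest features. The synthetic data, hyperparameters, and choice of RFE below are illustrative assumptions; the chapter's own wrapper may use a different elimination criterion.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a trials-by-features matrix of spectral features.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))           # 120 trials x 40 spectral features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # valence label driven by 2 features

# Wrapper step: recursively eliminate features using a linear SVM's weights.
selector = RFE(SVC(kernel="linear"), n_features_to_select=5).fit(X, y)
X_sel = X[:, selector.support_]

# Compare the two classifiers on the reduced feature set.
for name, clf in [("SVM", SVC(kernel="linear")),
                  ("Random forest", RandomForestClassifier(random_state=0))]:
    acc = cross_val_score(clf, X_sel, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```

Because the label here depends only on the first two features, the wrapper should retain them, illustrating how the strategy both prunes the feature vector and points to the most relevant inputs.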

**Figure 1.** Ratings of the pictures selected from international affective picture system for carrying out the experiment. L: low rating; H: high rating.
