**7. Applications and future directions**

One application of these studies is to build artificial intelligence systems that decode the brain signals underlying socioemotional understanding. Most studies use acted (posed) vocal expressions as testing materials, produced by professional actors, public speakers, or amateurs to portray an intended emotion. In real-life communication, communicators may use such emotional poses to achieve certain communicative goals, and some research purposes, for example the study of cultural display rules in vocal expression, may be specifically favored by posed stimuli. Nevertheless, research on naturalistic, ecological, observation-based stimuli is highly recommended. A future direction is therefore to examine how the brain differentiates "real" from "fake" vocal expressions by looking at neurophysiological responses.

Another application of using EEG signals to study vocal emotion decoding is to test the effectiveness of the speech-coding strategies used in hearing devices for deaf listeners when they distinguish emotions via prosody-specific features of language [33, 34]. In Agrawal et al. [33], statements vocoded with different speech-encoding strategies differentiated the P200 for happy expressions, as well as early (0–400 ms) and late (600–1200 ms) gamma-band power increases for vocal expressions of happiness, anger, and neutrality. In Agrawal et al. [34], the P200 was differentiated by the simulation strategies for all emotion types and was larger for happiness than for the other emotions across speech-encoding strategies. These studies emphasized the importance of vocoded simulations for understanding which prosodic cues cochlear implant users may be relying on to decode emotion in the voice. Further studies will also draw upon the merits of multimodal recording and synchronization of neurophysiological and peripheral physiological responses during vocal expression decoding, including eye movements, pupil dilation, and heart rate tracking, to understand how different systems support the understanding of social and emotional information in speech and vocalizations.
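The early versus late gamma-band comparison mentioned above can be illustrated with a minimal analysis sketch. The code below is not the pipeline used in the cited studies; the sampling rate, gamma band limits (30–50 Hz), and the synthetic test epoch are illustrative assumptions, with only the two time windows (0–400 ms and 600–1200 ms) taken from the text:

```python
# Sketch: comparing early vs. late gamma-band power in a single-channel
# EEG epoch, in the spirit of the time windows reported in the text.
# Sampling rate, band limits, and the synthetic data are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500  # sampling rate in Hz (assumed)

def gamma_power(epoch, t0, t1, band=(30.0, 50.0), fs=FS):
    """Mean gamma-band power of `epoch` between t0 and t1 seconds,
    with stimulus onset at sample index 0."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epoch)        # zero-phase band-pass filter
    i0, i1 = int(t0 * fs), int(t1 * fs)
    return np.mean(filtered[i0:i1] ** 2)    # mean squared amplitude

# Synthetic 1.5 s epoch: background noise plus a 40 Hz burst injected
# into the late (600-1200 ms) window only.
rng = np.random.default_rng(0)
t = np.arange(0, 1.5, 1 / FS)
epoch = rng.normal(0.0, 1.0, t.size)
late = (t >= 0.6) & (t < 1.2)
epoch[late] += 3.0 * np.sin(2 * np.pi * 40 * t[late])

early_pow = gamma_power(epoch, 0.0, 0.4)   # early window, 0-400 ms
late_pow = gamma_power(epoch, 0.6, 1.2)    # late window, 600-1200 ms
print(early_pow < late_pow)                # burst drives late power up
```

In practice such window-wise power values would be computed per trial and per condition (e.g., happy, angry, neutral vocoded statements) and then compared statistically across participants.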
