Research article
First published online September 18, 2016

Emotional states recognition, implementing a low computational complexity strategy

Abstract

This article describes a methodology for recognizing emotional states through the analysis of electroencephalography (EEG) signals, developed with the premise of reducing the associated computational burden. The strategy reduces the amount of data that must be processed by establishing a relationship between electrodes and Brodmann regions, so that electrodes that do not provide relevant information to the identification process can be discarded. Design suggestions for carrying out the pattern recognition stage with low computational complexity neural networks and support vector machines are also presented; these classifiers achieve mean recognition rates of up to 90.2%.

Introduction

The recognition of emotional states through a computer has been a widely studied topic in affective computing since Rosalind W Picard proposed the basis for analyzing the physiological responses of an emotion.1 However, most of these studies have focused on the peripheral reactions produced by emotions, such as vocal or facial expressions. Some researchers note that these responses can be deliberately manipulated by users and suggest biomedical signals as more reliable data sources, in line with William James's theory:2 emotional states produce disturbances in one or more basic human functions, such as changes in heart rate, muscle responses or temperature, when a person faces a real or an imaginary emotional stimulus. Many biomedical signals could be evaluated to recognize emotional states, such as heart rate or body temperature variations; however, our interest focuses on signals originated by brain activity, particularly electroencephalography (EEG), because this technique does not require extensive technical knowledge to implement and its implementation costs are relatively low.

Related work

Some of the most outstanding work in emotion recognition through EEG signal analysis was presented by Dr M Murugappan of Perlis University in Malaysia and by Dr Sander Koelstra at Queen Mary University of London in the UK. Dr Murugappan proposed a mathematical model to infer emotional states from EEG analysis, and Dr Koelstra developed the DEAP database, an extensive collection of physiological signal records of emotional stimulation processes; both works demonstrate the feasibility of establishing a relationship between the electrical activity of the brain cortex and emotional states.3–5 A summary of related work is presented in Table 1.
Table 1. Related work in classification and identification techniques of emotional states and associated identification rate (IR).
Technique IR (%) Reference
KNN, LDA 81.25 Murugappan (2011)4
MPC 64.00 Lin et al. (2007)6
KNN 82.27 Heraz et al. (2007)7
SVM 66.70 Takahashi (2004)8
SVM 93.50 Li and Lu (2009)9
SVM 77.80 Rozgic et al. (2012)10
NN 43.14 Attabi and Dumouchel (2012)11
NN 60.00 Razak et al. (2005)12
NN 93.30 Gunler et al. (2012)13
KNN: k-nearest neighbors algorithm; LDA: linear discriminant analysis; MPC: multilayer perceptron classifier; SVM: support vector machines; NN: neural networks.

Emotions

Disambiguation

Words such as affect, feeling and emotion are commonly used synonymously; however, each of them has a distinct meaning, both in its etymology and in the physical and mental reactions it denotes.14–16
Emotionsi are the manifested reactions to those affective conditions that, due to their intensity, move us to some kind of action, with slight or intense, concomitant or subsequent repercussions upon several organs, which can set up partial or total blocking of logical reasoning.
Affect could be defined as a grouping of psychic phenomena manifesting under the form of emotions, feelings or passions, always followed by impressions of pleasure or pain, satisfaction or discontentment, like, dislike, joy or sorrow.17,ii
Feelings are seen as affective states of longer duration, causing less intense experiences, with fewer repercussions upon organic functions and less interference with reasoning and behavior.iii

Emotion theories

Over the centuries, philosophers, physicians and psychologists have studied affectivity phenomena by questioning their origin, their role in psychic life, their action in favoring or hindering adaptation and their neuro-physiological concomitants;18 however, as established by Scherer, "even the simple question of what emotions are hardly ever gets the same answer".19
There are classical and antagonistic theories of emotions:
Classical theories are supported by the Darwinian theory of emotions, which states that affective reactions are innate patterns designed to orient behavior and promote adaptation,18,20,21 and by recent theories suggesting that emotions are complex phenomena initiated by a central process as a result of internal or external causes, which can be observed as an organismic alteration.19,22–29
Antagonistic theories are led by the Claparede theory,14 which defines emotions as useless, non-adaptive and harmful phenomena: according to this theory, emotions are characterized by a sudden disruption of the affective balance (mostly in short episodes), with slight or intense, concomitant or subsequent repercussions upon several organs, which can set up partial or total blocking of logical reasoning in the affected subject.
It is the definition provided by Scherer19 that best fits an engineering task, defining an emotion as an episode of interrelated and synchronized changes in most of the organismic subsystems,iv,v in response to the evaluation of an external or internal stimulus event.

Data sources

The lack of a standardized database is one of the main problems in carrying out a study of the physiological responses to an emotional state, since standardized databases such as the International Affective Picture System (IAPS) or the International Affective Digitized Sounds (IADS) can only be implemented to perform the stimulation processes. This implies that if two different research groups perform experimental setups with the same stimuli, and even the same participants, the results will vary due to the environmental conditions.30,31
However, there are some data collections, such as BU-3DFE, PhysioNet, DEAP and the iBUG project, that can help with this problem.
BU-3DFE is a collection of several physiological signal records associated with a wide range of emotional expressions.32
PhysioNet is a collection of physiological signals, time series and images, constructed to perform behavior analysis.33
The iBUG project is a collection of biometric data associated with affective behaviors.34
DEAP (Database for Emotion Analysis using Physiological Signals) is a wide collection of biosignals generated by several specialized experimental setups under arousal and valence stimuli.5

Database for emotion analysis using physiological signals

The DEAP database is a large collection of physiological signals directly associated with emotional stimuli: a multi-modal dataset for the analysis of human affective states, in which the EEG and other peripheral physiological signals of 32 participants were recorded as each watched 40 one-minute excerpts of music videos. Participants also rated each video in terms of the levels of arousal, valence, like/dislike, dominance and familiarity they experienced.5
This is the database that we implemented for the experimental process presented in this paper, because, to the best of our knowledge, it is the most comprehensive and reliable source of data.

Data bounding methodology

This work proposes a methodology that reduces the amount of data to be processed by defining a model that establishes a correlation criterion between emotional activity and its responses in the brain cortex, excluding regions that do not contribute significantly to the recognition process.

The Brodmann regions

Dr Korbinian Brodmann sub-divided the brain cortex into regions that appeared to have micro-structural differences and associated these regions with specific cognitive functions, such as motor processing, speech, hearing or sight. Since our work focuses on analyzing an emotional process evoked by audio-visual stimuli, we propose that only the electrodes strongly related to the audition, visual, sensory and motorvi regionsvii be considered for the digital signal processing task (see Table 2).18,35–40
Table 2. Cognitive processes, their associated Brodmann regions and the corresponding regions of the cerebral cortex.
Processes Brodmann regions Cortex regions
Visual 17, 18, 19, 20, 21, 37 Temporal and occipital lobes
Audition 22, 41, 42 Temporal lobe
Sensory 1, 2, 3, 4, 5, 7, 22, 37, 39, 40 Parietal lobe
Motor 4, 6, 44, 9, 10, 11, 45, 46, 47 Temporal and frontal lobes
• The audition processes are related to electrodes T7, T8, F7, F8, P7 and P8.
• The visual processes are related to electrodes O1 and O2.
• The sensory processes are related to electrodes CP5, CP1, CP2 and CP6.
• The motor processes are related to electrodes FC1, FCz and FC.

Selected electrodes

Only 15 electrodes were set as active elements, while 22 were set as non-active elements, as can be observed in Figure 1 and Table 3. Considering a nominal sampling rate of 512 Hz, this provides a reduction of 11,264 samples per second (22 excluded electrodes × 512 samples/s), which is a significant data reduction and, consequently, a reduction of the computational burden.viii
Table 3. Electrodes associated with the Brodmann regions.
Associated regions Electrodes
Frontal Temporal 7 F7
Frontal Cortex 5 FC5
Frontal Cortex 5 C5
Frontal Cortex 6 C6
Frontal Temporal 8 F8
Parietal Cortex 5 CP5
Parietal Occipital 4 P4
Parietal Occipital 8 P8
Parietal Occipital 7 P7
Parietal Occipital 3 P3
Parietal Cortex 6 CP6
Occipital 1 O1
Occipital 2 O2
Temporal 7 T7
Temporal 8 T8
Figure 1. Graphical representation of the active and non-active electrodes (green electrodes are considered active; electrodes marked in black are non-active).

Signal conditioning

A wide variety of phenomena can affect the performance of an analysis of physiological data, from the many kinds of noise that contaminate these signals to the large amount of resources required to process them.

Noise filtering

The Laplacian filter described by Murugappan4 (equation (1)) was implemented to mitigate the fact that EEG signals are naturally contaminated with noise and artifacts (i.e. eye movements (EOG), muscular activity (EMG), cardiac activity (ECG) and kinetic artifacts)
$$x_{\text{new}}(t) = x(t) - \frac{1}{N_E}\sum_{i=1}^{N_E} x_i(t) \tag{1}$$
where $x_{\text{new}}$ is the filtered signal, $x(t)$ the raw signal and $N_E$ the number of neighbor electrodes.
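As an illustration, a minimal Python sketch of this re-referencing step, assuming each channel is available as a NumPy array and that the set of neighbor electrodes has already been chosen (the channel names below are hypothetical):

```python
import numpy as np

def laplacian_filter(x, neighbors):
    """Laplacian re-reference of equation (1): subtract the mean of the
    N_E neighbor channels from the channel of interest."""
    # neighbors: array of shape (N_E, n_samples); x: array of shape (n_samples,)
    return x - neighbors.mean(axis=0)

# Hypothetical usage with a dict of equally long channel arrays:
# x_new = laplacian_filter(eeg["T7"], np.vstack([eeg["C5"], eeg["F7"], eeg["P7"]]))
```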

Signal bounding

A band-pass filter with cut-off frequencies of 0.5 Hz and 47 Hz was implemented to exclude all frequencies that are not associated with the brain rhythm model: delta (0.2–3.5 Hz), theta (3.5–7.5 Hz), alpha (7.5–13 Hz), beta (13–28 Hz) and gamma (>28 Hz).41,42
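A sketch of this band-pass stage, assuming a zero-phase Butterworth design (the paper does not specify the filter family or order) and the nominal 512 Hz sampling rate:

```python
from scipy.signal import butter, filtfilt

def bandpass(signal, fs=512.0, lo=0.5, hi=47.0, order=4):
    """Zero-phase Butterworth band-pass keeping 0.5-47 Hz (delta through gamma)."""
    nyq = fs / 2.0
    b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
    return filtfilt(b, a, signal)
```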

Blind source separation

A blind source separation (BSS) algorithm was implemented to remove redundancy between the active elements while preserving the information of the non-active elements. These cases are generally treated as a multi-channel problem, in which the joint distribution of the signal $y(n)$ factorizes over its components $y_i(n)$ as
$$p_Y(y(n)) = \prod_{i=1}^{m} p_{y_i}(y_i(n)) \quad \forall n \tag{2}$$
where $p_Y$ is the joint probability distribution, $p_{y_i}(y_i(n))$ are the marginal distributions and $m$ is the number of predefined independent components. This separation is applied to each element of our gating region as shown in Figure 2, where $S_i = e_i + \sum_{m=1}^{n} r_m$ is the sum of overlapped signals observed when reading a specific electrode and $S_{b_i} = e_i + \lambda_i$ represents the information of a specific electrode without the redundancy of the active electrodes.
Figure 2. Process model of the implemented blind source separation by independent component analysis.
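Figure 2 identifies the BSS stage as independent component analysis, but the paper does not name a specific algorithm; the sketch below uses FastICA from scikit-learn as one plausible choice:

```python
from sklearn.decomposition import FastICA

def separate_sources(X):
    """Estimate independent components of the active-electrode block.

    X -- array of shape (n_samples, n_active_channels), band-passed EEG.
    Returns an array of the same shape, one column per estimated source.
    """
    ica = FastICA(n_components=X.shape[1], random_state=0)
    return ica.fit_transform(X)
```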

Feature extraction

A common question about the implementation of the wavelet transform is: why not use traditional Fourier methods? The answer is that there are two important differences between Fourier and wavelet analysis.
The first is that the Fourier basis functions are localized in frequency but not in time (a small change in frequency content affects all parts of the time domain), whereas the wavelet transform provides resolution in frequency (through dilations) and in time (through translations).
The second is that many kinds of functions can be represented by wavelets in a more compact form (e.g. functions with discontinuities and sharp spikes usually need fewer basis functions when analyzed with wavelets). Because of this, large data sets can be transformed easily and quickly by the discrete wavelet transform (the counterpart of the discrete Fourier transform) by encoding the data as wavelet coefficients, which implies a higher processing speed: the computational complexity of the fast Fourier transform is $O(n \log_2 n)$, while for the wavelet transform it is reduced to $O(n)$.43
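For example, a multilevel discrete wavelet decomposition of one 60 s channel sampled at 512 Hz can be computed in linear time with PyWavelets; the db4 mother wavelet and the five-level depth are assumptions, since the paper does not state its wavelet settings:

```python
import numpy as np
import pywt

fs = 512                           # nominal DEAP sampling rate (Hz)
x = np.random.randn(60 * fs)       # placeholder for one preprocessed channel
# wavedec returns [cA5, cD5, cD4, cD3, cD2, cD1]: one approximation band and
# one detail band per level, computed in O(n) time.
coeffs = pywt.wavedec(x, "db4", level=5)
```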

Feature selection

Each level of the discrete wavelet transform is calculated by passing the signal through a series of filters: the samples pass through a low-pass filter $g$ and, simultaneously, through a high-pass filter $h$, asix
$$y[n] = (x * g)[n] = \sum_{k=-\infty}^{\infty} x[k]\, g[n-k] \tag{3}$$
Also, the energy content in a specified region of the signal $f(t)$ can be decomposed as
$$f(t) = \sum_j \sum_k d_{j,k}\, \psi_{j,k}(t) = \sum_j f_j(t) \tag{4}$$
where $j, k \in \mathbb{Z}$, $\psi(t)$ is the mother wavelet and the coefficients $d_{j,k}$ represent the inner product (equation (5))
$$d_{j,k} = \langle f(t), \psi_{j,k}(t) \rangle = \frac{1}{\sqrt{2^j}} \int f(t)\, \psi(2^{-j} t - k)\, dt \tag{5}$$
$d_{j,k}^2$ represents the energy of the detail coefficientsx of $f$ at level $j$, and the classification input array can be obtained by
$$E_j = \sum_k d_{j,k}^2 \tag{6}$$
This same procedure can be implemented to extract the approximation coefficients.
The total coefficients energy can be expressed as
$$E_t = \sum_{i=1}^{e} E_j \tag{7}$$
where $e$ indexes the electrodes associated with the wavelet analysis, and the corresponding energy percentile at level $j$ is calculated as
$$\varepsilon_j = \frac{E_j}{E_t} \times 100 \tag{8}$$
The translation and dilation coefficients can be implemented directly as features in the classification problem.
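The per-channel energy features of equations (6) to (8) can be sketched as follows, under the same assumed wavelet settings; in the paper the level energies are further accumulated over the selected electrodes (equation (7)):

```python
import numpy as np
import pywt

def wavelet_energy_features(x, wavelet="db4", level=5):
    """Relative energy per detail level, following equations (6)-(8).

    Returns the percentile energies eps_j; the approximation band can be
    appended in the same way (see footnote x).
    """
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(d ** 2) for d in coeffs[1:]])  # E_j, equation (6)
    total = energies.sum()                                     # E_t, equation (7)
    return 100.0 * energies / total                            # eps_j, equation (8)
```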

Classes

Our proposal for establishing an appropriate model with a clear distance between emotional tags is to assign a distance parameter between the tags according to the Ekman, Russell and Scherer models: the Ekman model provides the basic tags that identify each class, while the Russell and Scherer models define emotional states in terms of arousal and valence levels.
This model can be observed in Figure 3, which groups all similar emotional states according to their arousal and valence levels (i.e. all emotions with high arousal and high valence correlate to states of happiness, states with low valence and high arousal correlate to anger, low arousal and low valence to sadness, and low arousal and high valence to relaxation states). A thresholding sketch of this class assignment follows the class list below.
Figure 3. Emotional states distribution model, based on arousal and valence levels and the discrete tags of the Ekman model.
Class 1: (HA-HV) high arousal, high valence.
Class 2: (HA-LV) high arousal, low valence.
Class 3: (LA-HV) low arousal, high valence.
Class 4: (LA-LV) low arousal, low valence.
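As announced above, a minimal sketch of this class assignment, assuming the DEAP 1-9 self-assessment scale and a midpoint threshold of 5 (the paper does not state the exact cut-off used to binarize arousal and valence):

```python
def assign_class(arousal, valence, midpoint=5.0):
    """Map a trial's arousal/valence ratings to one of the four classes."""
    high_a, high_v = arousal >= midpoint, valence >= midpoint
    if high_a and high_v:
        return 1  # HA-HV: happiness/elation
    if high_a:
        return 2  # HA-LV: anger/hostile
    if high_v:
        return 3  # LA-HV: relaxation/calm
    return 4      # LA-LV: boredom/sadness
```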
This model also defines the orientation of each evoked potential according to its characteristics, in order to define the elements of each class that are implemented as references in the identification model, as can be observed in Figures 4 and 5, where each element of the experimental process belongs to a particular class.
Figure 4. The location of the experimental cases according to the emotional stimulus responses obtained from the user ratings (the input elements of the classification process are created from this information). Arousal: includes features that define idle or alert states (i.e. disinterested, bored, alert, excited). Valence: ranges from unpleasant (i.e. sad, stressed) to pleasant (i.e. happy, euphoric). Dominance: ranges from feeling helpless and weak (without control) to feeling empowered (in control of everything).
Figure 5. Distribution of evoked potential trials in the arousal-valence space. As can be seen, there are ten experiments related to each class, and greater emphasis is given to three of them, according to their distance from the other cases, to ensure greater separability between classes given their labeling.

Identification process

Support vector machines and neural networks are the techniques that, according to the literature review, have shown the best recognition rates (Table 1); therefore, these two techniques were implemented in this work in order to corroborate the performance of our methodology.

Experimental setup and inputs

The experimental configuration presented in Figure 6 was designed to evaluate the identification performance of each of the combinations produced by the clustering procedure of Algorithm 1, which ensures a consistent distribution of experimental processes for each element of the classes by considering at least one experimental case associated with each class (i.e. each cross-validation evaluates distinct cases of the same class).xi
Figure 6. Class selection model to perform the identification performance task.
• Case 1: Single class training, in order to achieve identification of a particular state ( n ).
• Case 2: Two classes training, to carry out the identification of two emotional states ( n , n + 1 ).
• Case 3: Three classes training, multi-class identification for three emotional states ( n , n + 1 , n + 2 ).
• Case 4: All classes training for a multi-class identification ( n , n + 1 , n + 2 , n + 3 ).
Algorithm 1: Feature input arrangement algorithm.
Input: feature arrays d_{j,k}
Output: matrix arrangement E_k
Select the features that provide the input classes;
for i ← 1 to 3 do
    e ← electrodes array;
    for j ← 1 to 15 do
        C ← [e(b); d_{j,k}];
        'a' is the parameter that defines the number of emotional states to be analyzed;
        for k ← 1 to a do
            for l ← 1 to 30 do
                E_k ← [l; C];
E_k is the arrangement of features of the three case studies of each emotional state and all users.

Classification inputs

To ensure that each case study contains the information from more than one user at the same time and to corroborate the existence of a correlation between different study subjects, each case is also assessed by means of the following classification scheme:
a: Contains the information of a single user.
b: Contains information of two users.
c: Contains information of three users.
d: Contains information for all users.

Neural networks

Artificial neural networks (NN) are computational techniques that can be trained to find solutions, recognize patterns, classify data or forecast events by defining the way their individual computing elements are connected; the network automatically adjusts its configuration to solve specific problems according to a specified learning rule. Because EEG data are considered chaotic signals, many researchers have proposed NN as one of the most appropriate tools for this analysis, since NN have a remarkable ability to derive meaning from complicated or imprecise data. Besides, NN are inspired by the natural behavior of neurons, and EEG signals are produced by neurons.

Implementation

The implementation of a highly complex NN was our initial proposal for the pattern recognition task; however, as can be observed in Figure 7, the error rate is directly correlated with the network architecture (i.e. if the network architecture grows, so does the error), contrary to our initial assumption. Hence, a low complexity architecture is the best option for this particular case, as shown in Figure 8, where a 10-fold cross-validation was performed to evaluate 20 low complexity network configurations in order to determine the number of units per layer needed to obtain the most accurate and stable recognition rate.
Figure 7. An identification process performed by the same NN configuration, but incrementing by one the number of layers in each iteration up to 50.
Figure 8. Performance of low complexity NN configurations. Considering only NN with fewer than 20 hidden layers, the validation process selected the number of units per layer, with 20 as the maximum number of elements per layer.
Based on these considerations, the following NN was implemented to produce the experimental results presented (an illustrative implementation sketch follows the list):xii
resilient back-propagation algorithm;
11-layer topology: 2/3/4 units in the input layer, 18 units per hidden layer and 2/3/4 units in the output layer;
200 epochs maximum;
mean square error (MSE) goal = 0.002;
70% for training data;
15% to validate;
15% to test.
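As an illustrative sketch only: scikit-learn's MLPClassifier can approximate this configuration, although it offers no resilient back-propagation (the stated training options suggest a MATLAB Neural Network Toolbox implementation), so the default solver stands in here and the names below are assumptions:

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Nine hidden layers of 18 units, which together with the input and output
# layers gives the 11-layer topology; 200 epochs and an error goal of 0.002.
clf = MLPClassifier(hidden_layer_sizes=(18,) * 9, max_iter=200, tol=2e-3,
                    random_state=0)

# 70/15/15 split: hold out 30%, then halve the hold-out into validation/test.
# X, y = feature arrangement E_k and its class labels (see Algorithm 1).
# X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, test_size=0.30)
# X_val, X_te, y_val, y_te = train_test_split(X_hold, y_hold, test_size=0.50)
# clf.fit(X_tr, y_tr)
```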

Support vector machines

Support vector machines (SVM) provide separability in non-linear regions by implementing kernel functions, and they avoid local-minimum issues by implementing quadratic optimization; unlike NN, this technique is therefore closer to an optimization algorithm than to a greedy search algorithm. Also, when the classification problem does not present a simple separating criterion, several mathematical approaches can be applied to the SVM strategy in order to retain all the simplicity of hyperplane separation.

Implementation

Three transformation kernels were implemented to perform the SVM identification processes (a scikit-learn sketch follows the list):
Gaussian or radial basis: $G(x, y) = \exp\left(-\frac{(x - y)'(x - y)}{2\sigma^2}\right)$;
polynomial: $G(x, y) = (1 + x'y)^d$;
multi-layer perceptron: $G(x, y) = \tanh(p_1 x'y + p_2)$.
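These kernels can be mapped onto scikit-learn's SVC as sketched below, where the sigmoid kernel plays the role of the multi-layer perceptron kernel; the gamma, degree and coef0 values are placeholders, not the paper's settings:

```python
from sklearn.svm import SVC

svm_gaussian   = SVC(kernel="rbf", gamma="scale")              # radial basis
svm_polynomial = SVC(kernel="poly", degree=3, coef0=1.0)       # (1 + x'y)^d
svm_mlp        = SVC(kernel="sigmoid", gamma=0.01, coef0=1.0)  # tanh(p1 x'y + p2)
```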
The training algorithm implements an optimization method to identify the support vectors $s_i$, weights $\alpha_i$ and bias $b$ that are used to classify a vector $x$ according to equation (9), and the results of the training process are also considered in the optimization of the classification process. This condition is known as the Karush-Kuhn-Tucker (KKT) condition, which is analogous to requiring that the gradient be zero, or at least modified to account for the constraints, as
$$c = \sum_i c_i\, \alpha_i\, k(s_i, x) + b \tag{9}$$
where $k$ is the kernel function. For the linear case, if $c \geq 0$ then $x$ is considered a member of the first group, and otherwise a member of the second group. The problem is also restricted by the Lagrangian conditions, as
$$L(x, \lambda) = f(x) + \sum_i \lambda_{g,i}\, g_i(x) + \sum_i \lambda_{h,i}\, h_i(x) \tag{10}$$
where $f(x)$ is the objective function, $g(x)$ is the vector of constraint functions with $g(x) \leq 0$ and $h(x)$ is the vector of constraint functions with $h(x) = 0$. The $\lambda$ vector is the concatenation of the Lagrange multipliers $\lambda_g$ and $\lambda_h$, subject to the conditions $\nabla_x L(x, \lambda) = 0$, $\lambda_{g,i}\, g_i(x) = 0\ \forall i$, $g(x) \leq 0$, $h(x) = 0$ and $\lambda_{g,i} \geq 0$, which reduce the computational burden of the identification process. The support vectors are defined by an $n_{sv} \times p$ matrix, where $n_{sv}$ is the number of support vectors (at most the size of the training sample) and $p$ is the number of elements of the $\beta$ vector (the numeric vector of linear predictor coefficients), whose length equals the number of predictors used to train the model; the $\alpha$ values are vectors of $n_{sv}$ elements (which can be very large for data sets that contain many features).
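A minimal sketch of the decision rule of equation (9), with the kernel passed in as a callable; all names and shapes here are illustrative only:

```python
import numpy as np

def svm_decision(x, support_vectors, alphas, labels, bias, kernel):
    """Evaluate c = sum_i c_i * alpha_i * k(s_i, x) + b (equation (9)).

    support_vectors -- array (n_sv, p) of vectors s_i
    alphas, labels  -- length-n_sv weights alpha_i and group labels c_i (+1/-1)
    kernel          -- callable k(s, x), e.g. one of the kernels above
    """
    k_vals = np.array([kernel(s, x) for s in support_vectors])
    return float(np.dot(labels * alphas, k_vals) + bias)

# x is assigned to the first group when the result is >= 0, otherwise the second.
```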

Results

Separability

The first thing that may be noticed is that this methodology provides a significant reduction of the computational complexity: as can be observed in Figures 9 to 17, the degree of separability obtained by implementing it is easily visible, and some behavioral tendencies can be noticed even without a computational identification process.
Figure 9. Distribution comparison between classes 3 and 1. A separation between the coefficients associated with these classes can be easily observed, even before the classification and recognition processes (class 3: relaxation/calm, class 1: happiness/elation).
Figure 10. Distribution comparison between classes 3 and 2. A very clear separation between the coefficients associated with these classes can be easily observed, even before the classification and recognition processes (class 3: relaxation/calm, class 2: anger/hostile).
Figure 11. Distribution comparison between classes 3 and 4. A clear separation between the coefficients associated with these classes can be easily appreciated, even before the classification and recognition processes (class 3: relaxation/calm, class 4: boredom/sadness).
Figure 12. Distribution comparison between classes 1 and 4. For this case the separation between classes is not as clear as in the three previous cases; however, it can still be observed that the separation can be carried out with simple classification techniques (class 1: happiness/elation, class 4: boredom/sadness).
Figure 13. Distribution comparison between classes 1 and 2. For this case the separation between classes is not as clear as in the three previous cases; however, it can still be observed that the separation can be carried out with simple classification techniques (class 1: happiness/elation, class 2: anger/hostile).
Figure 14. Distribution comparison between classes 4 and 2. For this case the relationship between the classes is considered very close, which could indicate very similar behavior in both classes (class 4: boredom/sadness, class 2: anger/hostile).
Figure 15. Distribution comparison between classes 1, 2 and 3. This representation shows the level of complexity for the three-class identification process (class 1: happiness/elation, class 2: anger/hostile, class 3: relaxation/calm).
Figure 16. Distribution comparison between classes 1, 3 and 4. This representation shows the level of complexity for the three-class identification process (class 1: happiness/elation, class 3: relaxation/calm, class 4: boredom/sadness).
Figure 17. Distribution comparison between all classes. This representation shows the level of complexity for the four-class identification process (class 1: happiness/elation, class 2:anger/hostile, class 3: relaxation/calm, class 4: boredom/sadness).

Implementation of class three as reference

Figures 9, 10 and 11 show the feature distribution models that implement class 3 as the reference. A very weak correlation of classes 2 and 4 with class 3 can be observed, while a slightly higher correlation can be observed between classes 1 and 3 (i.e. the relaxation state is more related to happiness than to anger or sadness).

Implementation of class one as reference

Figures 12 and 13 show the feature distribution models that implement class 1 as reference. It can be observed that the correlation between classes 1 and 4 could be considered weak, while the correlation between classes 1 and 2 could be considered strong (i.e. the happiness state is more related to anger than to sadness).

Implementation of class two as reference

Figure 14 shows a feature distribution model that provides a comparison of classes 2 and 4; the relationship between these emotional states is considered very close.

Implementation of a three-class comparison

In Figures 15 and 16 it can be seen that, even though there is clear overlap between classes, they are mostly distinguishable to the naked eye, except for those combinations labeled as closely related, such as anger and sadness.

Implementation of a four-class comparison

Figure 17 provides an overview of the four-emotion recognition process implemented in this work; it can be observed that these emotions are closely related, which represents a considerable increase in the complexity of the identification.

NN performance

The implemented NN obtains up to a 98% mean recognition rate for the trivial case (recognizing a single emotional state), while for the binary and multi-class schemes (two, three and four emotions) the mean identification rates were up to 90.2%, 84.2% and 80.9%, respectively.
The performance of the NN for the two-class recognition process can be observed in Figures 18 and 19, which show the recognition rate by class and a graph of ten validations demonstrating stable performance.
Figure 18. The 10-fold cross-validation performance of the NN for the two-class identification process.
Figure 19. Performance of the NN for the two-class identification process.
Figures 20 and 21 present the identification performance of the three-class recognition scheme, which obtains up to an 84.2% mean identification rate, together with the 10-fold cross-validation process used to evaluate the stability of this identification scheme.
Figure 20. The 10-fold cross-validation performance of the NN for the three-class identification process.
Figure 21. Performance of the NN for the three-class identification process.
Figures 22 and 23 present the four-class recognition scheme, which obtains up to an 80.9% mean identification rate, together with the 10-fold cross-validation process used to evaluate the stability of this identification scheme.
Figure 22. The 10-fold cross-validation performance of the NN for the four-class identification process.
Figure 23. Performance of the NN for the four-class identification process.

SVM performance

The results obtained by implementing the support vector methodology and the proposed signal conditioning strategy are shown in Tables 4, 5 and 6.
Table 4. SVM average identification rates for the Gaussian experimental cases.
  Gaussian
  Identification Rates (%)
  Case 1 Case 2 Case 3 Case 4
a 89.59 83.61 79.20 82.61
b 88.78 83.22 81.19 80.56
c 86.61 81.22 78.53 81.23
d 84.03 84.85 83.88 83.38
Table 5. SVM average identification rates for the polynomial experimental cases.
  Polynomial
  Identification Rates (%)
  Case 1 Case 2 Case 3 Case 4
a 90.19 82.34 78.76 82.67
b 89.45 83.76 81.56 81.23
c 89.12 82.21 79.50 82.47
d 85.47 84.73 84.49 85.15
Table 6. SVM average identification rates for the multi-layer experimental cases.
  Multi-layer
  Identification Rates (%)
  Case 1 Case 2 Case 3 Case 4
a 89.89 83.43 78.16 81.11
b 89.85 82.98 83.12 81.64
c 87.68 82.47 77.93 80.51
d 85.48 86.64 83.13 83.18

SVM identification performance

As can be observed in Table 7, the performance obtained by implementing this identification technique with the proposed signal conditioning strategy remains in a competitive range when compared with the rates reported in the literature. Some authors obtain slightly higher identification rates,9,13 but their implementations do not consider multi-class problems, while works that do consider multi-class schemes obtain significantly lower identification rates than those presented in this paper.4,5
Table 7. Performance comparison of the implemented classification techniques.
Overall Performance (%)
Cases Case 1 Case 2 Multi-class
SVM Gaussian 87.25 83.22 81.32
SVM Polynomial 88.55 83.26 81.97
SVM Multi-Layer 88.22 83.88 81.11
NN 90.2 84.2 80.9

Conclusions

We present a strategy for carrying out an emotion identification process by analyzing the electroencephalographic activity of users while they experience one or more emotional stimuli. The strategy reduces the number of electrodes that must be analyzed, which translates to an a priori removal of large amounts of information while retaining competitive recognition rates. In addition, some considerations that reduce the computational burden of the recognition and identification process are presented.
Most of the comparative representations presented in this work contain information sets of up to 16 participants, and in all of them the information appears to be grouped into classes, which suggests the existence of relationships between the signal behaviors of emotional states.
Another important aspect is that even though the amount of initial information is reduced, we did not incur any significant penalty in recognition rates, and our identification rates are comparable to those presented by some of the most important researchers in the field, such as Dr Murugappan. However, the lack of a standard methodology for performing an emotion recognition task was one of the main problems we faced in the development of this research, since most works implement distinct approaches and even distinct data sources, resulting in a very wide range of experimental procedures and results and making comparisons between them unfeasible. The proposals presented by Dr Murugappan and Dr Scherer underpin the efforts of the community by supporting the importance of affective computing and its impact on technological developments. The development of affective computing techniques is also becoming more viable with the emergence of a wide variety of portable devices that facilitate information gathering, as well as digital processing techniques.
Because we work with a group of specialists focused on the physical rehabilitation of high performance athletes, the implementation of this algorithm in a real-time platform, combined with a micro-expression recognition technique, is contemplated as future work to serve as a support tool for monitoring patient progress. This is important because experts suggest that some athletes endanger their physical integrity, whether through competitive desire or frustration. Moreover, almost any modern device has the capability to collect and process large amounts of information, so the possibility of developing systems that recognize the emotional states of a user is of growing interest.

Acknowledgments

I would like to thank CONACYT for making this project possible and the Technological Institute of Tijuana for providing necessary technical support for the implementation of this project.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Footnotes

i. From the Latin emovere, meaning ‘moving’, ‘displacing’.
ii. From the Latin affectus, meaning ‘to afflict’, ‘to shake’, ‘to touch’.
iii. From the Latin animus, meaning ‘basic attitude’ or ‘governing spirit’.
iv. Information processing (CNS), Support (CNS, NES, ANS), Executive (CNS), Action (SNS), Monitor (CNS).
v. (CNS), central nervous system; (NES), neuro-endocrine system; (ANS), autonomic nervous system; (SNS), somatic nervous system (the organismic subsystems are theoretically postulated functional units or networks).
vi. The motor area is considered as a reference for the filtering process, since this model considers the noise that is produced by the motor activity as eye movement or muscle activity.
vii. Since the records implemented in the experimental process were obtained from the DEAP dataset, it is expected that the emotional processes were heavily related to visual and auditory activity; however, we also consider the limbic area as a region of interest, because most of the literature refers to it as the emotion center of the brain, and it is conveniently located in the temporal and parietal cortex areas.
viii. The same methodology could be applied to exclude the 29.5% of the electrodes of the 10/20 classical system and achieve a data reduction of 3,072 samples per second.
ix. It is important to note that both filters are interrelated, since the outputs of the transformation process deliver the detail (high-pass filter) and approximation (low-pass filter) coefficients that are implemented in the classification problem.
x. The same procedure was performed to acquire the approximation.
xi. The neural network inputs sorted the number of classes by manipulating the a parameter.
xii. This architecture was selected based on the experimental observations, each of which was performed under a 10-fold cross-validation process.

References

1. Picard RW. Affective computing. 1st ed. Cambridge, MA: MIT Press, 2000.
2. James W. What is an emotion? Mind 1884; 9(34): 188–205.
3. Murugappan M, Rizon M, Nagarajan R, et al. Lifting scheme for human emotion recognition using EEG. In: International symposium on information technology, Kuala Lumpur, Malaysia, 26–28 August 2008, Vol. 2, pp.1–7. IEEE.
4. Murugappan M. Human emotion classification using wavelet transform and NN. In: International conference on pattern analysis and intelligent robotics (ICPAIR), Putrajaya, 28–29 June 2011, Vol. 1, pp.148–153. IEEE.
5. Koelstra S, Mühl C, Soleymani M, et al. DEAP: A database for emotion analysis using physiological signals. IEEE T Affective Comput 2012; 3: 18–31.
6. Lin YP, Wang CH, Wu TL, et al. Multilayer perceptron for EEG signal classification during listening to emotional music. In: TENCON 2007 – 2007 IEEE region 10 conference, Taipei, 30 October–2 November 2007, pp.1–3. IEEE.
7. Heraz A, Razaki R, Frasson C. Using machine learning to predict learner emotional state from brainwaves. In: Seventh IEEE international conference on advanced learning technologies, Niigata, 18–20 July 2007, pp.853–857. IEEE.
8. Takahashi K. Remarks on SVM-based emotion recognition from multi-modal bio-potential signals. In: 13th IEEE international workshop on robot and human interactive communication, 20–22 September 2004, pp.95–100. IEEE.
9. Li M, Lu BL. Emotion classification based on gamma-band EEG. In: Annual international conference of the IEEE engineering in medicine and biology society, Minneapolis, MN, 3–6 September 2009, pp.1223–1226. IEEE.
10. Rozgic V, Ananthakrishnan S, Saleem S, et al. Ensemble of SVM trees for multimodal emotion recognition. In: Asia-Pacific signal information processing association annual summit and conference, Hollywood, CA, 3–6 December 2012, pp.1–4. IEEE.
11. Attabi Y, Dumouchel P. Emotion recognition from speech: WOC-NN and class-interaction. In: 11th international conference on information science, signal processing and their applications, Montreal, QC, 2–5 July 2012, pp.126–131. IEEE.
12. Razak A, Komiya R, Izani M, et al. Comparison between fuzzy and NN method for speech emotion recognition. In: Third international conference on information technology and applications, Montreal, QC, 4–7 July 2005, Vol. 1. pp.297–302. IEEE.
13. Gunler M, Tora H. Emotion classification using hidden layer outputs. In: International symposium on innovations in intelligent systems and applications, Trabzon, 2–4 July 2012, pp.1–4. IEEE.
14. Claparede E. Feelings and emotions. In: Feelings and emotions: The Wittenberg symposium, 1928, pp.124–139.
15. Cannon WB. The James–Lange theory of emotion: A critical examination and an alternative theory. Am J Psychol 1927; 39: 106–124.
16. Asada M. Development of artificial empathy. Neuroscience Res 2015; 90: 41–50.
17. Calvo RA, D’Mello S, Gratch J, et al. The Oxford Handbook of affective computing. 1st ed. New York, NY: Oxford Library of Psychology, 2014.
18. do Amaral JR, de Oliveira JM. Limbic system: The center of emotions, http://www.healing-arts.org/n-r-limbic.htm (accessed January 2016).
19. Scherer KR. What are emotions? And how can they be measured? Soc Sci Inform 2005; 44: 695–729.
20. Darwin C. The expression of the emotions in man and animals. 3rd ed. New York, NY: Oxford University Press, 1998.
21. Black J. Darwin in the world of emotions. J Roy Soc Med 2002; 95: 311–313.
22. Russell JA. A circumplex model of affect. J Pers Soc Psychol 1980; 39: 1161–1178.
23. Lehmann A, Bahçesular K, Brockmann EM, et al. Subjective experience of emotions and emotional empathy in paranoid schizophrenia. Psych Res 2014; 220: 825–833.
24. Roseman IJ. Phenomenology, behaviors, and goals differentiate discrete emotions. J Pers Soc Psychol 1994; 67: 206–221.
25. Roseman IJ. Appraisal processes in emotion: Theory, methods, research. 7th ed. Oxford: Oxford University Press, 2001.
26. Ekman P, Friesen WV, O’Sullivan M, et al. Universals and cultural differences in the judgments of facial expressions of emotion. J Pers Soc Psychol 1987; 53: 712–717.
27. Ekman P, Friesen WV. Constants across cultures in the face and emotions. J Pers Soc Psychol 1971; 17: 124–129.
28. Clore GL, Ortony A. Psychological construction in the OCC model of emotion. Emotion Rev 2013; 5: 335–343.
29. Ortony A, Clore GL, Collins A. The cognitive structure of emotions. Cambridge: Cambridge University Press, 1988.
30. Lang PJ, Bradley MM, Cuthbert BN. International affective picture system (IAPS): Technical manual and affective ratings. Gainesville, FL: NIMH Center for the Study of Emotion and Attention, University of Florida, http://csea.phhp.ufl.edu/Media.html#midmedia
31. Lang PJ, Bradley MM, Cuthbert BN. International affective digitized sounds (IADS): Stimuli, instruction manual and affective ratings. Gainesville, FL: The Center for Research in Psychophysiology, University of Florida, http://csea.phhp.ufl.edu/Media.html#midmedia
32. Yin L, Wei X, Sun Y, et al. A 3D facial expression database for facial behavior research. In: 7th international conference on automatic face and gesture recognition, Southampton, 2–6 April 2006, pp.211–216. IEEE.
33. Moody G, Mark R, Goldberger A. Physionet: A web-based resource for the study of physiologic signals. IEEE Eng Med Biol Mag 2001; 20: 70–75.
34. Petridis S, Martinez B, Pantic M. The {MAHNOB} laughter database. Image Vision Comput 2013; 31: 186–202.
35. Garey LJ. Brodmann’s localisation in the cerebral cortex. 1st ed. New York, NY: Springer Science+Business Media, 2006.
36. Jacobs KM. Brodmann’s areas of the cortex. In: Encyclopedia of clinical neuropsychology. New York, NY: Springer, 2011, p.459.
37. Sanchez-Panchuelo RM, Besle J, Beckett A, et al. Within-digit functional parcellation of Brodmann areas of the human primary somatosensory cortex using functional magnetic resonance imaging at 7 tesla. J Neurosci 2012; 32: 15815–15822.
38. Isaacson RL. Limbic system. New York, NY: John Wiley and Sons, 2001.
39. LeDoux JE. Emotion circuits in the brain. Annu Rev Neurosci 2000; 23: 155–184.
40. MacLean PD. The limbic system (‘visceral brain’) and emotional behavior. AMA Arch Neurol Psychiatry 1955; 73: 130–134.
41. Sherwood L. Human physiology: From cells to systems. 8th ed. Boston, MA: Brooks/Cole, Cengage Learning, 2013.
42. Fox SI. Human physiology. 7th ed. New York, NY: McGraw-Hill, 2002.
43. Vidakovic B, Mueller P. Wavelets for kids: A tutorial introduction. Technical report, Duke University, 1991.

Published In

Article first published online: September 18, 2016
Issue published: June 2018

Keywords

  1. EEG
  2. affective computing
  3. emotions
  4. neural networks
  5. support vector machines
  6. Brodmann regions
  7. arousal
  8. valence

Rights and permissions

© The Author(s) 2016.
PubMed: 27644256

Authors

Affiliations

Adrian Rodriguez Aguiñaga
Instituto Tecnológico de Tijuana, Mexico
Miguel Angel Lopez Ramirez
Instituto Tecnológico de Tijuana, Mexico

Notes

Adrian Rodriguez Aguiñaga, Instituto Tecnológico de Tijuana, Blvd. Industrial s/n, Mesa de Otay, 22430 Tijuana, B.C., Mexico. Email: [email protected]
