Text Emotion Detection Using Machine Learning And NLP

DOI : 10.17577/IJERTCONV11IS03018


Mrs. Sincija, Assistant Professor, Department of CSE
Devika S Das, Rwethuvarnna S, Suhail C, Prasanna Kumar, 4th year, Department of Computer Science Engineering
Dhanalakshmi Srinivasan College of Engineering

  1. INTRODUCTION

    In recent years, enormous amounts of textual data have been created in conversational form, such as chats on Messenger and WhatsApp, tweets on Twitter, and hotel reviews [1]. Analyzing such data requires natural language processing (NLP, a subfield of machine learning) techniques that extract meaningful information from text. The task becomes complex when a statement is ambiguous and facial expressions would normally be needed to fully understand the meaning of the sentence. Our system therefore proposes a Long Short-Term Memory (LSTM) based approach. NLP techniques are needed in many fields, for example in movie recommendation systems, where the plots of different movies must be analyzed for similarity before a recommendation can be made [2]. Our motivation for working on this problem is the rapidly growing interest in sentiment analysis and the ubiquitous use of computational systems; emotion-aware analysis also improves human-computer interaction. We know from biological psychology, the branch of psychology that explores the relationship between behavior and the body, particularly the brain, that rationality without emotion is insufficient to make the decisions that govern, and ultimately make up, our lives [1]. This offers inspiration and a fresh viewpoint on integrating emotional intelligence into decision-making. Syntax-based graph convolutional networks have been used for emotion detection, with pooling applied to improve the accuracy of results [2].

    A pooling-based technique is used to reduce the number of parameters, which in turn reduces computation time.

    Figure 1.1 A simple 3-turn conversation

    Figure 1.1 shows a 3-turn conversation between two people in which the first person sends the message "I'm not feeling well", the second person replies "Why?", and the first person then replies "Something is seriously wrong with my life". This exchange suggests that the first person's emotion is either sad or worried.

    1. CRUCIAL CONTRIBUTION

      We used hyper-parameter tuning (which improves the speed and quality of the learning process) for model training, selecting the best parameters, activation function, and optimizer on the basis of the results obtained. We applied the state-of-the-art bidirectional Long Short-Term Memory (Bi-LSTM) technique for model training, and we improved the preprocessing steps, which contribute substantially to the accuracy of the model.

    2. Experimentation & Data-Set

    The dataset used for experimentation contains three-turn conversations used for training, and one of four labels is assigned to the text retrieved from each conversation. The three turns are: a message from the first user to the second user, the second user's reply to the first user, and the first user's reply back to the second user. We analyze these three turns, extract the emotion, and label each conversation as happy, sad, angry, or others.

  2. LITERATURE REVIEW

    In this section, we briefly review the basics of emotion detection and the most recent work done in this field.

    1. Emotion understanding

      Human beings express emotion in different ways, such as writing, speech, gestures, and body language, so we need an appropriate model to categorize these emotions. For example, personal blogs or tweets can be evaluated to detect the basic purpose of the conversation or the hidden emotion of the person in text or speech [3]. The first theory of emotion detection was proposed in 1872 [4], arguing that humans and animals express different emotions in different ways. Later, in 1984, emotion was described as a brain mechanism, and evolutionary theory holds that emotions vary with the passage of time [3]. To detect emotions such as happy, sad, and angry, a novel deep learning approach combined semantic and sentiment techniques on a real-world conversation dataset to obtain better emotion detection results [5][6][7]. In the domains of text, speech, and images, deep neural network approaches are considered able to accomplish the task successfully; for sequential data, LSTM and related neural network algorithms are particularly effective, as stated and used by [8][9]. Recent research has achieved state-of-the-art results in sentiment and emotion detection using deep learning and LSTM models [10][11][12][13]. Many supervised and unsupervised machine learning classifiers have been reported to give effective results on emotion detection and sentiment analysis [14][15], with the most recent being deep learning models [16]. A memory network method has also been used for emotion detection, in which interaction between users is enabled through two different memory networks.

    2. Model for Emotion Detection

      A model is a structured way to describe an emotion extracted from text. To categorize emotions, we first need to define the emotion types; in this work we define four types: sad, happy, angry, and other. The most recent work on emotion detection was done by [21], in which the researchers used the idea of syntactic rules and semantic knowledge. The author of [17] used voice tone and the frequency of the audio as input to analyze emotions. In the work of [25], the authors used expressions and gestures from videos to extract emotions. Since our goal was to obtain emotions from text, we concentrated on models that perform well at understanding text.

    3. Method for emotion detection from text

      On social media, emotions are mostly expressed in text by different people in different ways. The following methods have been introduced to analyze these emotions from text data [19][21]. A recurrent neural network has been described for emotion classification that tracks individual conversational state [18], and a distributional semantic model has been introduced for emotion classification [24].

    4. Keyword based Method

      Keyword extraction is the most basic and straightforward approach. Using part-of-speech tagging, keywords are extracted and matched against words identified as emotional words. If a word matches an emotion word, different criteria are used to assign the proper emotion.
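      As a rough illustration of this keyword-based method, the sketch below uses NLTK part-of-speech tagging together with a tiny hypothetical emotion keyword dictionary; the dictionary, function name, and matching rule are illustrative assumptions, not part of the original work.

```python
# Minimal sketch of the keyword-based method, assuming a small
# hand-made emotion keyword dictionary (hypothetical, for illustration).
import nltk

# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")  # first run only

EMOTION_KEYWORDS = {          # hypothetical lexicon
    "happy": "happy", "glad": "happy", "great": "happy",
    "sad": "sad", "upset": "sad", "cry": "sad",
    "angry": "angry", "furious": "angry", "hate": "angry",
}

def keyword_emotion(sentence):
    tokens = nltk.word_tokenize(sentence.lower())
    tagged = nltk.pos_tag(tokens)
    # keep adjectives, verbs, and nouns as candidate emotion keywords
    candidates = [w for w, tag in tagged if tag[0] in ("J", "V", "N")]
    for word in candidates:
        if word in EMOTION_KEYWORDS:
            return EMOTION_KEYWORDS[word]
    return "others"

print(keyword_emotion("I am so upset about my life"))  # -> sad
```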

    5. Lexicon Based Method

      In the lexicon-based method, each extracted keyword is labeled with its associated emotion. It is largely similar to the keyword extraction method; the main difference is that an emotion label is assigned from a lexicon. The National Research Council of Canada (NRC) lexicon and EmoSenticNet (ESN) are among the commonly used emotion and sentiment lexicons [23]. Deep learning, self-attention, and turn-based conversation models have also been used to evaluate emotions and show better results compared to lexicon-based approaches [20].
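      A minimal sketch of such a lexicon lookup is shown below; the miniature lexicon is a hypothetical stand-in for a full resource such as the NRC emotion lexicon.

```python
# Minimal sketch of a lexicon-based labeler. The lexicon below is a tiny
# hypothetical stand-in for a resource such as the NRC emotion lexicon.
from collections import Counter

LEXICON = {                      # hypothetical entries; real lexicons have thousands
    "joy": "happy", "love": "happy", "wonderful": "happy",
    "lonely": "sad", "miss": "sad", "tears": "sad",
    "annoyed": "angry", "rage": "angry", "stupid": "angry",
}

def lexicon_emotion(text):
    hits = Counter(LEXICON[w] for w in text.lower().split() if w in LEXICON)
    # return the emotion with the most lexicon hits, or "others" if none matched
    return hits.most_common(1)[0][0] if hits else "others"

print(lexicon_emotion("I miss you and feel lonely tonight"))  # -> sad
```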

    6. Machine Learning

      For emotion detection, both supervised and unsupervised machine learning approaches can be used. In supervised learning we have a fully labeled dataset, whereas in unsupervised learning the dataset is unlabeled. One third of the dataset was used for training and the remainder for testing. For the identification of emotions, most methods extract features such as repeated n-grams, negation, punctuation, emoticons, and hashtags to construct a representation of the text, which is then used as input by classifiers such as Decision Trees, Naive Bayes, and Support Vector Machines to predict the emotion [21][22].
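      The sketch below illustrates this classical route with scikit-learn, using word n-gram features and a Naive Bayes classifier; the toy training texts and labels are hypothetical and only show the shape of the pipeline.

```python
# Minimal sketch of the classical ML route: n-gram features plus a
# Naive Bayes classifier (scikit-learn). The tiny training set is
# hypothetical and only illustrates the pipeline structure.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["i am so happy today", "this is terrible i am crying",
         "why are you shouting at me", "see you at the meeting"]
labels = ["happy", "sad", "angry", "others"]

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),   # unigram + bigram features
    MultinomialNB(),
)
clf.fit(texts, labels)
print(clf.predict(["i am crying all day"]))   # -> ['sad'] (toy example)
```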

    7. Hybrid Method

      In the hybrid approach, the three methods mentioned above are combined to achieve better accuracy. Recent work from 2018 was done on the same dataset we are working on. The researchers proposed an LSTM-based approach and argued that emotion extraction from text is challenging in the absence of facial expressions and voice. They extracted emotion using both sentiment and semantic representations of the text, framing it as a problem in which each instance belongs to exactly one of the classes happy, sad, angry, or other. The input was provided on two different layers, one with sentiment-based and the other with semantic-based embeddings; the two layers were concatenated and fed into a fully connected network. For the semantic representation, the authors chose GloVe, and for sentiment they used sentiment-specific word embeddings [17].

      The authors of [25] state that emotion extraction plays an important role in identifying a person's attitude towards an organization or government policies. A correct word representation is essential for any type of emotion classification. The authors of [25] used the GloVe model for word representation: each word from the tweets was checked for presence in the model and, if present, its vector was used. The tweet words were added to the dataset to create a vector matrix, and the data were then reduced to a single vector by two methods, SUM and SVD. For classification, SVM and Random Forest were used [22].
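      As a hedged illustration of that SUM-reduction pipeline, the sketch below sums word vectors into a single tweet vector and classifies it with an SVM; the embedding dictionary and example texts are hypothetical stand-ins for real GloVe vectors and a real labeled tweet set, not the authors' data.

```python
# Minimal sketch of the SUM-reduction + SVM route described in [25].
# The embedding dictionary and training texts are hypothetical stand-ins.
import numpy as np
from sklearn.svm import SVC

EMB = {                                   # hypothetical 3-d "GloVe" vectors
    "happy": np.array([0.9, 0.1, 0.0]), "sad": np.array([0.0, 0.9, 0.1]),
    "angry": np.array([0.1, 0.0, 0.9]), "today": np.array([0.2, 0.2, 0.2]),
}

def sum_vector(text):
    # SUM reduction: add the vectors of all in-vocabulary words
    vecs = [EMB[w] for w in text.lower().split() if w in EMB]
    return np.sum(vecs, axis=0) if vecs else np.zeros(3)

X = np.stack([sum_vector(t) for t in ["so happy today", "sad today", "angry today"]])
y = ["happy", "sad", "angry"]

clf = SVC().fit(X, y)
print(clf.predict([sum_vector("i feel sad")]))   # -> ['sad'] (toy example)
```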

    8. Approach Comparison

    Table 2.1 lists different approaches used to detect emotions such as happy, sad, angry, joy, fear, disgust, and unhappy. As can be seen in the table, the Emotex approach, which classifies only happy and unhappy, obtains the highest F1 score (60.4) even though it does not use any embedding. The LSTM approach uses SSWE embeddings and has an F1 score of 49.1, while the remaining approaches use no embeddings and achieve scores close to one another. Table 2.1 thus provides a comparison of the different approaches.

    Table 2.1 Comparison table of different approaches

    Sr | Approach | Embedding | Emotion type               | F1-score
    1  | LSTM     | SSWE      | Happy, Sad, Angry          | 49.1
    2  | LDA/SVM  | No        | Happy, Sad, Angry, Fear    | 50.3
    3  | EmoNets  | No        | Happy, Angry, Sad, Disgust | 47.67
    4  | SVM      | No        | Joy, Anger, Fear, Sad      | 49.54
    5  | Emotex   | No        | Happy, Unhappy             | 60.4

  3. PROPOSED APPROACH

    Our proposed approach uses a bidirectional LSTM together with word embeddings and an annotated corpus. The first step is preprocessing of the input data, in which we remove extra spaces and invalid characters, resolve character encoding issues, and correct spelling. After preprocessing, interpretative labels are added to the annotated corpus and word embedding techniques are applied to the input data; the dataset is mapped through a vocabulary to obtain numeric representations of the words. An LSTM is then trained on the dataset to detect the emotions happy, sad, and angry. The proposed model is shown in Figure 2.1.

    Figure 2.1 Proposed model of the Aimens system

    1. INPUT AND PREPROCESSING

      The training dataset was tokenized by splitting the sentences into words and converting all uppercase letters to lowercase. We also stem the words using the NLTK stemmer and remove extraneous whitespace. Multiple instances of the signs ".", "?", "!", and "," are reduced to a single instance (e.g. "okay,,, sure" becomes "okay, sure"), while contractions (was, wasn't, aren't) were left untreated.
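      A minimal sketch of this preprocessing step is given below, assuming NLTK is available; the paper does not name the exact stemmer, so the Porter stemmer stands in here.

```python
# Minimal sketch of the preprocessing step: lowercasing, collapsing repeated
# punctuation, whitespace cleanup, tokenization, and stemming with NLTK.
import re
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize   # requires the "punkt" resource

stemmer = PorterStemmer()

def preprocess(text):
    text = text.lower()
    text = re.sub(r"([.?!,])\1+", r"\1", text)   # collapse repeated .?!, to one
    text = re.sub(r"\s+", " ", text).strip()     # remove extraneous whitespace
    tokens = word_tokenize(text)
    return [stemmer.stem(tok) for tok in tokens]

print(preprocess("Okay,,, SURE!!!  I am   not feeling well"))
# -> ['okay', ',', 'sure', '!', 'i', 'am', 'not', 'feel', 'well']
```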

    2. SPELL CORRECTION

      The data in the dataset is in raw format and contains many spelling errors, which we also resolve. For this, we use TextBlob, a library for various NLP tasks. Spell correction adds considerable processing time but improves performance somewhat.
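      A minimal sketch of the TextBlob-based correction, with a made-up misspelled sentence as input:

```python
# Minimal sketch of spell correction with TextBlob. Correction is slow on
# large datasets, which matches the trade-off noted above.
from textblob import TextBlob

def correct_spelling(text):
    return str(TextBlob(text).correct())

print(correct_spelling("somthing is seriusly wrong with my lyfe"))
# e.g. -> "something is seriously wrong with my life" (output may vary)
```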

    3. WORD EMBEDDING

      We feed user utterances to a neural network and therefore need a word embedding for each word in the input sequence. For this, we tried the Word2Vec, GloVe, and FastText models to construct the word embeddings. We trained a simple LSTM model with each of these embeddings and validated it using 10-fold cross validation to assess their effectiveness. Finally, we used the glove.twitter.27B.200d embedding model because it performed better than the other models.
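      A minimal sketch of loading the pre-trained GloVe vectors into an embedding matrix is given below; the file path, vocabulary, and index mapping are assumptions for illustration, and the glove.twitter.27B.200d.txt file must be downloaded separately.

```python
# Minimal sketch of building an embedding matrix from pre-trained GloVe
# vectors for use in a Keras embedding layer.
import numpy as np

EMB_DIM = 200
word_index = {"not": 1, "feeling": 2, "well": 3}      # hypothetical vocabulary

embeddings = {}
with open("glove.twitter.27B.200d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        embeddings[parts[0]] = np.asarray(parts[1:], dtype="float32")

# rows of the matrix line up with the vocabulary indices; unknown words stay zero
embedding_matrix = np.zeros((len(word_index) + 1, EMB_DIM))
for word, idx in word_index.items():
    if word in embeddings:
        embedding_matrix[idx] = embeddings[word]
```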

    4. Model Training

    For model training, we first split our data in a ratio of 9:1 into two sets: a training set and a validation set. We train our model on the training set and use the validation set to tune the hyper-parameters, employing grid search (GridSearchCV) for hyper-parameter tuning. We chose grid search because it can include more parameters in the analysis without affecting the overall performance [28]. We tried different optimizers, such as adaptive moment estimation (which adapts the learning rate of each individual parameter), Stochastic Gradient Descent (for faster iterations), Adamax, and Nadam, and different activation functions, such as hard sigmoid, ELU, SELU, softplus, softsign, ReLU, tanh, and softmax. After hyper-parameter tuning, we decided to use a softmax output with categorical cross-entropy as the loss function and Adamax as the optimizer. The tuning yielded an optimal learning rate of 0.003, with 200 as the other selected value. The dataset ultimately comprised 2226 three-turn conversations with their emotion class labels (Sad, Angry, Happy).
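    A minimal sketch of a Bi-LSTM classifier along these lines is shown below, using the tf.keras API; the layer sizes, sequence length, and placeholder embedding matrix are assumptions rather than the exact configuration used in our experiments.

```python
# Minimal sketch of a Bi-LSTM emotion classifier with a softmax output,
# categorical cross-entropy loss, and the Adamax optimizer.
import numpy as np
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense
from tensorflow.keras.optimizers import Adamax

MAX_LEN = 50                                  # assumed maximum sequence length
NUM_CLASSES = 4                               # happy, sad, angry, others
embedding_matrix = np.zeros((20000, 200))     # placeholder; see the GloVe sketch above

model = Sequential([
    Input(shape=(MAX_LEN,)),
    Embedding(input_dim=embedding_matrix.shape[0], output_dim=200,
              weights=[embedding_matrix], trainable=False),
    Bidirectional(LSTM(128)),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(loss="categorical_crossentropy",
              optimizer=Adamax(learning_rate=0.003),
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.1, epochs=..., batch_size=...)
```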

  4. EVALUATION AND EXPERIMENTS

    In this section we briefly describe the evaluation dataset and the experimental results.

    1. DATASET DESCRIPTION

      The dataset we used is provided by SemEval-2019. It consists of 15k tweet conversational pairs, i.e. a conversation and its response. To gather the class labels, expert opinions from judges were used: the third sentence of the conversation was given to the judges along with the context of the previous two sentences. Each conversation was shown to 5 judges to obtain high-quality judgments, and the final emotion class was decided by majority vote. The evaluation dataset is unseen at training time.
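      A minimal sketch of the majority-vote labeling over the five judge annotations (the votes shown are hypothetical):

```python
# Minimal sketch of majority-vote labeling over five judge annotations.
from collections import Counter

def majority_label(votes):
    # most_common(1) returns the label with the highest vote count
    return Counter(votes).most_common(1)[0][0]

print(majority_label(["sad", "sad", "others", "sad", "angry"]))  # -> sad
```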

      Type       | Happy | Sad  | Angry | Others | Total
      Count      | 109   | 107  | 90    | 1920   | 2226
      Percentage | 4.90  | 4.81 | 4.04  | 86.25  | 100

      Table 4.1 Validation set distribution

      Detailed dataset statistics are shown in Table 4.1, which gives the total count and percentage of each emotion. The dataset consists of four kinds of emotion: happy, sad, angry, and others.

    2. SHAPE OF DATASET

    The training dataset has five columns, shown in Table 4.2, which describe the turn-wise conversation between the users and the emotion label values.

    A. COMPARATIVE ANALYSIS

    A comparative analysis of our approach with other approaches is given below in the form of graphs and is shown in Table 5.1.

    Table 5.1 Comparison of results obtained from different embeddings using a BiLSTM network

    Approach | Happy F1 | Sad F1 | Angry F1 | Avg. F1
    Word2Vec | 64.44    | 74.71  | 59.28    | 66.14
    FastText | 64.58    | 76.68  | 59.98    | 67.08
    Glove    | 66.11    | 78.99  | 63.79    | 69.63

    Column | Description
    ID     | A unique number identifying each training sample
    Turn 1 | The first turn, written by User 1
    Turn 2 | The second turn; a reply to the first turn, written by User 2
    Turn 3 | The third turn; a reply to the second turn, written by User 1
    Label  | The human-judged emotion label of Turn 3 based on the conversation in the training sample; always one of the four values happy, sad, angry, or others

    Table 4.2 Shape of dataset

    The shape of the dataset is explained in Table 4.2: every tweet has a unique ID that identifies the training sample. User 1 types a message, which becomes Turn 1; User 2 replies to User 1, which becomes Turn 2; and User 1 then replies to User 2, which becomes Turn 3. After this 3-turn conversation the emotion is judged and labeled with one of the values happy, sad, angry, or others.

    1. EVALUATION METRICS

      The averaged F1 score is calculated from the per-class precision and recall given in equations (i) and (ii) for the three classes Happy, Angry, and Sad:

      P_i = TP_i / (TP_i + FP_i)        (i)

      R_i = TP_i / (TP_i + FN_i)        (ii)

      Here TP_i is the number of true positives (samples correctly predicted for class i), while FP_i (false positives) and FN_i (false negatives) are the type-1 and type-2 errors for that class. The final F1 value for each class is the harmonic mean of P_i and R_i, i.e. F1_i = 2 * P_i * R_i / (P_i + R_i), and the reported score averages over the three emotion classes.
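      A minimal sketch of computing these scores with scikit-learn on hypothetical predictions, restricting the micro-average to the three emotion classes so that "others" is excluded:

```python
# Minimal sketch of the metric computation on hypothetical predictions.
from sklearn.metrics import precision_recall_fscore_support

y_true = ["happy", "sad", "angry", "others", "sad", "happy"]
y_pred = ["happy", "sad", "sad",   "others", "sad", "others"]

p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=["happy", "sad", "angry"], average="micro")
print(f"P={p:.3f} R={r:.3f} F1={f1:.3f}")
```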

    2. Baseline

    The baseline approach is provided by SemEval-2019. For data preprocessing it uses Keras preprocessing techniques, i.e. tokenization, the hashing trick (which converts text to a sequence of indexes), and one-hot encoding of the ground-truth labels. It uses GloVe embeddings, with every row returning a 100-dimensional GloVe embedding as output. For model training, a basic LSTM model is used, with the embedding matrix loaded into the embedding layer.
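    A minimal sketch of baseline-style preprocessing is shown below; it re-implements the hashing trick and one-hot label encoding directly (the actual baseline uses Keras utilities), and the vocabulary size and example sentence are assumptions.

```python
# Minimal sketch of the baseline-style preprocessing: a hashing trick that
# maps each word to an index in a fixed-size vocabulary, plus one-hot
# encoding of the ground-truth label.
import hashlib
import numpy as np

VOCAB_SIZE = 20000
CLASSES = ["happy", "sad", "angry", "others"]

def hashing_trick(text, n=VOCAB_SIZE):
    # index 0 is reserved for padding, so indices fall in 1..n-1
    return [int(hashlib.md5(w.encode()).hexdigest(), 16) % (n - 1) + 1
            for w in text.lower().split()]

def one_hot_label(label):
    vec = np.zeros(len(CLASSES))
    vec[CLASSES.index(label)] = 1.0
    return vec

print(hashing_trick("i am not feeling well"), one_hot_label("sad"))
```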

  5. RESULTS

    This section summarizes the results of several techniques on the dataset, as reported in Table 5.1. Our trained model gives the best performance in terms of F1 score.

    In Table 5.1 we can see that the GloVe model performs better than the others, with an average F1 of 69.63, while the Word2Vec and FastText approaches have lower F1 scores.

    Figure 5.1 Aimens vs baseline

    Figure 5.1 compares our micro-F1 score with the baseline micro-F1 score and clearly shows that our model performs better than the baseline.

    Figure 5.2 Aimens vs baseline AvgPrReF1

    Figure 5.2 compares the precision, recall, and F1 score of our approach with the baseline. The baseline approach already has good recall but poor precision, whereas our approach has good precision as well as good recall, producing a clear difference between the F1 scores.

    Figure 5.3 Comparison of our approach with the baseline approach

    Figure 5.3 compares the angry, sad, and happy scores of our approach with the baseline approach and clearly shows that our approach performs better than the baseline.

  6. CONCLUSION

    In this paper, we proposed a method for detecting and classifying emotions from 3-turn conversations fetched from Twitter. Our method builds word embeddings from pre-trained embeddings, namely glove.twitter.27B, which we chose because it is trained on a huge amount of emotion-laden conversational text. We also contributed to the preprocessing of the data by using different techniques such as normalization (removing repeated characters) and spelling correction. We also tried stop-word removal, but it reduced the accuracy, from which we conclude that stop-word removal is not effective for a contextual task. We then applied the recent bidirectional Long Short-Term Memory (Bi-LSTM) architecture to the problem, taking the word embeddings as input and predicting the emotion. To optimize it further for the selected dataset, we performed hyper-parameter tuning and chose the parameters on which our model performs best on our data. Through these techniques, we achieved an F1 score of 0.7189, which is higher than the compared models. As future work, we plan to extend the approach with hybrid techniques using emotion lexicons and emoticon handling.

  7. REFERENCES

[1] Gupta, Umang, Ankush Chatterjee, Radhakrishnan Srikanth, and Puneet Agrawal. "A sentiment-and-semantics-based approach for emotion detection in textual conversations." arXiv preprint arXiv:1707.06996 (2017).

[2] Semwal, Nancy, Abhijeet Kumar, and Sakthive Narayanan. "Automatic speech emotion detection system using multi-domain acoustic feature selection and classification models." Identity, Security and Behavior Analysis (ISBA), 2017.

[3] Sailunaz, Kashfia, Manmeet Dhaliwal, Jon Rokne, and Reda Alhajj. "Emotion detection from text and speech: a survey." Social Network Analysis and Mining 8, no. 1 (2018): 28.

[4] Kahou, Samira Ebrahimi, Xavier Bouthillier, Pascal Lamblin, Caglar Gulcehre, Vincent Michalski, Kishore Konda, Sebastien Jean et al. "EmoNets: multimodal deep learning approaches for emotion recognition in video." Journal on Multimodal User Interfaces 10, no. 2 (2016): 99-111.

[5] George, Anon, Barathi Ganesh HB, and K. P. Soman. "TeamCEN at SemEval-2018 Task 1: global vectors representation in emotion detection." In Proceedings of the 12th International Workshop on Semantic Evaluation, pp. 334-338, 2018.

[6] Hasan, Maryam, Elke Rundensteiner, and Emmanuel Agu. "Automatic emotion detection in text streams by analyzing Twitter data." International Journal of Data Science and Analytics (2018): 1-17.

[7] Chatterjee, A., Narahari, K. N., Joshi, M., and Agrawal, P. "SemEval-2019 Task 3: EmoContext contextual emotion detection in text." In Proceedings of the 13th International Workshop on Semantic Evaluation, June 2019, pp. 39-48.

[8] S. Hochreiter and J. Schmidhuber. "Long short-term memory." Neural Computation, vol. 9, pp. 1735-1780.

[9] M. Schuster and K. K. Paliwal. "Bidirectional recurrent neural networks." IEEE Transactions on Signal Processing, vol. 45.

[10] Badaro, G., Baly, R., Hajj, H., Habash, N., El-Hajj, W., 2014. "A large scale Arabic sentiment lexicon for Arabic opinion mining." In: Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), pp. 165-173.

[11] Badaro, G., El Jundi, O., Khaddaj, A., Maarouf, A., Kain, R., Hajj, H., El-Hajj, W., 2018. "EMA at SemEval-2018 Task 1: emotion mining for Arabic." In: Proceedings of the 12th International Workshop on Semantic Evaluation, pp. 236-244.

[12] Balikas, G., Amini, M. R., 2016. "TwiSE at SemEval-2016 Task 4: Twitter sentiment classification." arXiv preprint arXiv:1606.04351.

[13] Al-Khatib, A. and El-Beltagy, S. R., 2017. "Emotional tone detection in Arabic tweets." In Proceedings of the 18th International Conference on Computational Linguistics and Intelligent Text Processing.

[14] Shaheen, S., El-Hajj, W., Hajj, H., Elbassuoni, S., 2014. "Emotion recognition from text based on automatically generated rules." In: Data Mining Workshop (ICDMW), 2014 IEEE International Conference on, IEEE, pp. 383-392.

[15] Balikas, G., Amini, M. R., 2016. "TwiSE at SemEval-2016 Task 4: Twitter sentiment classification." arXiv.

[16] Ho, D. T., Cao, T. H., 2012. "A high-order hidden Markov model for emotion detection from textual data." In: Pacific Rim Knowledge Acquisition Workshop, Springer, pp. 94-105.

[17] Paterson, M., Glass, M. R. "The world through Glass: developing novel methods with wearable computing for urban videographic research." Journal of Geography in Higher Education. 2015 Apr 3;39(2):275-87.

[18] Majumder, N., Poria, S., Peng, H., Chhaya, N., Cambria, E., Gelbukh, A. "Sentiment and sarcasm classification with multitask learning." IEEE Intelligent Systems. 2019 Jul 18;34(3):38-43.

[19] M. R. Naqvi, M. Arfan Jaffar, M. Aslam, S. K. Shahzad, M. Waseem Iqbal and A. Farooq, "Importance of big data in precision and personalized medicine," 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 2020, pp. 1-6, doi: 10.1109/HORA49412.2020.9152842.

[20] Ragheb, W., Azé, J., Bringay, S., Servajean, M. "Attention-based modeling for emotion detection and classification in textual conversations." arXiv preprint arXiv:1906.07020. 2019 Jun 14.

[21] Acheampong, F. A., Wenyu, C., Nunoo-Mensah, H. "Text-based emotion detection: advances, challenges, and opportunities." Engineering Reports. 2020: e12189.

[22] Zamkah, A., Hui, T., Andrews, S., Dey, N., Shi, F., Sherratt, R. S. "Identification of suitable biomarkers for stress and emotion detection for future personal affective wearable sensors." Biosensors. 2020 Apr;10(4):40.

[23] Saxena, A., Tripathi, K., Khanna, A., Gupta, D., Sundaram, S. "Emotion detection through EEG signals using FFT and machine learning techniques." (pp. 543-550). Springer, Singapore.

[24] Jan, R., Khan, A. A. "Emotion mining using semantic similarity." In Natural Language Processing: Concepts, Methodologies, Tools, and Applications, 2020 (pp. 1115-1138). IGI Global.

[25] Lai, Y., Zhang, L., Han, D., Zhou, R., Wang, G. "Fine-grained emotion classification of Chinese microblogs based on graph convolution networks." World Wide Web. 2020 Jun 17:1-7.

[26] M. R. Naqvi, M. Aslam, M. W. Iqbal, S. Khuram Shahzad, M. Malik and M. U. Tahir, "Study of block chain and its impact on Internet of Health Things (IoHT): challenges and opportunities," 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 2020, pp. 1-6, doi: 10.1109/HORA49412.2020.9152846.