Systematic Review of Determining Sarcasm in Sentiment Analysis

DOI : 10.17577/IJERTV12IS070049


Fredrick Boafo

Computer Science Dept, Lancaster University Ghana, LUG Accra, Ghana

Solomon Mensah

Department of Computer Science University of Ghana, UG

Accra, Ghana

Abstract – Sentiment analysis assists in determining the opinions of people as far as politics, business and other reviews are concerned. However, sarcastic sentiments become a challenge when they are difficult to identify and detect during textual sentiment classification, and this significantly affects the results of sentiment analysis undertaken on various subjects. This study conducts a systematic literature review of sarcasm in sentiment analysis to identify the trends and work done on sarcasm, to aid researchers' quest to effectively determine sarcastic sentiments. A systematic literature review approach was adopted, considering articles obtained through the authors' selection criteria and other review processes. The study shows that the reviewed papers produced substantial evidence on the classification techniques being employed in existing studies. About 12.5% of the considered articles provided information on their feature selection, and about 12.5% of the papers threw light on the different challenges encountered and the performance evaluation obtained in undertaking sarcasm detection. The study's results provide further enlightenment on the trends in the identification and determination of sarcastic sentiment.

Keywords – Sentiment analysis; Sarcasm; Classification techniques; Feature selection; Opinion poll

  1. INTRODUCTION

Sentiment analysis is the expression of user thoughts and attitudes toward a certain topic or subject on social media and the internet [1]. It is mostly used to aid decision-making in fields such as business, politics, education, and the entertainment sector, among others. Some individuals think that sarcasm is only used for mocking and criticism [2, 3]. The majority of people consider sarcasm to be a witty language that conveys scorn or insult, as well as a language used to playfully correct something or someone. Sarcasm analysis is a difficult undertaking, according to Parmar et al. [3], Bharti et al. [4], and Gamova et al. [5]. The discrepancy between literal and intended meaning is a characteristic of sarcastic thoughts that makes them difficult to discern. Sarcasm is employed often in day-to-day speech and writing, and it is prevalent in online contexts [6]. Sarcasm detection and scrutiny have become a core problem in NLP, and detection of sarcastic sentiment on online media platforms, including Twitter, Facebook, and online blogs, has become critical, as such sentiments go a long way to influence decision-making in organizations [7]. Most researchers ignore sarcasm when undertaking sentiment analysis because they see it as a complex task which consumes a lot of time and effort [3, 8, 9].

Sarcasm in opinion analysis is a sophisticated form of sentiment expression in which a person's stated opinion is directly opposite to what they truly mean [10, 11, 12]. It is typically used to express amusement, or to express rage or disapproval at a certain circumstance [10, 11]. Due to the complexity of sarcasm detection, relatively little research has been undertaken on the topic. This paper therefore conducts a systematic review and an in-depth analysis of sarcasm detection in sentiment analysis over five years to shed light on the trends and the gaps in research. A few selected papers are analyzed and discussed to help us appreciate the trends and work done on the subject matter.

  2. BACKGROUND

Sentiment analysis is a hot topic in artificial intelligence, and academics are putting considerable effort into studies in this area. People's opinions are important when making decisions. If decision-makers do not comprehend the public mood, they cannot lead effectively or make proper decisions. Social media is thriving, and on these channels people's opinions are shared and exposed. What, then, is sentiment analysis? Sentiment analysis, according to Agarwal et al. [13], is the study that examines people's attitudes and sentiments toward things like products and services as expressed in text.

Twitter had 305 million monthly active users in one of the quarters of 2015, and by 2018 the quarterly figure of monthly active users had grown to 335 million [14]. Social networks are extremely relevant information conduits because they may be used to gather and analyse information in real time [15]. Users' feelings and attitudes are reflected in social media data on practically every subject where they can find readers and listeners [16]. Twitter was founded in 2006, and throughout its first year of operation its user base grew quickly; it has more than 500 million registered users and more than 200 million active users each month [17]. All major candidates and political parties now have some sort of presence on social media thanks to the success of Twitter as a campaigning platform. There has been an increase in studies in the fields of social media sentiment analysis and data analytics as a result of the increased use of Twitter by candidates, politicians, and the general public during elections [18]. Uses for sentiment analysis can be found in e-commerce, politics, corporate settings, journalism, and more.

Sarcasm in sentiment analysis is a topic that is being addressed by current sentiment analysis research. To identify sarcasm on Twitter, Manohar and Kulkarni [19] suggested a corpus-based and natural language processing strategy. Others, like Lunando and Purwarianti [20], classified sarcasm using machine learning algorithms with features based on the number of interjection words and on negative information. These inspiring works motivated us to pursue this research topic.

  3. RELATED WORKS

In Bouazizi et al. [21], the researchers developed a technique that effectively categorizes tweets regardless of the topic using a small number of variables. The study examines the usefulness of automatically identifying sarcastic tweets, demonstrating that the accuracy of sentiment analysis can be improved by distinguishing sarcastic from non-sarcastic sentiment.

Two methods were employed by Bharti et al. [22] to identify sarcasm in a text: the occurrence of interjection words and a parsing-based lexicon building algorithm. The methods were contrasted with the most recent state-of-the-art method for sarcasm detection. Using a sentiment study, Lunando and Purwarianti [20] identified two features for sarcasm detection: interjections and negative information. SentiWordNet was employed to categorize sentiment. Using tweets from Twitter, Bhan et al. [23] suggest a system for measuring sarcasm. Several algorithms were developed to identify the impact of sarcasm on texts and produce a score; from the tweets received, various elements are created that contribute to the score. The study creates a separate portal to examine a user's sentence and determine its score. To identify sarcasm in tweets from the Twitter streaming API, Prasad et al. [24] compare various classification algorithms. To achieve the highest accuracy, the best-performing classifier is selected and combined with a variety of pre-processing and filtering methods employing emoji and slang dictionary mapping.
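To make the interjection and negative-information features just described concrete, the following sketch (our illustration, not code from the cited studies; the word lists are assumed purely for demonstration) counts both cue types in a tweet:

INTERJECTIONS = {"wow", "yay", "oh", "aha", "huh", "ugh"}       # assumed list
NEGATIVE_WORDS = {"hate", "terrible", "awful", "worst", "sad"}  # assumed list

def sarcasm_cue_features(tweet):
    # Count interjection words and negative-information words in a tweet,
    # the two feature types used by Lunando and Purwarianti [20].
    tokens = [t.strip("#!,.?").lower() for t in tweet.split()]
    return {
        "interjection_count": sum(t in INTERJECTIONS for t in tokens),
        "negative_count": sum(t in NEGATIVE_WORDS for t in tokens),
    }

print(sarcasm_cue_features("Wow, I just love this terrible traffic!"))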

According to Majumder et al. [25], information in sarcasm detection can be important to sentiment classification and vice versa. In order to improve the performance of both tasks in a multitask learning environment, the research demonstrates the correlation between the two tasks and presents a multitask-learning-based framework utilizing a deep neural network that represents the association. The trends of sarcasm detection and the suggested techniques are studied by Razali et al. [26]; the work focuses on sarcasm recognition and makes the case that sarcasm identification requires more than just text. The Naïve Bayes classification and AdaBoost algorithms were used by Dharmavarapu and Bayana [27] to identify sarcasm on Twitter. The Naïve Bayes method divides tweets into sarcastic and non-sarcastic categories, whereas the AdaBoost algorithm uses iterative consideration of the training data to convert weak learners into strong ones. Bagged gradient boosting is suggested in the paper by Khullar and Singh [28], with particle swarm optimization as the feature selection method; it is contrasted with other classifiers such as gradient boosting and random forest. Mapping between the emoji and acronym dictionaries is performed and part-of-speech labeling is applied; stop words and hashtags are identified and eliminated, and noise in the data is removed using particle swarm optimization. Furthermore, Teh et al. [6] carried out additional research to improve sentiment tools' sensitivity and competence as well as to induce optimization with complex sarcasm detection. Six methods were also suggested by the authors of the study done by Bharti et al. [4] to analyze sarcasm in tweets on Twitter; the experiment's findings were contrasted with some of the state of the art at the time.

  4. RESEARCH METHODOLOGY

    The systematic literature review method provides a way of classifying, exploring, and examining the present research connected to any questions of interest and research areas.

    1. Research Problem

In most situations, a sentiment or an opinion classified during sentiment classification may be a sarcastic sentiment and not carry the exact connotation of the words used. The question, then, is: how can one tell whether a sentiment is indeed positive and not just an ironic statement? In this study, we seek to pinpoint various sarcasm detection methods and approaches in sentiment analysis, particularly the classification models used, the challenges encountered, and the techniques employed, among others.

    2. Research Questions

      • What are the classification techniques that could be employed in undertaking sentiment analysis?

        Motivation:

The motivation here is to find out the popular classification techniques considered, especially when addressing sarcasm in sentiment analysis. We would like to know the different classifiers that are considered by researchers and the most widely used technique. For instance, the paper produced by Prasad et al. [24] helped us identify different classifiers and then proposed a simple and widely used technique.

      • What are the feature selection techniques that could be

        used?

        Motivation:

To help us know some feature selection approaches considered when undertaking sentiment analysis. These may include non-textual and textual feature selection approaches. This would also help us know the set of features in a preprocessed text, such as unigrams. Consequently, we will get to know some proposed sets of features.

      • What type of dataset or what dataset could be used?

        Motivation:

Different datasets are considered when undertaking sentiment analysis. What are these datasets? A dataset may be a set of publicly available tweets, or privately obtained tweets that are classified and manually annotated by humans. The dataset may also pertain to a particular topic, which we would want to identify.

      • What are the challenges that could be encountered

        when undertaking sarcasm in sentiment analysis?

        Motivation:

Bharti et al. [4] indicated that sarcasm detection is a very challenging task. We would want to find out the challenges that make detecting sarcasm in sentiment analysis very tedious.

      • What performance evaluation could be obtained?

      Motivation:

      We would want to ascertain the prediction performance obtained using different evaluation measures. In the examination of the prediction performance in text mining, we may have four possible outcomes namely true positives, true negatives, false positives, and false negatives. The computation may be based on precision, recall, f-score and accuracy.
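As a minimal sketch of how these measures are typically computed from the four confusion-matrix outcomes (the counts below are invented for illustration only):

def evaluate(tp, tn, fp, fn):
    # Standard measures derived from true/false positives and negatives.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if (precision + recall) else 0.0)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"precision": precision, "recall": recall,
            "f_score": f_score, "accuracy": accuracy}

print(evaluate(tp=85, tn=80, fp=15, fn=20))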

    3. Research Boundaries

      In this review, the authors reflect on how sarcasm is detected when conducting sentiment analysis. So, the population that will be observed consists of publications that take sarcasm into account while performing sentiment analysis on an opinion survey.

    4. Review Method

The research protocol serves as the foundation for the review technique, and it is in this part that the search strategy, sources, studies to be chosen, and how to execute those choices are specified. This section's goal is to list the resources that will be used to conduct primary study searches. Based on the following criteria, all sources utilized in this study were analyzed:

      1. All publications are between 2015 and 2020.

      2. Journal Publications, conferences, magazines and books.

3. For the year 2020, only journal articles should be considered.

      4. Papers with most of the keywords in the title.

      5. Publications whose title contains sarcasm and sentiment analysis.

      6. Publications whose abstract provide much enlightenment on sarcasm in sentiment analysis.

    5. Classification of Papers

The review technique is built on the research protocol, which also outlines the search strategy, sources, studies to be used, and how to carry out those selections. The purpose of this part is to provide the list of sources that were consulted for primary study searches. The following databases were used:

        1. IEEE Xplore

        2. Oxford Academic

        3. The ACM digital library

        4. Science Direct

        5. Scopus

6. Elsevier books

Table 1 presents the distribution of the search results from each of the stated databases. The majority of the articles were obtained from IEEE Xplore, followed by Scopus.

    6. Research Process

The databases chosen in this research study included publishers' sites which host the published studies in their databases. The search strings used in the databases were based on keywords; where few results were obtained, alternative words from the research questions were used. Some keywords were concatenated to form a search string. All the selected databases were searched using the search string "Sarcasm in sentiment analysis".

    7. Publication and Primary Study Selection

      1. Inclusion criteria

The papers considered concentrate on sarcasm in sentiment assessments. The chosen articles must be full-text articles and must be accessible in English. Conference, journal, magazine, or book articles are anticipated. Studies providing empirical assessments were given more weight. The intention was not to rate any work but to ascertain the importance of the work according to the domain proposed. The studies to be searched are papers from IEEE, the ACM Digital Library, Science Direct, Elsevier, Scopus, and Oxford Academic.

      2. Exclusion criteria

The exclusion criteria consisted of eliminating duplicate articles obtained from the different databases. Studies that did not provide sufficient detail about sarcasm in sentiment analysis were also excluded.

Table 1 Distribution of search results obtained from different publishers' sites.

Database            | Search results of papers obtained | Final results after exclusion mechanisms applied
IEEE Xplore         | 47                                | 7
ACM Digital Library | 2                                 | 2
Science Direct      | 263                               | 1
Scopus              | 40                                | 6
Elsevier books      | 0                                 | 0
Oxford Academic     | 20                                | 0
Total               | 368                               | 16

8. Studies Selection

To lessen the potential for bias, it is vital to clarify the procedure and the standards for selecting and evaluating research after the sources have been identified. The protocol definition process should include deciding on the selection criteria. The research questions serve as the foundation for the inclusion and exclusion criteria. Hence, the researchers determined that studies must present recent initiatives (dating back no more than five years) that take into account the different types of discussion concerning sarcasm in sentiment analysis. The keywords are sarcasm, classification methods, feature selection, sentiment analysis, and opinion poll. The search term "Sarcasm in sentiment analysis" was used on each publisher's website. In IEEE Xplore, conference papers and journals were considered, and 47 such papers showed up in the search. A resolution was consequently made to consider both journal and conference papers together with a magazine. Further exclusion was done using conditional formatting in a Microsoft Excel spreadsheet with the keywords sarcasm and sentiment analysis. In addition, exclusion was carried out by going through the abstract of each of these articles to determine whether it contained the needed information as far as the topic and the keywords were concerned. Seven (7) papers were obtained from this search. From the search using the keywords sarcasm and sentiment analysis in the ACM database, only 2 conference papers published from 2015 to 2020 were obtained. No search result was obtained from Elsevier. Forty (40) articles were obtained from the search in the Scopus database; Excel conditional formatting was applied using highlight-cell rules for text containing both sarcasm and sentiment analysis. The authors retrieved five (5) publications from this exercise, but the paper "Hybrid method for sarcasm target identification to assist the sentiment analysis systems" could not be accessed for free and was hence excluded. In the Oxford Academic database, 20 papers were retrieved from the search; however, an advanced search with emphasis on the keywords sarcasm and sentiment analysis did not yield any results. Of the 263 results obtained from Science Direct, only one article was considered after further exclusion based on article titles containing both sarcasm and sentiment analysis. This review got rid of duplicate papers from the different databases; all downloaded ACM Digital Library papers were also found in IEEE and were hence discarded. Twelve (12) different papers from all the other databases were considered for the study. Fig. 1 depicts the statistics on the final papers from the different databases primarily considered before the extraction criteria were applied, whilst Fig. 2 shows the percentage statistics of papers obtained after the application of our inclusion and exclusion criteria. Fig. 3 shows the selection process used to derive our papers and, lastly, Fig. 4 is a graph showing the scores of research questions answered.
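The keyword screening described above was performed with Excel conditional formatting; a roughly equivalent sketch in Python is shown below (the file name and column names are assumptions, not the authors' actual export):

import csv

KEYWORDS = ("sarcasm", "sentiment analysis")

def passes_screen(record):
    # Keep a record only if every keyword appears in its title or abstract.
    text = (record.get("Title", "") + " " + record.get("Abstract", "")).lower()
    return all(k in text for k in KEYWORDS)

# "search_results.csv" is a hypothetical export of the database search results.
with open("search_results.csv", newline="", encoding="utf-8") as f:
    kept = [row for row in csv.DictReader(f) if passes_screen(row)]

print(len(kept), "records retained after keyword screening")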

Fig. 1 Database search results of overall papers obtained and after the application of inclusion and exclusion criteria

Fig. 2 Statistics on papers obtained after inclusion and exclusion criteria applied (IEEE 44%, Scopus 38%, ACM Digital Library 12%, Science Direct 6%)

Fig. 3 Chronology of the selection process (SLR protocol)

Fig. 4 Scores of papers that were considered based on research questions

  5. RESULTS AND DISCUSSION

    RQ1: Which classification techniques are employed in undertaking sentiment analysis?

Four (4) papers, namely [LT1, LT11, LT16, LT14] (see Table 2 in the appendix), were considered for this question. In the research conducted by Bouazizi et al. [21], the classification was performed using Naive Bayes, Support Vector Machine (SVM), and Maximum Entropy classifiers [LT1]. In the paper produced by Prasad et al. [24], the sarcasm detection in the proposed model is done using classifiers such as Decision Tree, Random Forest, Gradient Boosting, Adaptive Boosting, Logistic Regression, and Gaussian Naive Bayes; it was concluded that the Decision Tree classifier is a simple and widely used classification technique [LT11]. Majumder et al. [25] applied a final softmax classification [LT16]. In their study, Dharmavarapu and Bayana [27] also employed AdaBoost and Naïve Bayes classification [LT14].
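As an illustration of the kind of pipeline such classifiers fit into (a sketch of ours, not a reproduction of any cited model; scikit-learn, the toy tweets, and the parameters are assumptions), the example below trains a Decision Tree, the classifier Prasad et al. [24] single out as simple and widely used, on TF-IDF features:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Toy labelled tweets (1 = sarcastic, 0 = non-sarcastic), invented for illustration.
tweets = ["I just love being ignored all day",
          "Great, another Monday morning",
          "The concert last night was amazing",
          "Thanks for the quick delivery"]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      DecisionTreeClassifier(random_state=0))
model.fit(tweets, labels)
print(model.predict(["Wow, I really enjoy waiting on hold for hours"]))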

    RQ2: Which dataset(s) are considered for sentiment analysis?

Five (5) papers, namely [LT1, LT2, LT11, LT16, LT15] (see Table 2 in the appendix), were considered for this question. Bouazizi et al. [21] collected a set of publicly available tweets classifiable by humans and manually annotated them as positive or negative. Tweets were selected to belong to one of the following topics: politics, phone reviews, sports, movie reviews, and electronic products [LT1]. Bharti et al. [22] collected a training database of 50,000 tweets obtained with the sarcasm hashtag (#sarcasm) from Twitter using keywords such as love, amazing, good, hate, sad, happy, bad, hurt, awesome, excited, nice, great, and sick. For testing, tweets were collected in two categories: (i) tweets with the sarcasm hashtag and (ii) tweets without a hashtag [LT2]. In the study by Prasad et al. [24], the dataset contains a collection of about 2000 pre-classified tweets with class labels of 1 or 0, where 1 means sarcastic and 0 means non-sarcastic. The dataset contains two columns, Tweet and Label: the Tweet column contains the tweet, and the Label column contains a binary label indicating whether the tweet is sarcastic or not [LT11]. According to Majumder et al. [25], their dataset consisted of 994 samples, each containing a text snippet labelled with a sarcasm tag, a sentiment tag, and the eye-movement data of seven readers; the authors ignored the eye-movement data in the experiments. Of those samples, 383 are positive and 350 are sarcastic [LT16]. In the paper produced by Suhaimin et al. [29], they considered a manually curated dataset in which the tweets are manually labelled as sarcastic or non-sarcastic based on human intuition, yielding an accurate dataset for training. This manually curated dataset is one of the contributions of that paper; it contains a collection of about 1000 pre-classified tweets [LT15].
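A minimal sketch of loading a dataset with the two-column Tweet/Label structure described for [LT11] is given below (the file name is hypothetical and pandas is assumed to be available):

import pandas as pd

# Hypothetical CSV with the structure described above: a "Tweet" column and a
# binary "Label" column (1 = sarcastic, 0 = non-sarcastic).
df = pd.read_csv("sarcasm_tweets.csv")
print(df.shape)                    # roughly 2000 rows, 2 columns in the cited setting
print(df["Label"].value_counts())  # balance of sarcastic vs non-sarcastic tweets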

    RQ3: Which feature selection is being used?

Two (2) papers, namely [LT5, LT7] (see Table 2 in the appendix), were considered for this question. Different papers use different feature selection approaches. In the feature selection of the paper produced by Porwal et al. [2], two types of features were extracted:

    1. Non-textual features: From the raw tweets they first extracted 6 features by counting the number of positive and negative Hashtags, that of positive and negative Emoticons, and that of positive and negative slang words.

2. Textual features: After extraction of the non-textual features, several features are taken from the pre-processed text: unigrams, negativity, and the number of interjection words [LT5]. To measure sarcasm accurately, Bhan et al. [23] proposed a set of features, namely n-grams, sentiments, topics, POS tags, and capitalization. Their system uses the SentiWordNet dictionary to assign negative and positive scores to each word and store them using the word's POS-ID. Using the above features, they trained their topic modeller on all tweets, generated the features for all tweets, and then trained a classifier using these features [LT7].
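To make these feature groups concrete, the following sketch (ours; the emoticon and interjection lists are assumptions) extracts a few non-textual counts and simple textual features of the kinds listed above:

from collections import Counter

POSITIVE_EMOTICONS = {":)", ":-)", ":D"}    # assumed lists for illustration
NEGATIVE_EMOTICONS = {":(", ":-(", ":'("}
INTERJECTIONS = {"wow", "yay", "ugh", "oh"}

def extract_features(tweet):
    tokens = tweet.split()
    words = [t.lower().strip("#!,.") for t in tokens]
    return {
        "hashtag_count": sum(t.startswith("#") for t in tokens),
        "positive_emoticons": sum(t in POSITIVE_EMOTICONS for t in tokens),
        "negative_emoticons": sum(t in NEGATIVE_EMOTICONS for t in tokens),
        "interjection_count": sum(w in INTERJECTIONS for w in words),
        "all_caps_words": sum(t.isupper() and len(t) > 1 for t in tokens),
        "unigrams": Counter(words),
    }

print(extract_features("WOW I #love standing in the rain :("))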

      RQ4: What are the challenges that could be encountered when undertaking sarcasm in sentiment analysis?

One paper, namely [LT8] (see Table 2 in the appendix), answered this question. The study conducted by Khullar and Singh [28] identified the following challenges with sarcasm detection:

1. Sarcasm might be used indirectly; moreover, the authors might employ a type of incongruity, which makes it hectic and tedious to comprehend the sentiments.

2. Sarcastic tweets communicate a negative estimation using positive words. In this way, a classifier would erroneously assign sentiments to these tweets.

      3. There is a wide usage of slang words, abbreviations, smileys, special symbols, and unstructured data which makes it quite tedious to identify sentiments [LT8].

RQ5: Which performance evaluation could be obtained?

Three (3) papers, namely [LT1, LT4, LT16] (see Table 2 in the appendix), were considered for this question. Bouazizi et al. [21] compared their proposed method to a baseline represented by the n-grams model. They evaluated the two methods using one key performance indicator (KPI), namely accuracy. The results showed that their approach outperforms the baseline; they obtained an accuracy exceeding 80% with all three algorithms. However, the SVM's accuracy is better than that of Naive Bayes and Maximum Entropy [LT1]. In the study conducted by Bharti et al. [22], the first approach attains 0.89 precision, 0.81 recall, and 0.84 F-score, while the second approach attains 0.85 precision, 0.96 recall, and 0.90 F-score on tweets with the sarcastic hashtag [LT4]. Majumder et al. [25] stated that their method outperformed the state of the art by 3-4% on the benchmark dataset [LT16].

  6. THREATS TO VALIDITY

Since the researchers considered only a few databases, with few papers being obtained and considered for the research questions, there is a possibility of a narrowed scope of research. Papers that were not written in English were not considered, which implies the authors will miss key information in articles written in other languages. Inevitably, there were biases, since some articles were not selected because their abstracts and conclusions did not convey our expectations. Since the motivations behind the research questions are subjective, there are possibilities of data-extraction inaccuracy and data-synthesis biases. Because only a few individuals carried out this research, limitations are probable, as the knowledge domain in the subject matter may not be as broad as expected.

  7. CONCLUSION

Research in sentiment analysis affirms how tedious the determination of sarcasm in sentiment is. A number of works have been done on this topic using different techniques, classifiers, and methodologies. The trends have been studied and the basic questions asked have been answered, taking into consideration selected articles from different databases. The authors looked at the classifiers used, the challenges encountered, the datasets used, the performance evaluation, and the feature selection used. The study shows that 25% of the reviewed papers provide details on the classification techniques being used, 31.25% deliver more details on the datasets and preprocessing techniques, 12.5% offer detailed information on their feature selection, 6.25% throw light on the different challenges encountered, and 18.75% on the performance evaluation obtained in undertaking sarcasm detection. Our results indicate that much has not been done in the area of sarcasm in sentiment analysis. Therefore, more research on determining sarcastic sentiment in sentiment analysis can be considered using different state-of-the-art techniques, classifiers, tools, and methodologies.

The study's results produced further enlightenment and trends on the identification and determination of sarcastic sentiment.

REFERENCES

[1] K. Sundararajan and A. Palanisamy, "Multi-Rule Based Ensemble Feature Selection Model for Sarcasm," vol. 2020, 2020.
[2] S. Porwal, G. Ostwal, A. Phadtare, M. Pandey, and M. V. Marathe, "Sarcasm Detection Using Recurrent Neural Network," Proc. 2nd Int. Conf. Intell. Comput. Control Syst. (ICICCS 2018), pp. 746-748, 2019.
[3] K. Parmar, N. Limbasiya, and M. Dhamecha, "Feature based Composite Approach for Sarcasm Detection using MapReduce," Proc. 2nd Int. Conf. Comput. Methodol. Commun. (ICCMC 2018), pp. 587-591, 2018.
[4] D. K. Bharti, R. Pradhan, K. S. Babu, and S. K. Jena, "Sarcastic Sentiment Detection Based on Types of Sarcasm Occurring in Twitter Data," 2017.
[5] A. A. Gamova, A. A. Horoshiy, and V. G. Ivanenko, "Detection of Fake and Provokative Comments in Social Network Using Machine Learning," 2020 IEEE Conf. Russ. Young Res. Electr. Electron. Eng., pp. 309-311, 2020.
[6] P. L. Teh, O. P. Boon, N. N. Chan, and Y. K. Chuah, "A comparative study of the effectiveness of sentiment tools and human coding in sarcasm detection," pp. 0-15, 2018.
[7] S. K. Bharti, R. Naidu, and K. S. Babu, "Hyperbolic Feature-based Sarcasm Detection in Tweets: A Machine Learning Approach," 2017 14th IEEE India Counc. Int. Conf. (INDICON 2017), 2018.
[8] D. A. P. Rahayu, S. Kuntur, and N. Hayatin, "Sarcasm detection on Indonesian twitter feeds," Int. Conf. Electr. Eng. Comput. Sci. Informatics, vol. 2018-Octob, pp. 137-141, 2018.
[9] P. Chaudhari and C. Chandankhede, "Literature survey of sarcasm detection," Proc. 2017 Int. Conf. Wirel. Commun. Signal Process. Networking (WiSPNET 2017), vol. 2018-Janua, pp. 2041-2046, 2018.
[10] S. Rendalkar and C. Chandankhede, "Sarcasm Detection of Online Comments Using Emotion Detection," Proc. Int. Conf. Inven. Res. Comput. Appl. (ICIRCA 2018), pp. 1244-1249, 2018.
[11] Y. Diao, H. Lin, L. Yang, X. Fan, Y. Chu, and D. Wu, "A Multi-Dimension Question Answering Network for Sarcasm Detection," vol. 4, 2020.
[12] A. Kumar, V. T. Narapareddy, V. A. Srikanth, A. Malapati, and L. B. M. Neti, "Sarcasm Detection Using Multi-Head Attention Based Bidirectional LSTM," IEEE Access, vol. 8, pp. 6388-6397, 2020.
[13] B. Agarwal, N. Mittal, P. Bansal, and S. Garg, "Sentiment Analysis Using Common-Sense and Context Information," Comput. Intell. Neurosci., vol. 2015, pp. 1-9, 2015.
[14] J. C. Losada and R. M. Benito, "Recurrent Patterns of User Behavior in Different Electoral Campaigns: A Twitter Analysis of the Spanish General Elections of 2015 and 2016," vol. 2018, 2018.
[15] M. A. Paredes-Valverde, R. Colomo-Palacios, M. P. Salas-Zárate, and R. Valencia-García, "Sentiment Analysis in Spanish for Improvement of Products and Services: A Deep Learning Approach," vol. 2017, 2017.
[16] Y. Wang, K. Kim, B. Lee, and H. Y. Youn, "Word clustering based on POS feature for efficient twitter sentiment analysis," Human-centric Comput. Inf. Sci., 2018.
[17] A. Romanowski, "Sentiment Analysis of Twitter Data within Big Data Distributed Environment for Stock Prediction," vol. 5, pp. 1349-1354, 2015.
[18] W. Park, "Sentiment based Analysis of Tweets during the US Presidential Elections," 2017.
[19] M. Y. Manohar and P. Kulkarni, "Improvement sarcasm analysis using NLP and corpus based approach," Proc. 2017 Int. Conf. Intell. Comput. Control Syst. (ICICCS 2017), vol. 2018-Janua, pp. 618-622, 2018.
[20] E. Lunando and A. Purwarianti, "Indonesian social media sentiment analysis with sarcasm detection," 2013 Int. Conf. Adv. Comput. Sci. Inf. Syst. (ICACSIS 2013), pp. 195-198, 2013.
[21] M. Bouazizi and T. Ohtsuki, "Opinion Mining in Twitter: How to Make Use of Sarcasm to Enhance Sentiment Analysis," 2015 IEEE/ACM Int. Conf. Adv. Soc. Networks Anal. Min. (ASONAM 2015), pp. 1594-1597, 2015.
[22] S. K. Bharti, K. S. Babu, and S. K. Jena, "Parsing-based sarcasm sentiment recognition in Twitter data," Proc. 2015 IEEE/ACM Int. Conf. Adv. Soc. Networks Anal. Min. (ASONAM 2015), pp. 1373-1380, 2015.
[23] N. Bhan and M. DSilva, "Sarcasmometer using sentiment analysis and topic modeling," Int. Conf. Adv. Comput. Commun. Control (ICAC3 2017), vol. 2018-Janua, pp. 1-6, 2018.
[24] A. G. Prasad, S. Sanjana, S. M. Bhat, and B. S. Harish, "Sentiment analysis for sarcasm detection on streaming short text data," 2017 2nd Int. Conf. Knowl. Eng. Appl. (ICKEA 2017), vol. 2017-Janua, pp. 1-5, 2017.
[25] N. Majumder, S. Poria, H. Peng, N. Chhaya, E. Cambria, and A. Gelbukh, "Sentiment and Sarcasm Classification with Multitask Learning," IEEE Intell. Syst., vol. 34, no. 3, pp. 38-43, 2019.
[26] M. S. Razali, A. A. Halin, N. M. Norowi, and S. C. Doraisamy, "The importance of multimodality in sarcasm detection for sentiment analysis," IEEE Student Conf. Res. Dev. (SCOReD 2017), vol. 2018-Janua, pp. 56-60, 2018.
[27] B. D. Dharmavarapu and J. Bayana, "Sarcasm Detection in Twitter using Sentiment Analysis," no. 1, pp. 642-644, 2019.
[28] H. Khullar and A. Singh, "A Proposed Approach for Sentiment Analysis and Sarcasm Detection on Textual Data," no. 1, pp. 3387-3391, 2019.
[29] M. S. Suhaimin, M. Hanafi, A. Hijazi, R. Alfred, and F. Coenen, "Modified framework for sarcasm detection and classification in sentiment analysis," vol. 13, no. 3, pp. 1175-1183, 2019.

Table 2 Selected articles using SLR

Tracking code | Citation | Source
LT1 | M. Bouazizi and T. Ohtsuki, "Opinion Mining in Twitter: How to Make Use of Sarcasm to Enhance Sentiment Analysis," ASONAM 2015, pp. 1594-1597, 2015. | IEEE/ACM
LT2 | D. K. Bharti, R. Pradhan, K. S. Babu, and S. K. Jena, "Sarcastic Sentiment Detection Based on Types of Sarcasm Occurring in Twitter Data," 2017. | Scopus
LT3 | P. L. Teh, O. P. Boon, N. N. Chan, and Y. K. Chuah, "A comparative study of the effectiveness of sentiment tools and human coding in sarcasm detection," pp. 0-15, 2018. | Scopus
LT4 | M. Bouazizi and T. Otsuki, "A Pattern-Based Approach for Sarcasm Detection on Twitter," IEEE Access, vol. 4, pp. 5477-5488, 2016. | IEEE
LT5 | S. Porwal, G. Ostwal, A. Phadtare, M. Pandey, and M. V. Marathe, "Sarcasm Detection Using Recurrent Neural Network," ICICCS 2018, pp. 746-748, 2019. | IEEE
LT6 | S. Rendalkar and C. Chandankhede, "Sarcasm Detection of Online Comments Using Emotion Detection," ICIRCA 2018, pp. 1244-1249, 2018. | IEEE
LT7 | N. Bhan and M. DSilva, "Sarcasmometer using sentiment analysis and topic modeling," ICAC3 2017, pp. 1-6, 2018. | IEEE
LT8 | H. Khullar and A. Singh, "A Proposed Approach for Sentiment Analysis and Sarcasm Detection on Textual Data," pp. 3387-3391, 2019. | Scopus
LT9 | Y. Wang, K. Kim, B. Lee, and H. Y. Youn, "Word clustering based on POS feature for efficient twitter sentiment analysis," Human-centric Comput. Inf. Sci., 2018. | Springer
LT10 | S. K. Bharti, K. S. Babu, and S. K. Jena, "Parsing-based sarcasm sentiment recognition in Twitter data," ASONAM 2015, pp. 1373-1380, 2015. | IEEE/ACM
LT11 | A. G. Prasad, S. Sanjana, S. M. Bhat, and B. S. Harish, "Sentiment analysis for sarcasm detection on streaming short text data," ICKEA 2017, pp. 1-5, 2017. | IEEE
LT12 | A. G. Prasad, S. Sanjana, S. M. Bhat, and B. S. Harish, "Sentiment analysis for sarcasm detection on streaming short text data," ICKEA 2017, pp. 1-5, 2017. | IEEE
LT13 | M. S. Razali, A. A. Halin, N. M. Norowi, and S. C. Doraisamy, "The importance of multimodality in sarcasm detection for sentiment analysis," SCOReD 2017, pp. 56-60, 2018. | IEEE/ACM
LT14 | B. D. Dharmavarapu and J. Bayana, "Sarcasm Detection in Twitter using Sentiment Analysis," pp. 642-644, 2019. | Scopus
LT15 | M. S. Suhaimin, M. Hanafi, A. Hijazi, R. Alfred, and F. Coenen, "Modified framework for sarcasm detection and classification in sentiment analysis," vol. 13, no. 3, pp. 1175-1183, 2019. | Scopus
LT16 | N. Majumder, S. Poria, H. Peng, N. Chhaya, E. Cambria, and A. Gelbukh, "Sentiment and Sarcasm Classification with Multitask Learning," IEEE Intell. Syst., vol. 34, no. 3, pp. 38-43, 2019. | IEEE