Face Emotion Recognition Application

DOI: 10.17577/IJERTCONV11IS04009


  • Open Access
  • Authors : Adithya Sajikumar, Athul A. Nair, Christy Mol G. Varghese, Jiby Wilson, Josmy George
  • Paper ID : IJERTCONV11IS04009
  • Volume & Issue : Volume 11, Issue 04
  • Published (First Online): 01-07-2023
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License: This work is licensed under a Creative Commons Attribution 4.0 International License


Face Emotion Recognition Application

Adithya Sajikumar

Department of Computer Science Engineering

Mar Baselios Christian College of Engineering and Technology Kuttikanam, India

Athul A. Nair

Department of Computer Science Engineering

Mar Baselios Christian College of Engineering and Technology Kuttikanam, India

Christy Mol G. Varghese

Department of Computer Science Engineering

Mar Baselios Christian College of Engineering and Technology Kuttikanam, India

Jiby Wilson

Department of Computer Science Engineering

Mar Baselios Christian College of Engineering and Technology Kuttikanam, India

Josmy George

Assistant Professor

Department of Computer Science Engineering

Mar Baselios Christian College of Engineering and Technology Kuttikanam, India

Abstract: Face Emotion Recognition (FER) is the technique of identifying and examining facial expressions to ascertain an individual's emotional state. It uses computer vision and machine learning techniques, such as Deep Neural Networks (DNNs), to analyze facial images and extract the features needed for emotion recognition. A smartphone with a facial expression system could offer engaging applications for tracking a user's mood throughout the day, or serve as a tool for monitoring daily emotion in psychological studies. Traditional facial expression algorithms, however, typically require substantial computational power and can only be run offline on a computer. Here, we propose a solution to the issues that current face emotion applications face: we recognize all types of emotions, including neutral, using deep neural networks for higher accuracy, and we also recommend music, jokes, films, and other media based on the detected emotion.

Keywords: FER (Face Emotion Recognition), DNN (Deep Neural Network)

  1. INTRODUCTION

    Affective computing, which draws on human facial expressions, is crucial for social communication applications. Facial emotion recognition is the practice of determining human emotions from images or videos. The human brain can detect human emotion quickly and accurately, and the method used to mimic this ability is known as a Face Emotion Recognition (FER) system. The FER system is used to categorize human emotions based on diverse facial expressions, such as pleased, sad, astonished, afraid, and disgusted. Its three main components are preprocessing, facial feature extraction, and emotion categorization. A smile conveys happiness and gives the eyes a curved shape, while skewed eyebrows can convey a dejected face.

    Humans convey anger by tensing, raising, and lowering their eyebrows. Human emotions can be categorized using facial feature expressions such as these.

    Introducing our new face emotion recognition application. Our program detects and analyzes facial expressions to give users insight into their emotions. To deliver precise and immediate emotional insights, the software uses deep learning techniques, feature extraction, and facial landmark identification. It is also simple to use and open to everyone thanks to its user-friendly interface. The application can precisely identify a wide range of emotions, including happiness, sadness, anger, fear, and surprise, which makes it suitable for a variety of purposes, such as marketing research and mental health diagnosis and therapy.

  2. LITERATURE REVIEW

    [1] The article "A Review on Different Facial Feature Extraction Methods for Face Emotions Recognition System" by Viha Upadhyay and Prof. Devangi Kotak reviews various facial feature extraction methods and shows how the methodologies compare with one another. Finally, it highlights potential future research directions that could help improve the accuracy and effectiveness of FER.

    [2] In "Facial emotion recognition using deep learning: review and insights," Wafa Mellouk and Wahida Handouzi survey recent advances in FER using deep learning. The paper aims to assist and direct scholars by reviewing recent research and offering suggestions for advancing the topic; it provides an overview of recent progress in emotion detection and facial expression identification using several deep learning architectures.

    [3] "Emotion Recognition from Text Stories Using an Emotion Embedding Model" by Seo-Hui Park, Byung-Chull Bae, and Yun-Gyung Cheong proposes an emotion embedding-based learning model and reports results on extracting emotional terms from a collection of text stories and detecting emotions with the proposed model. Here, "emotion embedding model" refers to a learned embedding layer in a CNN-based emotion classification process.

    [4] "Interviewee Performance Analyzer Using Facial Emotion Recognition and Speech Fluency Recognition" by Yashwanth Adepu, Vishwanath R. Boga, and Sairam U describes a machine that can analyze a candidate's performance in an interview like a human; the authors developed a system through which an interview can be rated.

    [5] A multimodal emotion identification system is covered in "Multi-Modal Emotion Recognition from Speech and Facial Expression Based on Deep Learning" by Linqin Cai, Jiangong Dong, and Min Wei. The system's fusion benefits from the complementary information of audiovisual features, and several small-scale kernel convolution blocks are designed to extract facial emotion features. Compared with unimodal emotion recognition, this model produces more accurate recognition results.

    [6] An innovative method described by Danai Styliani Moschona in "An Affective Service based on Multi-Modal Emotion Recognition, using EEG enabled Emotion Tracking and Speech Emotion Recognition" may provide useful insight into a person's emotional state and inspire fresh research into the relationship between speech and brain waves.

    [7] In "Two Stage Emotion Recognition using Frame-level and Video-level Features," Carla Viegas compares a seven-class classification problem with a two-stage classification in which the seven emotion classes are reduced to three (positive, neutral, negative).

    [8] "Facial Emotion Recognition Using Machine Learning" by Nitisha Raut discusses feature extraction and the analysis of machine learning approaches on the dataset. Algorithms such as logistic regression, linear discriminant analysis, and the random forest classifier can be fine-tuned to achieve good accuracy and results.

    [9] In "Facial Emotion Recognition," Ma Xiaoxi, Lin Weisi, Huang Dongyan, Dong Minghui, and Haizhou Li examine various learning techniques, using a Deep Boltzmann Machine (DBM) and a Support Vector Machine (SVM) to recognize facial expressions of emotion.

    [10] "A Face Emotion Recognition Method Using Convolutional Neural Network and Image Edge Computing" by Hongli Zhang, Alireza Jolfaei, and Mamoun Alazab discusses a CNN-based facial expression recognition technique that successfully extracts facial features; it alleviates the incompleteness caused by hand-designed features by automatically learning pattern features.

  3. PROPOSED WORK

    1. PROBLEM STATEMENT

      Face emotion identification faces several difficulties due to lighting fluctuations, high dimensionality, uncontrolled settings, pose variations, and aging. Although illumination variation remains a challenge, FER has made significant progress in accuracy in recent years to address these issues. Human facial expressions can be categorized into seven basic emotional states: joyful, sad, surprised, fearful, angry, disgusted, and neutral. When we communicate our emotions through our faces, certain sets of facial muscles are consciously engaged; these occasionally complicated signals in our facial expressions reveal a wealth of information about how we are feeling. The quality of the data used in AI and ML models presents another difficulty: incomplete, inconsistent, or biased data can cause models to produce inaccurate or biased results even with vast volumes of data. Assuring data security and privacy is another crucial difficulty in deploying AI and ML techniques; businesses must make sure that sensitive information is properly safeguarded and that only authorized staff can access it.

    2. PROPOSED METHODOLOGY

      Identification of human emotions is the goal of facial emotion recognition (FER). Sentiment can be conveyed through either facial or verbal communication. Face emotion recognition allows us to quickly and cheaply assess how information and services affect people. To enhance the user's current mood, this application recommends songs, jokes, and other engaging content. It can be used in a variety of interesting and practical contexts, including e-learning and patient care in the medical industry. The Deep Neural Network (DNN) is the most popular approach to analyzing images. The proposed methodology consists of three phases: data collection and cleaning, used to extract emotions; then model training and testing; and finally app building, as sketched below.
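      For concreteness, the sketch below illustrates what the model-training phase could look like in Python with Keras: a small convolutional DNN trained on 48x48 grayscale face crops, the layout used by common FER datasets. The directory names, layer sizes, and hyperparameters are illustrative assumptions, not details specified in this paper.

```python
# Minimal sketch of the model-training phase. The dataset layout
# (data/train, data/val with one folder per emotion), the 48x48
# grayscale input size, and all hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # joyful, sad, surprised, fearful, angry, disgusted, neutral

# Folder-per-class datasets; integer labels are inferred from directory names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(48, 48), color_mode="grayscale", batch_size=64)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=(48, 48), color_mode="grayscale", batch_size=64)

# A small convolutional DNN for expression classification.
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                       # regularization against overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # integer labels
              metrics=["accuracy"])

model.fit(train_ds, validation_data=val_ds, epochs=20)
model.save("fer_model.keras")  # reused by the inference sketch later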

    3. SYSTEM DESIGN

      FIGURE 1. SYSTEM ARCHITECTURE (model inference over emotion classes using the machine learning model)

      FIGURE 2. DATA FLOW DIAGRAM

      The definition of a system's architecture, components, modules, interfaces, and data to meet a set of requirements is known as its system architecture. It takes a rigorous and thorough approach to designing a system that satisfies all practical needs, such as adaptability, efficiency, and security. The system architecture for recognizing face emotions is displayed above. Here, we have both model inference and a machine learning model. A camera-captured image is compared against the machine learning model; during inference, the data is processed and an output is computed over the various classes. Classification thus links an unseen example to the expression whose distinctive feature-displacement pattern is most similar. The output emotion is recognized, and the user is given appropriate suggestions, as in the sketch below.
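      A minimal sketch of this inference step, assuming the model trained in the earlier sketch and an alphabetical label order (the order class folders are assigned during training); the suggestion table and file names are likewise illustrative assumptions.

```python
# Minimal sketch of model inference: a grayscale face crop is passed
# through the trained model and the highest-scoring class is taken as
# the recognized emotion. Labels and suggestions are assumptions.
import cv2
import numpy as np
import tensorflow as tf

LABELS = ["angry", "disgusted", "fearful", "happy",
          "neutral", "sad", "surprised"]

SUGGESTIONS = {  # hypothetical emotion-to-content mapping
    "sad": "recommend an uplifting song",
    "happy": "recommend a joke to keep the mood",
    "neutral": "recommend an interesting story",
}

model = tf.keras.models.load_model("fer_model.keras")

def recognize_emotion(face_img):
    """Classify one grayscale face crop and return its emotion label."""
    face = cv2.resize(face_img, (48, 48)).astype("float32")
    batch = face.reshape(1, 48, 48, 1)   # model rescales to [0, 1] internally
    probs = model.predict(batch, verbose=0)[0]
    return LABELS[int(np.argmax(probs))]

# Usage on an assumed sample image file.
emotion = recognize_emotion(cv2.imread("face.png", cv2.IMREAD_GRAYSCALE))
print(emotion, "->", SUGGESTIONS.get(emotion, "no suggestion"))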

      Data flow diagrams use a specific set of graphical representations to illustrate the flow of data, as well as the functions and processes involved in storing, changing, and distributing data between system components and between the system and its environment. A data flow diagram also describes the physical prerequisites for the system's growth and shows how information should flow logically within the system. Its main characteristics are the unambiguous capture of manual and automated system needs and simplicity of notation. First, we take pictures using the camera. The photos are then preprocessed. Since preprocessing reduces image noise, it enhances the performance of facial expression recognition; it includes picture scaling, clarity and contrast adjustments, and additional enhancement steps to improve expression frames. The system should then identify the face and map its features, recognizing and measuring the facial features present in an image, such as the mouth, lips, and eyebrows. Finally, the detected features are compared against the training dataset to determine which emotion the image contains.
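      The capture and preprocessing steps of this data flow could look like the following sketch, which uses OpenCV's bundled Haar cascade as a stand-in face detector (the paper does not name a specific detector).

```python
# Sketch of the capture-and-preprocess steps in the data flow above.
import cv2

# Haar cascade shipped with opencv-python (a stand-in; any detector works).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)              # default camera
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("camera capture failed")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # reduce to intensity values
gray = cv2.equalizeHist(gray)                   # contrast adjustment
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_crop = gray[y:y + h, x:x + w]          # region passed to the classifier
    print("face at", (x, y, w, h), "crop shape", face_crop.shape)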

    4. SYSTEM REQUIREMENTS

      The model and dataset will be used for face emotion detection to assess its performance as a predictor, and a software application will be created using the Python, Flutter, and Dart languages; one possible bridge between them is sketched below.
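      The paper does not specify how the Python model and the Flutter/Dart front end communicate; one plausible arrangement, sketched here purely as an assumption, is a small HTTP endpoint that receives an image from the mobile client and returns the predicted emotion.

```python
# Hypothetical HTTP bridge between the Python model and a Flutter/Dart
# client; the route name and JSON shape are assumptions, not details
# from the paper. recognize_emotion is the helper from the inference
# sketch above, assumed to live in a module named fer.py.
import cv2
import numpy as np
from flask import Flask, request, jsonify

from fer import recognize_emotion  # hypothetical module holding the classifier

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # The mobile client uploads one image as multipart form data.
    raw = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    gray = cv2.imdecode(raw, cv2.IMREAD_GRAYSCALE)
    return jsonify({"emotion": recognize_emotion(gray)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)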


      Resources Required

      • Operating system: Windows 7 or above

      • RAM capacity: minimum 4 GB

      • Processor: Intel i3 or above

      • Graphics card: 2 GB or above

      • Sound card

  4. CONCLUSION

    The ability to recognize facial emotions has advanced significantly in recent years. Recognizing faces and emotions remains challenging, and considerable work is still required to improve facial detection, emotion recognition, and their performance measures. The area of emotion identification is becoming better known as a result of its applications in sectors such as education, software engineering, and gaming. This study provided a thorough overview of the numerous methodologies and techniques used to detect facial expressions and identify emotions. The proposed work gave the application the ability to analyze a person's emotions in a way that is comparable to a human, by capturing where the brows and eyes are, where the mouth is, and how the facial expressions noticeably change.

  5. REFERENCES

[1] V. Upadhyay and D. Kotak, "A Review on Different Facial Feature Extraction Methods for Face Emotions Recognition System," 2020 Fourth International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 2020, pp. 15-19, doi: 10.1109/ICISC47916.2020.9171172.

[2] W. Mellouk and W. Handouzi, "Facial emotion recognition using deep learning: review and insights," Procedia Computer Science, vol. 175, 2020.

[3] S.-H. Park, B.-C. Bae and Y.-G. Cheong, "Emotion Recognition from Text Stories Using an Emotion Embedding Model," 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), Busan, Korea (South), 2020, pp. 579-583, doi: 10.1109/BigComp48618.2020.00014.

[4] Y. Adepu, V. R. Boga and S. U, "Interviewee Performance Analyzer Using Facial Emotion Recognition and Speech Fluency Recognition," 2020 IEEE International Conference for Innovation in Technology (INOCON), Bengaluru, India, 2020, pp. 1-5, doi: 10.1109/INOCON50539.2020.9298427.

[5] L. Cai, J. Dong and M. Wei, "Multi-Modal Emotion Recognition From Speech and Facial Expression Based on Deep Learning," 2020 Chinese Automation Congress (CAC), Shanghai, China, 2020, pp. 5726-5729, doi: 10.1109/CAC51589.2020.9327178.

[6] D. S. Moschona, "An Affective Service based on Multi-Modal Emotion Recognition, using EEG enabled Emotion Tracking and Speech Emotion Recognition," 2020 IEEE International Conference on Consumer Electronics – Asia (ICCE-Asia), Seoul, Korea (South), 2020, pp. 1-3, doi: 10.1109/ICCE-Asia49877.2020.9277291.

[7] C. Viegas, "Two Stage Emotion Recognition using Frame-level and Video-level Features," 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), Buenos Aires, Argentina, 2020, pp. 912-915, doi: 10.1109/FG47880.2020.00143.

[8] N. Raut, "Facial Emotion Recognition Using Machine Learning," Master's Projects, 632, 2018.

[9] M. Xiaoxi, L. Weisi, H. Dongyan, D. Minghui and H. Li, "Facial emotion recognition," 2017 IEEE 2nd International Conference on Signal and Image Processing (ICSIP), Singapore, 2017, pp. 77-81, doi: 10.1109/SIPROCESS.2017.8124509.

[10] H. Zhang, A. Jolfaei and M. Alazab, "A Face Emotion Recognition Method Using Convolutional Neural Network and Image Edge Computing," in IEEE Access, vol. 7, pp. 159081-159089, 2019, doi: 10.1109/ACCESS.2019.2949741.
