Expression Recognition using PCA+ Classifier Technique

DOI : 10.17577/IJERTV3IS042247


Ms. Arati P. Bhadavankar¹, Department of E & TC, SIT Polytechnic, Yadrav

Mr. A. V. Shah², Department of E & TC, TEI, Rajwada Ichalkaranji

Mr. Prashant M. Jadhav³, Department of Electronics, TEI, Rajwada Ichalkaranji

Abstract: Expression is the most powerful way for human beings to display emotion. Expression not only gives information about one's mood but also helps in different applications, such as the medical field, where observing a patient's expression can alert a doctor to his or her pain. But here the question arises whether detecting expression is as easy for a machine as it is for a human. The answer is no, it is not that easy for a machine. The basic steps a machine has to follow for expression analysis are face detection, feature extraction, and classification of expression. Depending on the training sequences, expression detection is typically performed on expressions such as Neutral, Happy, Sad, Anger, Disgust, and Surprise. In this paper the focus is on three expressions: Happy, Sad, and Neutral. This paper also states the classification method used, and the approaches used for preprocessing and feature extraction are discussed, to solve the problem of expression detection by machine.

Keywords: Face detection, feature extraction, classification.

  1. INTRODUCTION

    The intelligence of Human Computer Interaction is one of the innovative research areas, in which automatic facial expression analysis is an interesting and challenging problem, as facial expression recognition is an important part of Human Computer Interaction [1]. The primary goal of expression recognition research is to create a system that can identify a specific human expression from six or seven different emotions, such as fear, anger, happy, sad, disgust, neutral, and surprise, and use it to convey information. By observing facial expression, one can decide whether a person is serious, happy, thinking, sad, feeling pain, and so on. Recognizing a person's expression can help in many areas, such as medical science, where a doctor can be alerted when a patient is in severe pain, and also in the behavioral sciences [9].

    Here the question arises whether a machine, or simply a computer, can perform the same task of expression detection. The answer is that it cannot do so as easily as a human can. The machine has to follow a procedure that is divided into three basic steps. The first is face detection; for this one can use databases available on the internet, such as JAFFE and FG-NET, or create one's own database to skip the face detection step, since various methods are available to perform face detection. If one is using a self-created colored dataset, then preprocessing must be performed. It consists of tasks such as noise reduction, grayscale conversion, and edge detection; MATLAB provides different methods and commands to perform all of these actions. The next step is feature extraction, in which the facial features have to be extracted from the observed facial image. The facial features are the prominent features of the various parts of the face: eyebrows, eyes, nose, mouth, and chin. The final step is to develop a classifier, which will classify a facial expression into one of the basic facial expressions stated above.

    Most methods of facial expression recognition assume that the conditions under which a facial image sequence is obtained are controlled. The human face varies from one person to another; this variation could be due to race, gender, age, and other physical characteristics of an individual. Face detection therefore becomes a challenging task in computer vision, and it becomes even more challenging due to the additional variations in scale, orientation, pose, facial expression, and lighting conditions. Many methods have been proposed to detect faces, such as neural networks, skin locus, and color analysis [9]. Usually the image sequence has the face in frontal view. Once the face is detected from the image sequence, the next step is to extract information about the shown facial expression. Because of the high variability in the types of faces, it is very difficult for the machine to extract facial features; variations in lighting conditions, head movements, non-frontal views, and distractions such as glasses and facial hair make the problem more difficult. Finally, the extracted facial expression information has to be classified into a particular facial action or basic emotion. The techniques used for classification are distance classifiers, template-based methods, neural networks, rule-based methods, and support vector machines (SVM).

  2. RELATED WORK

    Peng Zhao-yi, Zhu Yan-hui, and Zhou Yu proposed a method of real-time facial expression recognition based on adaptive Canny operator edge detection. In this method, face location is first performed based on an adaptive skin color and structure model. Then facial expression feature extraction based on adaptive Canny operator edge detection and the AAM (Active Appearance Model) algorithm is carried out. Finally, they used the least-squares method to recognize expressions [9].

    Jagdish Lal Raheja and Umesh Kumar used an approach based on an AdaBoost classifier for face detection and simple token finding and matching using a back-propagation neural network. This approach can be adapted to a real-time system very easily. Their paper briefly describes the schemes of capturing the image from a web cam, detecting the face, and processing the image to recognize the gestures and expressions [7].

    Mandeep Kaur, Rajeev Vashisht, and Nirvair Neeru proposed PCA for the classification of emotions using Singular Value Decomposition. They achieved excellent classification results for all principal emotions along with Neutral on the training dataset. The proposed algorithm was implemented on both real-time data and the JAFFE database. Each image is enhanced, localized, and its distinct features are extracted using SVD [5].

    Ajit P. Gosavi and S. R. Khot implemented a facial expression recognition system using the Principal Component Analysis method. The approach was studied using the JAFFE image database. The experimental results demonstrate that the accuracy on the JAFFE images using Principal Component Analysis is 91.63%, and the precision rate obtained is 72.82% [10].

  3. PROPOSED EXPRESSION RECOGNITION METHOD

    This section describes the proposed expression recognition method. It consists of four parts: data acquisition, pre-processing, feature extraction using PCA (Principal Component Analysis), and classification using a Euclidean distance classifier. Each of them is described below:

    DATA ACQUISITION (input from camera or collected database)
    ↓
    PREPROCESSING (normalization, noise reduction, RGB to gray)
    ↓
    FACIAL FEATURE EXTRACTION (PCA for dimensionality reduction)
    ↓
    CLASSIFIER (Euclidean distance)
    ↓
    Detected Expression

    Fig. 1 Expression Recognition System

    1. DATA ACQUISITION:

      As stated above, data acquisition is nothing but preparing a database that will be used for training and testing. One method is to take the output of a high-definition camera; alternatively, many databases are easily available on the internet, which eliminates the acquisition and detection steps. For recognizing expression one has to detect the face, and a number of techniques are available to detect the face in an image.

      Jagdish Lal Raheja and Umesh Kumar [7] used the technique proposed by Viola and Jones to detect the face; the main reason for using this technique is that its implementation is feature based and relatively fast compared to other available techniques. Mandeep Kaur, Rajeev Vashisht, and Nirvair Neeru [5] use the JAFFE facial expression database. Peng Zhao-yi, Zhu Yan-hui, and Zhou Yu adopted a method of multi-pose face detection based on an adaptive skin color and structure model to locate the human face [9].

      For this paper, some photos from the JAFFE database available on the internet and some photos taken by camera are used. JAFFE stands for Japanese Female Facial Expression; it contains 213 images of 7 facial expressions (including neutral) posed by 10 Japanese female models. The other sample pictures are taken by different cameras for various people under different conditions for the three expressions neutral, happy, and sad, as stated above. As this paper uses a database directly, no face detection is required here.

      Two sub-databases are prepared, one for training and one for testing. The training database contains a total of 44 photos, all in JPEG format, of five different people. Some of them were captured using a mobile camera and a few were taken directly from the internet; they are named Image001.jpg to Image044.jpg. For the training database a text file named LabelFile.txt is also prepared, which consists of a list of image names and the expression related to each image. The testing database also consists of JPEG photos, named Image001.jpg to Image020.jpg. A minimal sketch of reading the label file is given below.
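      The exact layout of LabelFile.txt is not specified beyond an image name and its expression per entry; assuming one whitespace-separated pair per line, a minimal MATLAB sketch for reading it is:

        fid = fopen('LabelFile.txt', 'r');
        lab = textscan(fid, '%s %s');   % column 1: image name, column 2: expression
        fclose(fid);
        imageNames  = lab{1};           % e.g. 'Image001.jpg'
        trainLabels = lab{2};           % e.g. 'Happy', 'Sad', 'Neutral'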

      For both the training and testing databases the following processes are carried out (a sketch of the complete loop follows this list):

      1. All images from the training database are read; afterwards the image to be tested is also read by choosing the test database.

      2. The preprocessing step is performed on the selected images from the training and testing databases.

      3. The PCA algorithm is applied to every image from the training database so as to calculate eigenvalues and eigenvectors.

      4. Euclidean distances are calculated.

      5. For the test image, PCA is applied to extract features, then the Euclidean distance is calculated, and the minimum value is chosen in order to find the training image that is most similar to the test image.
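      A minimal MATLAB sketch of steps 1-5 above, using the file names stated earlier and simplifying preprocessing to gray conversion plus resizing (the full preprocessing steps are described in the next subsection; folder paths are omitted):

        % Steps 1-3: read, preprocess, and vectorize all 44 training images.
        nTrain = 44;
        trainVecs = zeros(280*180, nTrain);              % one column per image
        for k = 1:nTrain
            img = imread(sprintf('Image%03d.jpg', k));
            if size(img, 3) == 3, img = rgb2gray(img); end
            img = imresize(img, [280, 180]);
            trainVecs(:, k) = double(img(:));
        end
        [coeff, score] = princomp(trainVecs', 'econ');   % PCA: rows = observations

        % Steps 4-5: project the test image and pick the nearest training image.
        tst = imread('Image001.jpg');                    % chosen from the test database
        if size(tst, 3) == 3, tst = rgb2gray(tst); end
        tst = double(imresize(tst, [280, 180]));
        testScore = (tst(:)' - mean(trainVecs', 1)) * coeff;
        dists = sqrt(sum((score - repmat(testScore, nTrain, 1)).^2, 2));
        [~, bestMatch] = min(dists);                     % most similar training image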

    2. PREPROCESSING

      Images in the database may be affected by camera type, background, and lighting effects; hence preprocessing is required. The steps followed in preprocessing are:

      1. Cropping & Normalization: The input image is cropped to a specific size so that only the face, consisting of the eyes, nose, and mouth, is detected. The cropped image is then resized to 280×180 pixels using the following instruction:

        I = imresize(aa,[280,180]);

        Here the image named aa is resized to 280×180 pixels from its original dimensions. To resize a larger number of images, one can use a for loop, as in the sketch below.
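        For example, a minimal batch-resize loop over a folder of images (the folder names are assumptions):

          files = dir(fullfile('training', '*.jpg'));   % all JPEGs in the folder
          for k = 1:numel(files)
              aa = imread(fullfile('training', files(k).name));
              I  = imresize(aa, [280, 180]);            % resize to 280x180 pixels
              imwrite(I, fullfile('resized', files(k).name));
          end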

      2. Light compensation: Lighting effects such as brightness and darkness are adjusted using the imadjust command as follows:

        J = imadjust(I, [low_in; high_in], [low_out; high_out]);

        This instruction maps the values in I to new values in J such that values between low_in and high_in map to values between low_out and high_out. Values below low_in and above high_in are clipped; that is, values below low_in map to low_out, and those above high_in map to high_out.
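        For instance, with illustrative limits (not taken from the paper), the following call stretches the mid-range intensities of a grayscale image to the full output range:

          J = imadjust(I, [0.2; 0.8], [0; 1]);   % values outside [0.2, 0.8] are clipped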

      3. As the database consists of color photographs, the RGB color values need to be converted to the YCbCr color space using the MATLAB command rgb2ycbcr, as sketched below.
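        A short sketch of this conversion, keeping the luminance (Y) channel as the gray-level image for the later stages (the variable names are assumptions):

          ycc  = rgb2ycbcr(rgbImg);   % rgbImg: any uint8 RGB photograph
          gray = ycc(:, :, 1);        % the Y channel carries the intensity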

    3. FACIAL FEATURE EXTRACTION

      The feature is defined as a function of one or more measurements, each of which specifies some quantifiable property of an object, and is computed such that it quantifies some significant characteristics of the object. We classify the various features currently employed as follows:

      1. General features: Application-independent features such as color, texture, and shape. According to the abstraction level, they can be further divided into:

         - Pixel-level features: features calculated at each pixel, e.g. color and location.

         - Local features: features calculated over the results of subdivision of the image based on image segmentation or edge detection.

         - Global features: features calculated over the entire image or just a regular sub-area of an image.

      2. Domain-specific features: Application-dependent features such as human faces, fingerprints, and conceptual features.

        Of the three activities, preprocessing, feature extraction, and classification, feature extraction is the most critical, because the particular features made available for discrimination directly influence the efficacy of the classification task. The end result of the extraction task is a set of features, commonly called a feature vector, which constitutes a representation of the image [6].

        Feature extraction converts pixel data into a higher-level representation of shape, motion, color, texture, and the spatial configuration of the face or its components. The extracted representation is used for subsequent expression categorization. Feature extraction generally reduces the dimensionality of the input space.

        Principal Component Analysis (PCA) is a way of identifying patterns in data and expressing the data in such a way as to highlight their similarities and differences. Since patterns can be hard to find in data of high dimension, where the luxury of graphical representation is not available, PCA is a powerful tool for analyzing data. The steps involved in performing PCA on a set of data are (see the sketch following the list):

        1. Get some data.

        2. Subtract the mean.

        3. Calculate the covariance matrix.

        4. Calculate the eigenvectors and eigenvalues of the covariance matrix.

        5. Choose components and form a feature vector.

        6. Derive the new data set.

        7. Get the old data back. [2]
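        A minimal MATLAB sketch of these seven steps, assuming the matrix data holds one observation per row and k is the desired number of components (both are assumptions):

          X  = double(data);                           % 1. get some data
          mu = mean(X, 1);
          Xc = X - repmat(mu, size(X, 1), 1);          % 2. subtract the mean
          Cm = cov(Xc);                                % 3. covariance matrix
          [V, D] = eig(Cm);                            % 4. eigenvectors and eigenvalues
          [~, order] = sort(diag(D), 'descend');
          W  = V(:, order(1:k));                       % 5. choose top-k components
          Y  = Xc * W;                                 % 6. derive the new data set
          Xrec = Y * W' + repmat(mu, size(X, 1), 1);   % 7. get the old data back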

      In MATLAB, these steps can also be carried out with the single built-in call:

      [C,S,L] = princomp(imgname);

      This command returns the principal component coefficients. Rows of imgname correspond to observations, columns to variables. C is a p-by-p matrix, each column containing the coefficients for one principal component; the columns are in order of decreasing component variance.

      S is the representation of imgname in the principal component space. Rows of S correspond to observations, columns to components.

      L is a vector containing the eigenvalues of the covariance matrix of imgname.
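      As an illustrative use of L, one can keep just enough components to retain, say, 95% of the variance (the threshold is an assumption; trainVecs is reused from the earlier sketch):

        [C, S, L] = princomp(trainVecs', 'econ');      % rows = observations
        nKeep = find(cumsum(L) / sum(L) >= 0.95, 1);   % 95% variance retained
        features = S(:, 1:nKeep);                      % reduced feature vectors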

      Principal Component Analysis is a standard technique used in statistical pattern recognition and signal processing for data reduction and feature extraction. PCA is a dimensionality reduction technique based on extracting the desired number of principal components of multi-dimensional data. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables).

    4. FACIAL EXPRESSION CLASSIFICATION

    As the features are extracted, a suitable classifier must be chosen. Facial expression classification is performed by using a classifier, and a wide range of classifiers is available to solve the facial expression recognition problem. The use of the following methods for designing such an estimator has been investigated:

    Quadratic Functions: Optimization methods have been used for defining the optimum coefficients of quadratic functions.

    Shortest Distance Classifier: Based on the training data, the distributions of the face parameters corresponding to certain expressions are defined.

    Supervised Neural Networks: Supervised neural networks have been trained with a set of face parameters and their corresponding expressions so that given an unknown set of parameters they produce at the output an estimate of the expression of the person in the corresponding face image.

    Unsupervised Neural Networks: The Kohonen Self-Organizing Map (SOM), which is a clustering algorithm, has been utilized to train networks that classify a set of input vectors of face parameters into a number of clusters corresponding to different expressions [4].

    The classifier based on the Euclidean distance has been used here. The distance is calculated between the image to be tested and each of the already available training images; the minimum distance is then found among the set of values, and the decision is made on that basis.

    The formula for the Euclidean distance between two feature vectors x = (x1, ..., xn) and y = (y1, ..., yn) is given by

    d(x, y) = sqrt( (x1 − y1)^2 + (x2 − y2)^2 + ... + (xn − yn)^2 )
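    In MATLAB this nearest-neighbour decision takes only a few lines; here trainFeat (one row of PCA features per training image), testFeat, and trainLabels (read from LabelFile.txt) are assumed to be available from the earlier sketches:

      % Euclidean distance from the test vector to every training vector.
      d = sqrt(sum((trainFeat - repmat(testFeat, size(trainFeat, 1), 1)).^2, 2));
      [dMin, idx] = min(d);                    % the smallest distance wins
      detectedExpression = trainLabels{idx};   % expression of the best match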

  4. RESULTS:

    Result Table for Expression Detection:

    | Sr. No. | Training Image No. | Testing Image No. | Training Expression | Testing Expression | Result | Best Match | ✓/× |
    |---------|--------------------|-------------------|---------------------|--------------------|--------|------------|-----|
    | 1       | 1                  | 1                 | Happy               | Neutral            | Neutral | Image 19  | ✓   |
    | 2       | 2                  | 2                 | Happy               | Neutral            | Neutral | Image 34  | ✓   |
    | 3       | 3                  | 3                 | Happy               | Neutral            | Neutral | Image 34  | ✓   |
    | 4       | 4                  | 4                 | Disgust             | Happy              | Happy   | Image 29  | ✓   |
    | 5       | 5                  | 5                 | Disgust             | Neutral            | Neutral | Image 34  | ✓   |
    | 6       | 6                  | 6                 | Disgust             | Sad                | Sad     | Image 32  | ✓   |
    | 7       | 7                  | 7                 | Anger               | Sad                | Sad     | Image 38  | ✓   |
    | 8       | 8                  | 8                 | Anger               | Happy              | Happy   | Image 37  | ✓   |
    | 9       | 9                  | 9                 | Anger               | Happy              | Happy   | Image 36  | ✓   |
    | 10      | 10                 | 10                | Sad                 | Happy              | Happy   | Image 22  | ✓   |
    | 11      | 11                 | 11                | Sad                 | Neutral            | Neutral | Image 17  | ✓   |
    | 12      | 12                 | 12                | Sad                 | Happy              | Happy   | Image 20  | ✓   |
    | 13      | 13                 | 13                | Sad                 | Neutral            | Neutral | Image 15  | ✓   |
    | 14      | 14                 | 14                | Neutral             | Sad/Angry          | Anger   | Image 09  | ✓   |
    | 15      | 15                 | 15                | Neutral             | Sad                | Anger   | Image 09  | ×   |
    | 16      | 16                 | 16                | Neutral             | Neutral            | Neutral | Image 18  | ✓   |
    | 17      | 17                 | 17                | Neutral             | Neutral            | Neutral | Image 34  | ✓   |
    | 18      | 18                 | 18                | Neutral             | Happy              | Neutral | Image 19  | ×   |
    | 19      | 19                 | 19                | Neutral             | Disgust            | Disgust | Image 25  | ✓   |
    | 20      | 20                 | 20                | Happy               | Disgust            | Disgust | Image 04  | ✓   |
    | 21      | 21                 | —                 | Happy               | —                  | —       | —         | —   |
    | 22      | 22                 | —                 | Happy               | —                  | —       | —         | —   |
    | 23      | 23                 | —                 | Disgust             | —                  | —       | —         | —   |
    | 24      | 24                 | —                 | Disgust             | —                  | —       | —         | —   |
    | 25      | 25                 | —                 | Disgust             | —                  | —       | —         | —   |
    | 26      | 26                 | —                 | Sad                 | —                  | —       | —         | —   |
    | 27      | 27                 | —                 | Sad                 | —                  | —       | —         | —   |
    | 28      | 28                 | —                 | Sad                 | —                  | —       | —         | —   |
    | 29      | 29                 | —                 | Happy               | —                  | —       | —         | —   |
    | 30      | 30                 | —                 | Happy               | —                  | —       | —         | —   |
    | 31      | 31                 | —                 | Sad                 | —                  | —       | —         | —   |
    | 32      | 32                 | —                 | Sad                 | —                  | —       | —         | —   |
    | 33      | 33                 | —                 | Neutral             | —                  | —       | —         | —   |
    | 34      | 34                 | —                 | Neutral             | —                  | —       | —         | —   |
    | 35      | 35                 | —                 | Neutral             | —                  | —       | —         | —   |
    | 36      | 36                 | —                 | Happy               | —                  | —       | —         | —   |
    | 37      | 37                 | —                 | Happy               | —                  | —       | —         | —   |
    | 38      | 38                 | —                 | Sad                 | —                  | —       | —         | —   |
    | 39      | 39                 | —                 | Sad                 | —                  | —       | —         | —   |
    | 40      | 40                 | —                 | Sad                 | —                  | —       | —         | —   |
    | 41      | 41                 | —                 | Happy               | —                  | —       | —         | —   |
    | 42      | 42                 | —                 | Happy               | —                  | —       | —         | —   |
    | 43      | 43                 | —                 | Neutral             | —                  | —       | —         | —   |
    | 44      | 44                 | —                 | Neutral             | —                  | —       | —         | —   |

    Accuracy = ((Total no. of images − No. of false decisions) / Total no. of images) × 100

    = ((20 − 2) / 20) × 100 = 90%
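    The same computation in MATLAB, with the counts taken from the table above:

      nTotal   = 20;                                   % images in the testing database
      nFalse   = 2;                                    % rows marked x in the table
      accuracy = ((nTotal - nFalse) / nTotal) * 100;   % = 90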

    Results for Expression Detection (image thumbnails from the original table omitted):

    | Tested Image No. | Actual Expression | Matched Training Image No. | Expression Detected | ✓/× |
    |------------------|-------------------|----------------------------|---------------------|-----|
    | 08               | Happy             | 37                         | Happy               | ✓   |
    | 06               | Disgust           | 32                         | Disgust             | ✓   |
    | 01               | Neutral           | 19                         | Neutral             | ✓   |
    | 18               | Happy             | 19                         | Neutral             | ×   |
    | 12               | Happy             | 20                         | Happy               | ✓   |

  5. CONCLUSION & FUTURE SCOPE:

    In this paper an expression detection technique using PCA and a Euclidean distance classifier is discussed. As stated earlier, recognition of expression is divided into three sub-problems: face detection, feature extraction, and classification. This technique is not best for all problems, because difficulties such as head rotation, the factor of aging, and variation in illumination due to lighting effects remain. Thus, if a neural network is used, better results may be obtained; results may also improve if a good-quality camera is used to capture the images for the database. The average accuracy of the system obtained is about 80-90%. We obtained a 90% average recognition rate for five principal emotions, namely happy, sad, disgust, and anger, along with neutral.

  6. REFERENCES

  1. Praseeda Lekshmi V. and M. Sasikumar, "A Neural Network Based Facial Expression Analysis using Gabor Wavelets," World Academy of Science, Engineering and Technology, 42, 2008.

  2. Akshat Garg and Vishakha Choudhary, "Facial Expression Recognition Using Principal Component Analysis," International Journal of Scientific Research Engineering & Technology (IJSRET), Volume 1, Issue 4, pp. 039-042, July 2012.

  3. Andreas Lanitis, Chrisina Draganova, and Chris Christodoulou, "Comparing Different Classifiers for Automatic Age Estimation," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 34, No. 1, February 2004.

  4. Peng Zhao-yi, Zhu Yan-hui, and Zhou Yu, "Real-time Facial Expression Recognition Based on Adaptive Canny Operator Edge Detection," 2010 Second International Conference on Multimedia and Information Technology.

  5. Mandeep Kaur, Rajeev Vashisht, and Nirvair Neeru, "Recognition of Facial Expressions with Principal Component Analysis and Singular Value Decomposition," International Journal of Computer Applications (0975-8887), Volume 9, No. 12, November 2010.

  6. Ryszard S. Choraś, "Image Feature Extraction Techniques and Their Applications for CBIR and Biometrics Systems," International Journal of Biology and Biomedical Engineering, Issue 1, Vol. 1, 2007.

  7. Jagdish Lal Raheja and Umesh Kumar, "Human Facial Expression Detection From Detected In Captured Image Using Back Propagation Neural Network," International Journal of Computer Science & Information Technology, Vol. 2, No. 1, February 2010.

  8. I. Essa and A. Pentland, "Coding, Analysis, Interpretation, and Recognition of Facial Expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 757-763, 1997.

  9. Peng Zhao-yi, Zhu Yan-hui, and Zhou Yu, "Real-time Facial Expression Recognition Based on Adaptive Canny Operator Edge Detection," 2010 Second International Conference on Multimedia and Information Technology.

  10. Ajit P. Gosavi and S. R. Khot, "Facial Expression Recognition Using Principal Component Analysis," International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Volume 3, Issue 4, September 2013.
