- Open Access
- Authors : Ale Daniel T., Ogunti Erastus, Ogundipe Adebayo, Adebayo Adeola
- Paper ID : IJERTV4IS100394
- Volume & Issue : Volume 04, Issue 10 (October 2015)
- Published (First Online): 29-10-2015
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Eigenface-based Real-Time Face Recognition System
Ale Daniel T.
Electrical, Electronics & Computer Engineering Department
Afe Babalola University, Ado-Ekiti, Nigeria
Ogunti Erastus
Electrical & Electronics Engineering Department
Federal University of Technology, Akure
Ogundipe Adebayo
Computer Science Department
Afe Babalola University, Ado-Ekiti, Nigeria
Adebayo Adeola
Electrical, Electronics & Computer Engineering Department
Afe Babalola University, Ado-Ekiti, Nigeria
Abstract: There is a need for automatic monitoring of environments for surveillance, attendance management, and access control. Such a monitoring system becomes far more useful when coupled with the ability to identify the persons present in the environment. Most surveillance and monitoring systems are based on video cameras or CCTV, which produce data in the form of videos and images, and the persons in these videos or images can be identified by detecting and recognizing the faces they contain. This paper presents the design of a face detection and recognition system implemented in MATLAB. The Viola-Jones algorithm was employed for face detection; since a straightforward Viola-Jones implementation proved slow, this work also attempts to enhance the detection speed. Principal Component Analysis (PCA) was employed for dimensionality reduction, and correlation analysis for matching. The performance of the system was measured in terms of detection speed and recognition efficiency, using 88 test images. Case studies examined the performance of the system under varying skin color and under occlusion.
Keywords: Adaboost; PCA; Recognition; Camera; Face detection
INTRODUCTION
The face is the foremost distinguishing feature of the human body. It not only conveys an individual's identity; it can also be used to protect against fraudulent transactions and security breaches and to safeguard personal data. Face recognition is one of several techniques used in biometrics, the study of measurable biological characteristics. Biometrics comprises techniques that analyze and measure unique physical characteristics, such as the face, fingerprints, iris, retina, and voice, for authentication purposes. It identifies or verifies a person based on unique physical characteristics by matching real-time patterns against those saved in a database.
Real-time face recognition involves locating human faces in a video stream and identifying them by matching against a database of known faces. In the past few years, face recognition has received significant attention and is regarded as one of the most successful applications in the field of image analysis [1]. The human face is a complex, multidimensional, meaningful visual stimulus: it conveys individual identity and can be used to protect against fraudulent transactions, security breaches, and misuse of personal data. It therefore has a wide variety of applications.
Face detection is the basis for face tracking and face recognition: detection must be performed before recognition, and its results directly affect the process and accuracy of recognition [2]. Face detection is the process of analyzing an input image to determine the number, location, size, position, and orientation of any faces present. Accurately detecting human faces in an arbitrary scene is the most important step in face recognition: the image is searched for salient features, i.e. detection extracts the information relevant to face and facial-expression analysis. Once faces have been located in a scene, the subsequent recognition step is much less complicated.
For the past twenty years, holistic methods have attracted the most attention among face-recognition approaches. This category includes the eigenface method (based on PCA), the fisherface method (based on LDA), and other transformation bases such as independent component analysis (ICA).
The idea of eigenfaces for face recognition was introduced by Turk and Pentland in the early 1990s. The approach transforms face images into a small set of characteristic feature images, called eigenfaces, which are the principal components (found by performing Principal Component Analysis, PCA) of the initial set of face images. Recognition is performed by projecting a new image into the subspace spanned by the eigenfaces (the "face space") and then classifying the face by comparing its position in face space with the positions of known individuals. This is an information-theoretic approach to coding and decoding face images that may give insight into their information content, emphasizing significant local and global features. Such features may or may not be directly related to the intuitive notion of facial features such as the eyes, nose, lips, and hair.
In mathematical terms, the principal components of the distribution of faces are found, i.e. the eigenvectors of the covariance matrix of the set of face images. These eigenvectors can be thought of as a set of features which together characterize the variation between face images. Each image location contributes more or less to each eigenvector, so an eigenvector can be displayed as a sort of ghostly face; these are called eigenfaces.
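In symbols, the computation just described can be written as follows (our rendering in Turk and Pentland's standard notation, not equations reproduced from the original layout):

```latex
% Mean-subtracted training faces: \Phi_i = \Gamma_i - \Psi
C = \frac{1}{M} \sum_{n=1}^{M} \Phi_n \Phi_n^{T} = A A^{T},
\qquad A = [\Phi_1 \;\; \Phi_2 \;\; \cdots \;\; \Phi_M]
% The eigenfaces u_k are the eigenvectors of C:
C\, u_k = \lambda_k u_k
```

These quantities are derived step by step in the System Model section below.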
Figure: An eigenface (source: Zahid Riaz et al., 2003).
The method was tested using a database of 2,500 images of 16 people under all combinations of 3 head orientations, 3 head sizes or scales, and 3 lighting conditions, at various resolutions. Recognition rates of 96%, 85%, and 64% were reported for lighting, orientation, and scale variation respectively. Although the method appears fairly robust to lighting variations, its performance degrades with scale changes.
Eigenfaces have the advantage of dimensionality reduction while preserving the greatest energy and the largest variation after projection, but they do not exploit the identity labels included in the database. Moreover, several studies have shown that illumination differences cause serious appearance variations, which means the first several eigenfaces may capture variation in the illumination of faces rather than in face structure, while some detailed structural differences may have small eigenvalues, so their corresponding eigenfaces are probably dropped when only the largest eigenvectors are preserved.
Instead of calculating the projection bases from the whole training set without labels (i.e. without human identities, which corresponds to unsupervised learning), Belhumeur et al. (1997) proposed using linear discriminant analysis (LDA) to find the bases. The objective of LDA is dimensionality reduction for discrimination: it finds projection bases that minimize the intra-class variation while preserving the inter-class variation. They did not explicitly build an intra-class variation model, but linearly projected the image into a subspace in a manner that discounts those regions of the face with large intra-class deviation.
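The discrimination objective just described is commonly written as the Fisher criterion (our summary in standard notation, not an equation from this paper):

```latex
W^{*} = \arg\max_{W}
\frac{\lvert W^{T} S_{B} W \rvert}{\lvert W^{T} S_{W} W \rvert}
```

where $S_B$ and $S_W$ are the between-class and within-class scatter matrices; the optimal projection directions are the generalized eigenvectors of $S_B w = \lambda S_W w$.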
The PCA basis preserves the largest variation after projection, but the projection result is not necessarily suitable for recognition. The LDA, on the other hand, finds the best projection basis for discrimination; although it does not preserve as much energy as PCA, its projection clearly separates the two classes with just a simple threshold.
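This contrast can be demonstrated on synthetic data. The sketch below is an illustration of the general point, not code from this work; the class shapes and sizes are arbitrary choices. It builds two elongated Gaussian classes whose direction of largest variance does not separate them, so a one-dimensional PCA projection mixes the classes while the LDA projection separates them with a simple threshold:

```python
# Contrast PCA and LDA projections on two synthetic classes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two elongated Gaussian classes: most variance lies along a direction
# (the x-axis) that does NOT separate them.
cov = [[10.0, 0.0], [0.0, 0.1]]
class_a = rng.multivariate_normal([0.0, -1.0], cov, size=200)
class_b = rng.multivariate_normal([0.0, 1.0], cov, size=200)
X = np.vstack([class_a, class_b])
y = np.array([0] * 200 + [1] * 200)

# PCA keeps the direction of largest variance, which mixes the classes.
z_pca = PCA(n_components=1).fit_transform(X).ravel()

# LDA picks the direction that separates the class means relative to
# the within-class scatter (the y-axis here).
z_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y).ravel()

def threshold_accuracy(z, y):
    """Accuracy of a single mean-valued threshold on a 1-D projection."""
    pred = (z > z.mean()).astype(int)
    return max(np.mean(pred == y), np.mean(pred != y))  # sign-agnostic

print("PCA 1-D threshold accuracy:", threshold_accuracy(z_pca, y))  # ~0.5
print("LDA 1-D threshold accuracy:", threshold_accuracy(z_lda, y))  # ~1.0
```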
Independent Component Analysis
The PCA exploits the second-order statistics of the training set (the covariance matrix) and yields projection bases that make the projected samples uncorrelated with each other. Second-order properties depend only on pairwise relationships between pixels, whereas some information important for face recognition may be contained in higher-order relationships among pixels. Independent component analysis (ICA) is a generalization of PCA that is sensitive to these higher-order statistics (Theodoridis and Koutroumbas, 2009).
In the work of Bartlett et al., the ICA bases were derived from the principle of optimal information transfer through sigmoidal neurons. They proposed two architectures for dimension-reducing decomposition: one treats the images as random variables and the pixels as outcomes, while the other treats the pixels as random variables and the images as outcomes. Architecture I finds a set of statistically independent basis images, so that a human face can be decomposed into a weight vector over these bases; each basis image captures features of human faces such as eyes, eyebrows, and mouths.
Architecture II finds basis images that have appearances similar to those produced by PCA, and uses ICA to find a representation in which the coefficients used to code images are statistically independent. Generally speaking, the first architecture finds spatially local basis images for faces, while the second produces a factorial face code. In their experiments, both representations were superior to a PCA-based representation for recognizing faces across days and changes in expression, and a classifier combining the two ICA representations gave the best performance (Bartlett et al., 2002).
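A rough sketch of the two architectures follows, using scikit-learn's FastICA as a stand-in for the infomax network Bartlett et al. actually used; the data shapes and component count are arbitrary placeholders:

```python
# Sketch of Bartlett et al.'s two ICA architectures (our reconstruction).
import numpy as np
from sklearn.decomposition import FastICA

M, D, K = 88, 2500, 10             # images, pixels per image, components
X = np.random.rand(M, D)           # placeholder for face rows
X -= X.mean(axis=0)                # mean-centre the data

# Architecture I: images are the random variables, pixels the outcomes,
# so ICA sees one pixel per row. The recovered sources are statistically
# independent, spatially local basis images.
ica1 = FastICA(n_components=K, random_state=0)
basis_images = ica1.fit_transform(X.T)   # (D x K) independent basis images
coeffs_per_image = ica1.mixing_          # (M x K) weights for each face

# Architecture II: pixels are the random variables, images the outcomes,
# so ICA sees one image per row. Now the per-image coefficients are
# independent (a factorial code), and the mixing columns are global,
# PCA-like basis images.
ica2 = FastICA(n_components=K, random_state=0)
factorial_code = ica2.fit_transform(X)   # (M x K) independent codes
global_bases = ica2.mixing_              # (D x K) face-like basis images
```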
SYSTEM MODEL
Principal Component Analysis (PCA)
(Flowchart: cropped face → perform PCA → save features.)
PCA is the algorithm used for face recognition in this work. It involves a sequence of steps that apply statistical operations such as the mean and covariance to extract and save important facial features, which are then compared with those of an input image. Mathematically, it computes the principal components of a distribution of faces, i.e. the eigenvectors of the covariance matrix of a set of face images, treating each image as a vector in a high-dimensional space. The eigenvectors are viewed as a set of features that together characterize the variation between the face images. PCA involves the following steps:
Step 1: Create training set
The training set is a group of face images used to train the recognizer. It consists of M images, each of size N × N. It is important for these images to be centered and of the same size.
Step 2: Convert images to vector space
Each image is converted to vector form: an $N \times N$ face image $I$ becomes an $N^2 \times 1$ column vector $\Gamma_i$, and the set of these vectors forms the face vector space. Each face can later be expressed as a weighted sum of eigenfaces $u_i$,

$$\Phi \approx \sum_{i=1}^{K} w_i u_i$$

where the weights $w_i$ are the proportions of the eigenfaces that make up the original face; they can be collected in a column vector

$$\Omega = [w_1, w_2, \ldots, w_K]^T$$
Step 3: Normalize the face vectors
This step removes the features that the faces have in common, so that each face is left with only its unique features. To do this, we need to:
- compute the average/mean face vector:

$$\Psi = \frac{1}{M} \sum_{i=1}^{M} \Gamma_i$$
The average face contains common features of the training set.
- subtract the mean face from each face vector to get the normalized face vector:

$$\Phi_i = \Gamma_i - \Psi$$
Step 4: Calculate the eigenvectors
1. Calculate the covariance matrix:

$$C = \frac{1}{M} \sum_{n=1}^{M} \Phi_n \Phi_n^T = A A^T, \qquad A = [\Phi_1, \Phi_2, \ldots, \Phi_M]$$

where $A$ is of size $N^2 \times M$, so that $C$ is of size $N^2 \times N^2$.
This matrix $C = AA^T$ is of size $N^2 \times N^2$, which is very large; computing and storing it is not practical, as it requires a huge amount of computation and the system may slow down or run out of memory. To solve this problem, we calculate a covariance matrix of reduced dimensionality instead:

$$L = A^T A$$

This matrix is of size $M \times M$ and yields $M$ eigenvalues and eigenvectors $v_i$, each of size $M \times 1$, from which the eigenvectors of $C = AA^T$ can be recovered; so instead of a covariance matrix of high dimensionality we can use one of low dimensionality and still obtain the same results. We need only the $K$ significant eigenvectors (those corresponding to the $K$ largest eigenvalues), chosen such that they can represent the whole training set. The selected $K$ eigenvectors must be mapped back to the original dimensionality of the face vector space:

$$u_i = A v_i$$

where $u_i$ is the eigenvector in the original dimension and $v_i$ is the eigenvector in the lower-dimensional space.

Step 5: Represent faces in the eigenface basis
Each face in the training set can be represented as a linear combination of the best $K$ eigenvectors, i.e. as the weighted sum of the $K$ eigenfaces plus the mean face.

Recognizing an unknown face
Given an unknown input face, the input image is first converted to a face vector, which is then normalized; note that the same average face used to train the recognizer must be used to obtain the normalized face. The normalized face vector is then projected into the eigenspace, i.e. the unknown face is represented as a linear combination of the $K$ eigenfaces, which yields the weight vector of the input image. The distances between this weight vector and the weight vectors of all training images are calculated and compared, and a chosen threshold determines whether the input image is a face: if the smallest calculated distance is below the threshold, the input is declared a match; otherwise it is not. A code sketch of this training and matching pipeline is given below.

RESULT
The experiments were executed on a Lenovo computer with a Core i5 processor and 6 GB of RAM; the algorithms were implemented in MATLAB, using a camera with a resolution of 1280 x 720. The face recognition system is operated through a MATLAB GUI (graphical user interface).

Database Structure
A database consisting of 88 images, i.e. 8 different pictures of each of 11 different individuals, was initially created and used for testing the system. The database was created during the process of face detection.
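Putting the steps above together, the following is a minimal illustrative sketch of the training and recognition pipeline in Python/NumPy; the paper's system was implemented in MATLAB, so the names, image sizes, and threshold value here are our assumptions, not the authors' code:

```python
# Minimal eigenface sketch of the training and recognition steps above.
import numpy as np

def train_eigenfaces(faces, K):
    """faces: (M, N*N) array, one flattened, equalized face per row."""
    mean_face = faces.mean(axis=0)                 # Psi
    A = (faces - mean_face).T                      # (N^2, M), columns Phi_i
    # Covariance trick from Step 4: eigen-decompose the small MxM matrix.
    L = A.T @ A
    eigvals, V = np.linalg.eigh(L)                 # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:K]          # K largest eigenvalues
    U = A @ V[:, order]                            # map back: u_i = A v_i
    U /= np.linalg.norm(U, axis=0)                 # unit-norm eigenfaces
    weights = U.T @ A                              # (K, M) training weights
    return mean_face, U, weights

def recognize(face, mean_face, U, weights, threshold):
    """Project a flattened probe face and find the nearest training face."""
    omega = U.T @ (face - mean_face)               # K weights of the probe
    dists = np.linalg.norm(weights - omega[:, None], axis=0)
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None   # None: no match

# Usage with random stand-in data (88 images of 50x50 pixels):
faces = np.random.rand(88, 2500)
mean_face, U, W = train_eigenfaces(faces, K=20)
match = recognize(faces[0], mean_face, U, W, threshold=10.0)
print("matched training index:", match)
```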
Figure 1: Database folder structure
Figure 2: Face detection and recognition GUI
The GUI was developed to provide easy navigation when operating the face recognition system. It consists of two panels. The first is the connection panel, which is responsible for establishing a connection between the camera, the program, and the Arduino controlling the servomotors, as sketched below; the baud rate of 9600 is displayed on this panel. On this panel it is possible to:
- select the camera to be used, provided it is connected to the system;
- set the configuration to be employed.
The second panel is the control panel, which is used to initiate the detection and face recognition process.
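For concreteness, the connection step might look like the following in Python (a hypothetical equivalent of the MATLAB GUI's connection panel; the serial port name and camera index are assumptions):

```python
# Hypothetical sketch of the connection panel's job: open the Arduino
# serial link at 9600 baud and attach the selected camera.
import cv2
import serial  # pyserial

arduino = serial.Serial("COM3", baudrate=9600, timeout=1)  # servo controller
camera = cv2.VideoCapture(0)  # index of the selected camera
if not camera.isOpened():
    raise RuntimeError("selected camera is not connected")
```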
Face detection is initiated by clicking the snap button: the program initializes the camera and, while the camera is running, checks the captured image for a face, repeating the attempt until a face is detected. A figure is then displayed showing the detected face and other information, as shown in Figure 3.
Figure 3: GUI showing the detected face enclosed in a bounding box (upper right), the cropped detected face (upper left), and the number of attempts made before the face was detected.
Figure 3 shows the face detected and cropped automatically by the Viola-Jones algorithm employed in this research, together with the number of attempts the algorithm made before detecting the face.
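A comparable detection step in Python with OpenCV's Haar-cascade (Viola-Jones) detector is sketched below; the paper used MATLAB's detector, so this is a stand-in, and `max_attempts` is our own safeguard:

```python
# Snap-style face detection with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def snap_face(camera, max_attempts=100):
    """Grab frames until a face is found; return the cropped face and
    the number of attempts, mirroring the GUI's snap behaviour."""
    for attempt in range(1, max_attempts + 1):
        ok, frame = camera.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                         minNeighbors=5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            return gray[y:y + h, x:x + w], attempt
    return None, max_attempts
```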
The database of cropped faces was used for face recognition. The images in the database were first converted to grayscale intensity images, which makes histogram normalization easier to perform. Because the images in the database are of varying illumination, which affects the result of face recognition, histogram normalization is applied to correct for illumination.
Figure 4: Preprocessing of captured image
The plot shows the histograms of pixel intensity for the original image I and the normalized image IG. The histogram of the original image shows a clustered distribution and an illumination gradient across the image, i.e. the image is strongly affected by illumination. The second image, IG, was normalized by histogram equalization, which distributes the pixels evenly over the whole intensity range, i.e. the image is transformed so that its histogram is nearly flat and evenly distributed.
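The preprocessing just described can be sketched as follows (an OpenCV stand-in for the MATLAB preprocessing; the target size is an assumption):

```python
# Preprocessing: grayscale conversion plus histogram equalization.
import cv2

def preprocess(face_bgr, size=(50, 50)):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)  # intensity image I
    gray = cv2.resize(gray, size)                      # common image size
    return cv2.equalizeHist(gray)                      # normalized image IG
```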
After histogram equalization, the mean of the images in a folder is calculated first, and the eigenvalues and eigenvectors are then obtained.
The implementation of the developed algorithm shows that a considerable reduction in the minimum number of eigenfaces required for recognition was achieved; Figure 5 shows this result.
Figure 5: Minimum Eigenfaces required for recognition.
CONCLUSION
The choice of face detection and recognition methods in any study should be based on the application of the system; no single method is best for all applications. Haar-like features are facial features named for their intuitive similarity to Haar wavelets; they were used in the first real-time face detector. A Haar-like feature considers adjacent rectangular regions at a specific location in a detection window, sums the pixel intensities in each region, and calculates the difference between these sums; the computation is carried out using integral images. The difference is then used to categorize subsections of an image, i.e. to determine whether a region is a face or non-face region. The key advantage of a Haar-like feature over most other features is its calculation speed. Principal Component Analysis uses eigenvectors and eigenvalues (eigenfaces). Eigenfaces are the principal components that divide the face into feature vectors; the feature-vector information is obtained from the covariance matrix. The eigenvectors are used to quantify the variation between multiple faces, and the faces are characterized by the linear combination of the eigenvectors with the largest eigenvalues. Each face can thus be approximated using the eigenvectors having the largest eigenvalues.
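As an illustration of the integral-image computation summarized above (our sketch, not the authors' implementation): once the integral image is built, any rectangle sum costs four array lookups, which is what makes Haar-like features fast to evaluate.

```python
# Integral image and a two-rectangle Haar-like feature.
import numpy as np

def integral_image(img):
    """Cumulative sums so any rectangle sum costs four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in img[r0:r1, c0:c1] from the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

# Two-rectangle feature: difference between the sums of two horizontally
# adjacent regions inside a 24x24 detection window.
img = np.random.rand(24, 24)
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 12, 6) - rect_sum(ii, 0, 6, 12, 12)
```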
This project was designed to detect faces, create a database of them, and recognize faces. The system can be used in biometrics, as well as in video surveillance, human-computer interfaces, and image database management. The detection rate of the system was very fast and accurate, and the recognition efficiency is considerably high, although it is affected by the lighting conditions of the room.
REFERENCES
[1] Omaima N. A. AL-Allaf, "Review of Face Detection Systems Based on Artificial Neural Networks Algorithms," The International Journal of Multimedia & Its Applications (IJMA), Vol. 6, No. 1, February 2014.
[2] S. V. Viraktamath, Mukund Katti, Aditya Khatawkar and Pavan Kulkarni, "Face Detection and Tracking Using OpenCV," The SIJ Transactions on Computer Networks & Communication Engineering (CNCE), Vol. 1, No. 3, July-August 2013.
[3] E. Alpaydin, Introduction to Machine Learning, 2nd ed., The MIT Press, 2010.
[4] Andrew W. Senior and Ruud M. Bolle, "Face Recognition and Its Applications," IBM T. J. Watson Research Center.
[5] M. F. Augusteijn and T. L. Skujca, "Identification of Human Faces through Texture-Based Feature Recognition and Neural Network Technology," Proc. IEEE Conf. Neural Networks, pp. 392-398, 1993.
[6] M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, "Face Recognition by Independent Component Analysis," IEEE Trans. Neural Networks, vol. 13, no. 6, pp. 1450-1464, 2002.
[7] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997.
[8] C. M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.
[9] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
[10] C. Cortes and V. Vapnik, "Support Vector Networks," Machine Learning, 20: 1-25, 1995.
[11] M. A. Fischler and R. A. Elschlager, "The Representation and Matching of Pictorial Structures," IEEE Transactions on Computers, C-22(1), 1973.
[12] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, 2nd ed., Springer, 2005.
[13] Hazem M. El-Bakry, "Face Detection Using Neural Networks and Image Decomposition," Lecture Notes in Computer Science, Vol. 22, pp. 205-215, 2002.
[14] Henry A. Rowley, Shumeet Baluja, and Takeo Kanade, "Rotation Invariant Neural Network-Based Face Detection," CMU-CS-97-201, December 1997.
[15] Kai Chen and Le Jun Zhao, "Robust Realtime Face Recognition and Tracking System," JCS&T, Vol. 9, No. 2, October 2009.