- Authors : Narayan T Deshpande, S Ravishankar
- Paper ID : IJERTV4IS060938
- Volume & Issue : Volume 04, Issue 06 (June 2015)
- DOI : http://dx.doi.org/10.17577/IJERTV4IS060938
- Published (First Online): 25-06-2015
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Review on Face Biometrics
Narayan T. Deshpande
Associate Professor, Dept. of E & C
BMS College of Engineering, Bangalore, India
Dr. S. Ravishankar
Professor, Dept. of E & C
R V College of Engineering, Bangalore, India
Abstract: Face recognition has emerged as a powerful tool for identifying persons in criminal investigations and anti-national activities. In a typical face recognition system, face images from a number of persons are enrolled into the system as a data bank, and the face image of a test person is matched against the data bank using a one-to-one or one-to-many scheme; one-to-one and one-to-many matching are called verification and identification, respectively. The face has several advantages as a biometric: it is the body part that is generally exposed, it contains a large number of identifying features, no contact is required, it is easy to capture even at a long distance, and it conveys not only identity but also internal emotion and age. This makes face recognition an important topic in human-computer interaction as well as person recognition. Many face recognition methods have been proposed, such as Principal Component Analysis, Independent Component Analysis, Elastic Graph Matching and Support Vector Machines. These methods achieve acceptable results in well-controlled situations, but their accuracy degrades in moderately controlled or uncontrolled environments. The limitations of the face biometric are expression, age, pose and lighting variations. Face recognition has a wide range of applications, including law enforcement, civil applications and surveillance systems. Face recognition has also been extended to smart home systems, where recognition of the human face and expression is used for better interactive communication between humans and machines.
Keywords: Viola-Jones, modified AdaBoost algorithm, Local Binary Pattern, eigenface, PCA
I. INTRODUCTION
In today's world, human identification plays a major role in many societal transactions. Biometric systems verify a person's identity based on anatomical and behavioural characteristics such as face, signature, fingerprint, ear, iris, retina, palm print, voice, DNA and gait. Biometric traits constitute a strong and permanent link between a person and his identity, and these qualities cannot simply be lost, forgotten, shared or forged. Since biometric systems require the user to be present at the time of verification, they can also deter users from making false claims.
Some of the applications of biometrics are: (i) biometric attendance systems, used in various sectors and organizations to monitor employee timekeeping; (ii) biometric safes and locks that provide security to homeowners; (iii) biometric access systems that provide strong security at entrances; (iv) biometric systems for securing access to PCs; (v) wireless biometrics for high-end security and safer transactions from wireless devices such as PDAs; (vi) biometrics for recognizing DNA patterns to identify criminals; and (vii) biometric airport security devices deployed at some airports to enhance security standards.
II. LITERATURE SURVEY
Zhiming Liu et al. [1] presented a hybrid colour and frequency feature method for face recognition. The Enhanced Fisher Model (EFM) extracts complementary frequency features in a hybrid colour space to improve face recognition performance. Carmen Martinez et al. [2] proposed a method to improve accuracy using unlabelled data when only a small set of labelled examples is available. The eigenface technique is applied to reduce the dimensionality of the image space, and ensemble methods are used to classify the unlabelled data. From the unlabelled data, the ensembles choose the three or five examples per class that most likely belong to that class; these examples are appended to the training set to improve accuracy, and the process is repeated until no examples remain to classify. The experiments were performed using k-nearest-neighbour, artificial neural network and locally weighted linear regression learners. Hui-Cheng Lian et al. [3] presented multi-view gender classification considering both shape and texture information to represent facial images.
The face area is divided into small regions, from which Local Binary Pattern (LBP) histograms are extracted and concatenated into a single vector efficiently representing the facial image; a support vector machine classifier is used for classification. Jing Wu et al. [4] proposed gender classification using Shape From Shading (SFS). Linear Discriminant Analysis based on the principal geodesic analysis parameters is used to discriminate female and male test faces, and the SFS technique is used to improve classification performance on grey-scale face images. Ryotatsu Iga et al. [5] developed an algorithm to estimate gender and age using a Support Vector Machine (SVM) based on features such as the geometric arrangement and luminosity of facial images. A graph matching method with the Gabor Wavelet Transform (GWT) is used to detect the position of the face. GWT features such as geometric arrangement, colour, hair and moustache are used for gender estimation, while texture spots, wrinkles and flabs are used for age estimation. Kazuya Ueki et al. [6] presented age-group classification using facial images under various lighting conditions. Hui-Cheng Lian et al. [7] proposed a Min-Max Modular Support Vector Machine (M3-SVM) to estimate age. Facial point detection, the Gabor Wavelet Transform and retina sampling are used to extract features from face images, and a task decomposition method is used in the M3-SVM to classify gender information within age samples. Ye Jihua et al. [8] proposed an advanced BPNN face recognition method based on the curvelet transform and 2DPCA to increase the face recognition rate: the curvelet transform is used to obtain higher-dimensional features of the face images, and 2DPCA is then used to reduce the dimensionality. Maria De Marsico et al.
[9] proposed a novel framework for real-world face recognition in uncontrolled settings, named FACE (Face Analysis for Commercial Entities). Its robustness comes from normalization strategies that address pose and illumination variations. Jian Yang et al. [10] presented a dimensionality reduction method that fits SRC well. SRC adopts a decision rule based on the class reconstruction residual, which is used as a criterion to steer the design of the feature extraction method. SRC-DP maximizes the ratio of between-class reconstruction residual to within-class reconstruction residual in the projected space and thus enables SRC to achieve better performance. SRC-DP provides a low-dimensional representation of human faces that makes SRC-based face recognition systems more efficient.
Shan Du et al. [11] presented a novel face image pre-processing approach that deals with the illumination problem and makes face recognition robust to illumination variations. A logarithm transform is first used to convert a face image into the logarithm domain; the discrete cosine transform coefficients of the result are then modified to remove illumination variations, and the log image reconstructed by the inverse discrete cosine transform of the modified coefficients is used for the final recognition. Meng Yang et al. [12] proposed a new face coding model, namely regularized robust coding (RRC), which can robustly regress a given signal with regularized regression coefficients. By assuming that the coding residual and the coding coefficients are respectively independent and identically distributed, RRC seeks the maximum a posteriori solution of the coding problem; an iteratively reweighted regularized robust coding algorithm is proposed to solve the RRC model efficiently. G. Prabhu Teja et al. [13] proposed a method to reduce the equal error rates of the eigenface and Fisherface methods. Principal Component Analysis, Linear Discriminant Analysis and their modified variants are implemented as subspace techniques, and by applying a range of image processing techniques it was demonstrated that performance is highly dependent on the type of preprocessing used.
Manzoor Ahmad Lone et al. [14] developed a face recognition system based on a combination of four individual techniques, namely Principal Component Analysis, Discrete Cosine Transform, template matching using correlation and Partitioned Iterative Function System. The scores of these four techniques are fused in a single face recognition system, and the combined system outperforms each of the individual techniques. Wonjun Hwang et al. [15] presented a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, hybrid Fourier-based facial feature extraction, and a score fusion scheme.
III. PROBLEM DEFINITION
At present, most face recognition systems have low matching accuracy in the presence of covariates of the face samples. Current research on face recognition algorithms focuses on improving their performance, essentially increasing the Total Success Rate (TSR) and reliability while reducing the False Rejection Ratio (FRR) and False Acceptance Ratio (FAR) on standard databases such as ORL, L-SPECK, etc. Existing techniques require a long time to extract features, which results in a time overhead for processing even a single face image. In India, the UIDAI is also collecting face biometrics (along with other biometric modalities) to issue a unique identification number to all citizens; the large population of India poses a greater challenge to accurate face recognition.
IV. SCOPE OF RESEARCH
India has planned an ambitious mega project (UIDAI) to issue a unique identification number to every resident. That number will be stored in a centralized database containing the biometric information of every individual; if implemented, this would be the largest deployment of biometrics in the world. The government will then use this information to issue identity cards, and officials will spend a year classifying India's population according to demographic indicators. Several general-purpose algorithms and techniques are available for face identification, but the existing algorithms may take considerable computational time because of complex calculations and hence may not be suitable for real-time applications. The scope of this research is to develop efficient, fast and simple face recognition algorithms suitable for real-time applications.
V. OBJECTIVES OF RESEARCH
The face is composed of the skull, the musculature and the associated soft tissue. These structures, along with gender and age, account for the variation among human faces. The main objectives of the proposed research are to:
- Improve matching accuracy in the presence of covariates such as pose, lighting, expression, occlusion, weight changes, hairstyle and aging.
- Develop efficient algorithms and verify their performance for the following levels of facial features:
  - Level 1 features, which consist of gross facial characteristics such as the general geometry of the face. These coarse, global features can be obtained even from low-resolution images and quickly used to distinguish, for example, an elongated face from a round one.
  - Level 2 features, which consist of more localized facial characteristics, such as the structure of facial components (e.g. the mouth) and the spatial relationships between them.
  - Level 3 features, which consist of the finest details of the face, such as scars, moles and freckles.
- Study and modify current face recognition algorithms to be more robust to emerging covariates, such as face recognition in e-commerce and social welfare programs, matching biological twins and look-alikes, and matching low-resolution face images captured for public safety and security by surveillance cameras installed at public places, airport gates, security checkpoints and government buildings to monitor a large area from a single location.
- Improve performance parameters such as the False Rejection Ratio (FRR), False Acceptance Ratio (FAR) and Total Success Rate (TSR) on standard databases such as ORL and L-Speck.
- Design and verify the proposed algorithms for real-time systems.
VI. METHODOLOGY
In this research work the entire process is divided into two stages, face detection and face recognition. Face detection is done using the Viola-Jones algorithm and recognition is done using PCA and eigenfaces.
The Viola-Jones face detector
This work concerns the implementation of the Viola-Jones face detection algorithm and the methodology and theory behind it. The main principle of the Viola-Jones algorithm is to scan a sub-window capable of detecting faces across a given input image. The standard approach would be to rescale the input image to different sizes and then run a fixed-size detector over these rescaled images; because of the computations required at each scale, this method is time consuming. In contrast to the standard approach, Viola-Jones rescale the detector instead of the input image and run the detector several times through the image, each time with a different size. At first one might expect both approaches to be equally time consuming, but Viola-Jones devised a scale-invariant detector that requires the same number of calculations irrespective of its size. This detector is constructed from an integral image and simple rectangular features reminiscent of Haar wavelets.
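As an illustration of this scanning strategy, the following minimal Python sketch scans an image with a detector window that grows by a constant factor at each pass. The window classifier, the 24×24 base size, the scale factor and the step size are illustrative assumptions, not the original implementation.

```python
# Sketch of the Viola-Jones scanning strategy: instead of rescaling the image,
# the (scale-invariant) detector itself is evaluated at increasing window sizes
# across the image. `classify_window` is a placeholder for the cascaded
# classifier discussed later.
import numpy as np

def scan_image(image, classify_window, base_size=24, scale_factor=1.25, step_frac=0.1):
    detections = []
    height, width = image.shape
    size = base_size
    while size <= min(height, width):
        step = max(1, int(size * step_frac))   # shift the window by a fraction of its size
        for top in range(0, height - size + 1, step):
            for left in range(0, width - size + 1, step):
                window = image[top:top + size, left:left + size]
                if classify_window(window):
                    detections.append((left, top, size))
        size = int(size * scale_factor)        # grow the detector, not the image
    return detections

if __name__ == "__main__":
    dummy = np.random.rand(120, 160)
    # A trivial stand-in classifier that accepts nothing, just to show the call.
    print(scan_image(dummy, classify_window=lambda w: False))
```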
The scale invariant detector
The first stage of the Viola-Jones face detection algorithm is to convert the input image into an integral image. This is done by making each pixel equal to the sum of all pixels above and to the left of it, as demonstrated in the following figure.
This allows the sum of all pixels inside any given rectangle to be calculated using only four values: the pixels in the integral image that coincide with the corners of the rectangle in the input image, as demonstrated in the following figure.
Since both rectangle B and rectangle C include rectangle A, the sum of A has to be added back to the calculation. It has now been shown how the sum of the pixels within a rectangle of arbitrary size can be calculated in constant time. The Viola-Jones face detector analyses a given sub-window using features consisting of two or more such rectangles; the different types of features are shown in the following figure.
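The following short Python sketch illustrates the integral image and the four-corner rectangle sum described above; the function names and the toy example are assumptions for illustration only.

```python
# Minimal sketch of the integral image and the four-corner rectangle sum.
import numpy as np

def integral_image(image):
    # Each entry holds the sum of all pixels above and to the left (inclusive).
    return image.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    # Sum of the pixels in the rectangle using only four integral-image values.
    bottom, right = top + height - 1, left + width - 1
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]   # added back because it was subtracted twice
    return total

if __name__ == "__main__":
    img = np.arange(25, dtype=float).reshape(5, 5)
    ii = integral_image(img)
    # Sanity check: the constant-time rectangle sum matches a direct summation.
    assert rect_sum(ii, 1, 1, 3, 2) == img[1:4, 1:3].sum()
    print(rect_sum(ii, 1, 1, 3, 2))
```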
Each feature yields a single value, which is derived by subtracting the sum of the white rectangles from the sum of the black rectangles. These operations could also be carried out directly on the raw pixels, but variation due to pose and individual characteristics would be expected to hamper such an approach. The goal is now to construct a mesh of features capable of detecting faces, as discussed in the next section.
The modified AdaBoost algorithm
In order to find these features Viola-Jones use a modified version of the AdaBoost algorithm developed by Freund and Schapire in 1996. AdaBoost is a machine-learning boosting algorithm capable of forming a strong classifier from a collection of weak classifiers, where a weak classifier classifies correctly in only slightly more than half the cases. A weak classifier is described mathematically as:
h(x, f, p, θ) = 1 if p·f(x) > p·θ, and 0 otherwise,
where x is a 24×24 pixel sub-window, p is the polarity, f is the applied feature and θ is the threshold that decides whether x should be classified as a positive (a face) or a negative (a non-face). Since only a small number of feature values are expected to be potential weak classifiers, the AdaBoost algorithm is modified to select only the best features. The Viola-Jones modified AdaBoost algorithm is presented in pseudo code in the following figure.
An important part of the modified AdaBoost algorithm is the determination of the best feature, polarity and threshold. There seems to be no smart solution to this problem and Viola-Jones suggest a simple brute force method. This means that the determination of each new weak classifier involves evaluating each feature on all the training examples in order to find the best performing feature. This is expected to be the most time consuming part of the training procedure. The best performing feature is chosen based on the weighted error it produces. This weighted error is a function of the weights belonging to the training examples.
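A minimal sketch of one such brute-force selection round is given below, assuming the feature values have already been computed for every training example. The array shapes, the exhaustive threshold search over the observed feature values and the helper names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of one round of feature selection: every candidate feature is evaluated
# on all (weighted) training examples, and the threshold and polarity with the
# lowest weighted error are kept. `feature_values` has shape
# (n_features, n_examples) and is assumed to be precomputed.
import numpy as np

def weak_classify(values, threshold, polarity):
    # h(x) = 1 if p*f(x) > p*threshold else 0
    return (polarity * values > polarity * threshold).astype(int)

def best_weak_classifier(feature_values, labels, weights):
    best = None
    for f_idx, values in enumerate(feature_values):
        for threshold in np.unique(values):
            for polarity in (+1, -1):
                predictions = weak_classify(values, threshold, polarity)
                error = np.sum(weights * (predictions != labels))
                if best is None or error < best[0]:
                    best = (error, f_idx, threshold, polarity)
    return best  # (weighted error, feature index, threshold, polarity)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(10, 40))           # 10 candidate features, 40 examples
    labels = (rng.random(40) > 0.5).astype(int)
    weights = np.full(40, 1.0 / 40)             # uniform initial AdaBoost weights
    print(best_weak_classifier(feats, labels, weights))
```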
The cascaded classifier
The basic principle of the Viola-Jones face detection algorithm is to run the detector several times through the same image, each time with a new size. Even if an image contains one or more faces, it is obvious that the vast majority of the evaluated sub-windows will still be non-faces. This observation leads to a different formulation of the problem: instead of finding faces, the algorithm should discard non-faces. The reasoning behind this statement is that it is faster to discard a non-face than to find a face. With this in mind, a detector consisting of only one (strong) classifier suddenly seems inefficient, since its evaluation time is constant regardless of the input. Hence the need for a cascaded classifier arises.
The cascaded classifier consists of stages, each containing one strong classifier. The job of each stage is to determine whether a given sub-window is definitely not a face or may be a face. When a sub-window is classified as a non-face by a given stage, it is immediately discarded; conversely, a sub-window classified as a maybe-face is passed on to the next stage in the cascade. It follows that the more stages a given sub-window passes, the higher the chance that it actually contains a face. The concept is illustrated with two stages in the following figure.
In a single stage classifier one would normally accept false negatives in order to reduce the false positive rate. However, for the first stages in the staged classifier, false positives are not considered to be a problem, since the succeeding stages are expected to sort them out. Therefore Viola-Jones prescribe accepting many false positives in the initial stages; consequently the number of false negatives in the final staged classifier is expected to be very small. Viola-Jones also refer to the cascaded classifier as an attentional cascade, a name implying that more attention (computing power) is directed towards the regions of the image most likely to contain faces. It follows that when training a given stage, say stage n, the negative examples should of course be the false positives generated by stage n-1.
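The following toy sketch shows the control flow of such a cascade: each stage either rejects the sub-window immediately or passes it on, so most non-faces are discarded after only a few cheap tests. The stage functions here are placeholders, not trained strong classifiers.

```python
# Sketch of the attentional cascade: each stage is a strong classifier that
# either rejects the sub-window outright or passes it on. Stages are plain
# callables returning True ("maybe a face") or False ("definitely not a face").
def cascade_classify(window, stages):
    for stage in stages:
        if not stage(window):
            return False     # rejected early: most non-faces exit in the first stages
    return True              # survived every stage: accepted as a face

if __name__ == "__main__":
    # Two toy stages: the first is deliberately permissive (many false positives
    # are acceptable early on), the second is stricter.
    stages = [
        lambda w: sum(w) > 0,             # cheap, coarse test
        lambda w: sum(w) / len(w) > 0.5,  # more expensive, more selective test
    ]
    print(cascade_classify([0.9, 0.8, 0.7], stages))  # True
    print(cascade_classify([0.1, 0.0, 0.2], stages))  # False
```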
Principal Component Analysis (PCA)
Principal Component Analysis (PCA) was invented in 1901 by Karl Pearson. PCA is a variable reduction procedure that is useful when the data contain redundancy. It reduces the original variables to a smaller number of variables, called principal components, which account for most of the variance in the observed variables. The major advantage of PCA here is its use in the eigenface approach, which helps to reduce the size of the database required for recognizing a test image. The images are stored in the database as feature vectors, obtained by projecting every training image onto the set of eigenfaces. PCA is thus applied in the eigenface approach to reduce the dimensionality of a large data set.
Eigen Face Approach
The eigenface approach is an adequate and efficient method for face recognition due to its simplicity, speed and learning capability. Eigenfaces are a set of eigenvectors used in the computer vision problem of human face recognition. They belong to an appearance-based approach to face recognition that seeks to capture the variation in facial images and use this information to encode and compare images of individual faces in a holistic manner.
The eigenfaces are the principal components of a distribution of faces or, equivalently, the eigenvectors of the covariance matrix of the set of face images, where an image with N by N pixels is considered a point in an N^2-dimensional space. Previous work on face recognition ignored the issue of the face stimulus, assuming that predefined measurements were relevant and sufficient. This suggests that coding and decoding of face images may provide information emphasizing the significance of particular features, which may or may not be related to facial features such as the eyes, nose, lips and hair. We want to extract the relevant information in a face image, encode it efficiently and compare one face encoding with a database of faces encoded in the same way. A simple approach to extracting the information content of a face image is to capture the variation in a collection of face images.
We aim to find the principal components of the distribution of faces, or the eigenvectors of the covariance matrix of the set of face images. Each image location contributes to each eigenvector, so an eigenvector can be displayed as a sort of face. Each face image can be represented exactly as a linear combination of the eigenfaces. The number of possible eigenfaces is equal to the number of face images in the training set. The faces can also be approximated using only the best eigenfaces, those with the largest eigenvalues, which therefore account for most of the variance within the set of face images. The primary reason for using fewer eigenfaces is computational efficiency.
Face Image Representation
A training set of M images, each of size N×N, is represented by vectors of size N^2.
Each face image is represented by a vector Γ_i, giving the training set Γ_1, Γ_2, Γ_3, …, Γ_M.
The feature vector of a face is initially an N×N matrix; this two-dimensional array is reshaped into a one-dimensional vector of length N^2.
Eigen Face Space
The eigenvectors of the covariance matrix AA^T are the vectors AX_i, denoted by U_i, where X_i are the eigenvectors of the much smaller matrix A^T A. The U_i resemble facial images and are therefore called eigenfaces. Eigenfaces whose eigenvalues are zero are discarded, reducing the eigenface space to an extent, and the remaining eigenfaces are ranked according to their usefulness in characterizing the variation among the images.
A face image can be projected into this face space by
Ω_k = U^T (Γ_k − Ψ); k = 1, …, M,
where Ψ is the mean face and (Γ_k − Ψ) is the mean-centred image. Hence the projection of each image is obtained: Ω_1 for image 1, Ω_2 for image 2, and so forth.
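A compact sketch of these steps is given below; it builds the eigenfaces from the eigenvectors of the small M×M matrix A^T A (as described above), keeps the components with the largest eigenvalues and projects a face into the resulting space. The image size, the number of retained eigenfaces and the random training data are illustrative assumptions.

```python
# Sketch of eigenface construction and projection, using the small matrix A^T A
# (size M x M) instead of A A^T (size N^2 x N^2).
import numpy as np

def build_eigenfaces(images, num_components):
    # images: array of shape (M, N, N); flatten each face into an N^2 vector.
    M = images.shape[0]
    A = images.reshape(M, -1).astype(float).T           # shape (N^2, M)
    mean_face = A.mean(axis=1, keepdims=True)            # the mean face Psi
    A = A - mean_face                                     # mean-centred column vectors
    # Eigenvectors X_i of the small matrix A^T A ...
    eigvals, X = np.linalg.eigh(A.T @ A)
    order = np.argsort(eigvals)[::-1][:num_components]   # keep the largest eigenvalues
    U = A @ X[:, order]                                   # ... mapped to eigenfaces U_i = A X_i
    U /= np.linalg.norm(U, axis=0)                        # normalise each eigenface
    return mean_face, U

def project(face, mean_face, U):
    # Omega = U^T (Gamma - Psi): coordinates of the face in eigenface space.
    gamma = face.reshape(-1, 1).astype(float)
    return U.T @ (gamma - mean_face)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training = rng.random((8, 16, 16))                    # 8 toy 16x16 "faces"
    mean_face, U = build_eigenfaces(training, num_components=5)
    print(project(training[0], mean_face, U).ravel())
```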
Recognition Step
The test image Γ is projected into the face space to obtain a vector Ω as
Ω = U^T (Γ − Ψ)
The distance of Ω to each face class is measured by the Euclidean distance, defined by
ε_k^2 = ||Ω − Ω_k||^2; k = 1, …, M,
where Ω_k is the vector describing the kth face class.
A face is classified as belonging to class k when the minimum ε_k is below some chosen threshold θ_c; otherwise the face is classified as unknown.
The threshold θ_c is half the largest distance between any two face images:
θ_c = (1/2) max_{j,k} ||Ω_j − Ω_k||; j, k = 1, …, M.
We also have to find the distance ε between the original test image and its reconstruction from the eigenfaces:
ε^2 = ||Γ − Γ_f||^2, where Γ_f = UΩ + Ψ is the image reconstructed from the eigenfaces.
- If ε ≥ θ_c, the input image is not a face image and is not recognized.
- If ε < θ_c and ε_k ≥ θ_c for all k, the input image is a face image but is recognized as an unknown face.
- If ε < θ_c and the minimum ε_k < θ_c, the input image is recognized as the face of the individual associated with the class vector Ω_k.
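These decision rules can be summarized in a short sketch such as the one below, where the stored class projections Ω_k, the eigenface matrix U and the threshold θ_c are assumed to come from the training stage; the toy data are for illustration only.

```python
# Sketch of the recognition rule: the projected test face is compared to the
# stored class vectors with Euclidean distances, and to its own reconstruction
# from the eigenfaces to decide whether the input is a face at all.
import numpy as np

def classify_face(omega, gamma_centered, U, class_vectors, theta_c):
    # omega: projection of the mean-centred test face, shape (K,).
    # Distance to each known face class: eps_k = ||Omega - Omega_k||.
    eps_k = np.linalg.norm(class_vectors - omega, axis=1)
    # Distance between the (mean-centred) input and its eigenface reconstruction.
    eps = np.linalg.norm(gamma_centered - U @ omega)
    if eps >= theta_c:
        return "not a face"
    if eps_k.min() >= theta_c:
        return "unknown face"
    return f"face of class {int(eps_k.argmin())}"

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    U = np.linalg.qr(rng.normal(size=(256, 5)))[0]        # toy orthonormal eigenfaces
    class_vectors = rng.normal(size=(4, 5))               # stored Omega_k for 4 classes
    gamma_centered = U @ class_vectors[2] + 0.01 * rng.normal(size=256)
    omega = U.T @ gamma_centered                          # project the test face
    print(classify_face(omega, gamma_centered, U, class_vectors, theta_c=1.0))
```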
ACKNOWLEDGMENT
This research is supported by BMS College of Engineering, Bangalore. The authors wish to thank BMS College of Engineering for encouraging this work, supplying the necessary tools and supporting the work through TEQIP grants.
REFERENCES
[1] Zhiming Liu and Chengjun Liu, A Hybrid Color and Frequency Features Method for Face Recognition, IEEE Transactions on Image Processing, vol. 17, no. 10, October 2008.
[2] Carmen Martinez and Olac Fuentes, Face Recognition using Unlabeled Data, Journal of Computer Science Research, vol. 7, no. 2, pp. 123-129, 2003.
[3] Hui-Cheng Lian and Bao-Liang Lu, Multi-View Gender Classification using Local Binary Patterns and Support Vector Machines, International Conference on Neural Networks, pp. 202-209, 2006.
[4] Jing Wu, W. A. P. Smith and E. R. Hancock, Gender Classification using Shape from Shading, International Conference on Image Analysis and Recognition, pp. 925-934, 2008.
[5] Ryotatsu Iga, Kyoko Izumi, Hisanori Hayashi, Gentaro Fukano and Tetsuya Ohtani, Gender and Age Estimation from Face Images, SICE Annual Conference, pp. 756-761, August 2003.
[6] Kazuya Ueki, Teruhide Hayashida and Tetsunori Kobayashi, Subspace-based Age-group Classification using Facial Images under Various Lighting Conditions, Seventh International Conference on Automatic Face and Gesture Recognition, vol. 1, pp. 43-48, April 2006.
[7] Hui-Cheng Lian and Bao-Liang Lu, Age Estimation using a Min-Max Modular Support Vector Machine, International Conference on Neural Information Processing, pp. 83-88, November 2005.
[8] Ye Jihua, Hu Dan, Xia Guomiao and Chen Yahui, An Advanced BPNN Face Recognition Based on Curvelet Transform and 2DPCA, International Conference on Computer Science & Education, April 26-28, 2013.
[9] Maria De Marsico, Michele Nappi, Daniel Riccio and Harry Wechsler, Robust Face Recognition for Uncontrolled Pose and Illumination Changes, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 1, January 2013.
[10] Jian Yang, Delin Chu, Lei Zhang, Yong Xu and Jingyu Yang, Sparse Representation Classifier Steered Discriminative Projection with Applications to Face Recognition, IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 7, July 2013.
[11] Shan Du, Mohamed Shehata and Wael Badawy, A Novel Algorithm for Illumination Invariant DCT-Based Face Recognition, Twenty-fifth IEEE Canadian Conference on Electrical and Computer Engineering, pp. 1-4, May 2012.
[12] Meng Yang, Lei Zhang, Jian Yang and David Zhang, Regularized Robust Coding for Face Recognition, IEEE Transactions on Image Processing, vol. 22, no. 5, May 2013.
[13] G. Prabhu Teja and S. Ravi, Face Recognition Using Subspaces Techniques, IEEE International Conference on Recent Trends in Information Technology, pp. 103-107, 2012.
[14] Manzoor Ahmad Lone, S. M. Zakariya and Rashid Ali, Automatic Face Recognition System by Combining Four Individual Algorithms, International Conference on Computational Intelligence and Communication Systems, pp. 222-226, 2011.
[15] Wonjun Hwang, Haitao Wang, Hyunwoo Kim, Seok-Cheol Kee and Junmo Kim, Face Recognition System Using Multiple Face Model of Hybrid Fourier Feature Under Uncontrolled Illumination Variation, IEEE Transactions on Image Processing, vol. 20, no. 4, April 2011.