- Open Access
- Authors: Vaibhav Mehta, Jyoti Tiwari, Joel Shaji
- Paper ID: IJERTV8IS070178
- Volume & Issue: Volume 08, Issue 07 (July 2019)
- Published (First Online): 20-07-2019
- ISSN (Online): 2278-0181
- Publisher Name: IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Face Recognition - Advanced Techniques
Vaibhav Mehta
Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu
Joel Shaji
Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu
Jyoti Tiwari
Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu
Abstract: With the advancement of technology, there is a growing demand for technological development in the security sector. One of the most promising ways of ensuring security is identification based on the human body. Face biometrics is one of the most active topics in this area and has received increasing attention. Many large organisations and multinational technology companies, including Tesla, are paying special attention to this topic, as the technology will be beneficial in fields ranging from security to transportation to crime investigation. Face biometrics, used for person authentication, is a simple and non-intrusive method that recognizes the face as a complex multidimensional visual pattern and develops a computational model for it. Despite many technological advances in this sector, there is still considerable scope for improvement. In this paper, we first present an overview of face recognition, discuss its methodology and functioning, and describe the different techniques involved. We then review the most recent face recognition techniques, listing their advantages and disadvantages. Some of the techniques described here also improve the efficiency of face recognition under varying illumination and expression conditions.
Keywords: Face recognition, Eigenfaces, neural networks, elastic bunch graph matching, geometrical feature matching, template matching, biometrics, 3D morphable model, CNN, ANN.
INTRODUCTION:
The computer, since its inception, has continually proved itself an important asset for humankind. This led to the coinage of the term HCI, i.e. Human-Computer Interaction. In HCI tasks, face identification and recognition systems have received massive attention ever since security-related concerns reached their peak. Face recognition has become one of the most advanced biometric authentication techniques over the past few years. It has gained a lot of popularity due to its vast field of application. Face recognition is an interesting and successful application of pattern recognition and image analysis. A face recognition system has two main tasks: verification and identification. Face verification is a 1:1 match that compares a face image against a template face image whose identity is being claimed. Face identification is a 1:N problem that compares a query face image against all image templates in a face database. Machine recognition of faces is gradually becoming very important due to its wide range of commercial and law enforcement applications, which include forensic identification, access control, border surveillance and human-computer interaction, and due to the availability of low-cost recording devices. Various biometric features can be used for human recognition, such as fingerprint, palm print, hand geometry, iris, face, speech, gait and signature. The problem with fingerprint, iris, palm print, speech and gait is that they need the active cooperation of the person, whereas face recognition does not require active cooperation: a person can be recognized without being instructed to do anything. Therefore, face recognition is much more advantageous compared to the other biometrics. Face recognition achieves a high identification or recognition rate, greater than 90%, on large face databases with well-controlled pose and illumination conditions.
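To make the distinction between verification (1:1) and identification (1:N) concrete, the following is a minimal sketch in Python. It assumes that faces have already been reduced to fixed-length feature vectors by some earlier stage (eigenface weights, network embeddings, etc.); the function names, the 128-dimensional vectors and the threshold value are illustrative assumptions, not part of any cited system.

```python
import numpy as np

def verify(probe_vec, claimed_template, threshold=0.6):
    """1:1 verification: accept the identity claim if the probe is
    close enough to the claimed person's stored template."""
    distance = np.linalg.norm(probe_vec - claimed_template)
    return distance < threshold

def identify(probe_vec, gallery):
    """1:N identification: return the gallery identity whose template
    is nearest to the probe."""
    best_id, best_dist = None, np.inf
    for person_id, template in gallery.items():
        dist = np.linalg.norm(probe_vec - template)
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id, best_dist

# Toy usage with random 128-dimensional feature vectors.
rng = np.random.default_rng(0)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = gallery["alice"] + 0.05 * rng.normal(size=128)
print(verify(probe, gallery["alice"]))   # True for a close match
print(identify(probe, gallery))          # ("alice", small distance)
```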
LITERATURE SURVEY:
I/P → Face Detection → Face Extraction → Face Recognition → O/P
FACE RECOGNITION BASICS
The first step in a face recognition system is to detect the face in an image. The main objective of face detection is to determine whether there are any faces in the image. If a face is present, the detector returns the location and extent of each face. Pre-processing is done to remove noise and to reduce reliance on precise registration. Various factors make face detection a challenging task: pose, the presence or absence of structural components, facial expression, occlusion, and image orientation. Facial feature detection is the process of detecting the presence and location of features such as the nose, eyebrows, eyes, lips, nostrils, mouth and ears; this is usually done under the assumption that there is only a single face in the image. In the face recognition process, the input image is compared with a database. The input image is also known as the probe and the database is called the gallery. The system then gives a match report, and classification is performed to identify the sub-population to which new observations belong [2]. There are three approaches to face recognition:
Feature-based approach
In the feature-based approach, local features such as the eyes and nose are segmented and used as input data for face detection, which simplifies the task of face recognition.
Holistic approach
In this approach, the whole face region is taken into account for face recognition. These methods are based on principal component analysis (PCA) techniques, which can be used to project a dataset into a lower dimension while retaining its characteristics. Examples include Eigenfaces and Fisherfaces.
Hybrid approach
The hybrid approach is a combination of the feature-based and holistic approaches. In this approach, both local features and the whole face region are used as input to the face recognition system.
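To make the detection, extraction and recognition pipeline described above concrete, here is a minimal sketch of the detection and preprocessing stage using OpenCV's bundled Haar cascade detector. The cascade file ships with the opencv-python package; the 100x100 crop size is an arbitrary illustrative choice, and the cropped patches would then be passed to whichever recognition technique follows.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade (ships with opencv-python).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_and_crop(image_path):
    """Detect faces in an image and return cropped, size-normalised
    grayscale face patches ready for a recognition stage."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    crops = []
    for (x, y, w, h) in faces:
        face = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
        crops.append(face)
    return crops

# Example call; "probe.jpg" is a hypothetical path, not a file from this paper.
# crops = detect_and_crop("probe.jpg")
```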
TECHNIQUES FOR FACE RECOGNITION
Eigenface
The Eigenface method is one of the most widely used algorithms for face recognition. The eigenfaces technique is based on the Karhunen-Loève transform, in which Principal Component Analysis (PCA) is used. PCA is successfully used to perform dimensionality reduction and is employed for both face recognition and detection. Mathematically, eigenfaces are the principal components of the face distribution and divide the face into feature vectors. The feature vector information can be obtained from the covariance matrix, and the resulting eigenvectors are used to quantify the variation between multiple faces. The faces are characterized by a linear combination of the eigenvectors with the highest eigenvalues: each face can be considered as a linear combination of the eigenfaces and can be approximated using the eigenvectors having the largest eigenvalues. The best M eigenfaces define an M-dimensional space, which is called the face space. Principal Component Analysis was also used by L. Sirovich and M. Kirby to represent pictures of faces efficiently. They showed that a face image can be approximately reconstructed using a small collection of weights for each face and a standard face picture; the weights describing each face are obtained by projecting the face image onto the eigenpicture. Eigenface is a practical approach to face recognition. Because of the simplicity of its algorithm, an eigenface recognition system is easy to implement, and it is efficient in processing time and storage, since PCA reduces the dimensionality of an image in a short period of time. The method requires a high correlation between the training data and the recognition data. The accuracy of eigenfaces depends on many things: as it uses raw pixel values for the projection, accuracy decreases with varying light intensity.
Preprocessing of the image is required to achieve a satisfactory result. An advantage of this algorithm is that the eigenfaces were designed exactly for this purpose, which makes the system very efficient. A drawback is that it is sensitive to lighting conditions and to the position of the head, and finding the eigenvectors and eigenvalues is time consuming. The size and location of each face image must remain similar. The PCA (eigenface) approach maps features onto principal subspaces that contain most of the energy.
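As a rough illustration of the eigenface idea described above, the following minimal numpy sketch computes eigenfaces from a matrix of flattened training faces, projects faces into the face space, and matches a probe by nearest neighbour. The component count, image size and distance metric are illustrative assumptions, not values prescribed by the papers discussed here.

```python
import numpy as np

def train_eigenfaces(images, num_components=20):
    """images: (num_faces, height*width) array of flattened grayscale faces.
    Returns the mean face and the top eigenfaces (principal components)."""
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data; rows of vt are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:num_components]

def project(face, mean_face, eigenfaces):
    """Weights of a face in the face space (one coefficient per eigenface)."""
    return eigenfaces @ (face - mean_face)

def recognize(probe, gallery_weights, mean_face, eigenfaces):
    """Nearest-neighbour match in the face space; returns gallery index."""
    w = project(probe, mean_face, eigenfaces)
    dists = np.linalg.norm(gallery_weights - w, axis=1)
    return int(np.argmin(dists))

# Toy usage with random stand-in data: 20 "faces" of 32x32 pixels.
rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 32 * 32))
mean_face, eigenfaces = train_eigenfaces(faces, num_components=10)
weights = np.array([project(f, mean_face, eigenfaces) for f in faces])
print(recognize(faces[3], weights, mean_face, eigenfaces))  # -> 3
```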
Neural Networks
Neural networks are used in many applications, such as pattern recognition problems, character recognition, object recognition, and autonomous robot driving. The main objective of using a neural network for face recognition is the feasibility of training a system to capture the complex class of face patterns. To get the best performance from a neural network, it has to be extensively tuned (number of layers, number of nodes, learning rates, etc.). Because neural networks are non-linear, they are a widely used technique for face recognition, and their feature extraction step may be more effective than Principal Component Analysis. The authors achieved 96.2% accuracy in face recognition on 400 images of 40 individuals. The classification time is less than 0.5 seconds, but the training time is as long as 4 hours. The network learns features in a hierarchical set of layers and provides partial invariance to translation, rotation, scale, and deformation. The disadvantage of the neural network approach is that the network generally has to be retrained when the number of classes (individuals) increases. A Multi-Layer Perceptron (MLP) with a feed-forward learning algorithm was chosen for the proposed system for its simplicity and its capability in supervised pattern matching; it has been successfully applied to many pattern classification problems. A new approach to face detection with Gabor wavelets and a feed-forward neural network was also presented. The method used the Gabor wavelet transform and a feed-forward neural network both for finding feature points and for extracting feature vectors. The experimental results showed that the proposed method achieves better results than other successful algorithms such as the graph matching and eigenfaces methods. A new class of convolutional neural network was proposed in which the processing cells are shunting inhibitory neurons. Shunting inhibitory neurons had previously been used in conventional feed-forward architectures for classification and non-linear regression and were shown to be more powerful than MLPs, i.e. they can approximate complex decision surfaces much more readily than MLPs. A hybrid neural network was also presented, which combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space in which inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample. The convolutional neural network (CNN) provides partial invariance to translation, rotation, scale, and deformation. The PCA+CNN and SOM+CNN methods are both superior to the eigenfaces technique even when there is only one training image per person, and the SOM+CNN method consistently performs better than the PCA+CNN method. A face detection method using a polynomial neural network (PNN) was also proposed; the PCA technique is used to reduce the dimensionality of image patterns and to extract features for the PNN. Using a single network, the authors achieved a fairly high detection rate and a low false positive rate on images with complex backgrounds, and in comparison with a multilayer perceptron the performance of the PNN is superior. To best reflect the geometry of the 3D face manifold and improve recognition, Spectral Regression Kernel Discriminant Analysis (SRKDA), based on regression and spectral graph analysis, was introduced. When the sample vectors are non-linear, SRKDA can give exact solutions more efficiently than ordinary subspace learning approaches. It not only solves high-dimensional and small-sample-size problems, but also enhances feature extraction from the local non-linear structure of the face. SRKDA only needs to solve a set of regularized regression problems, and no eigenvector computation is involved, which is a huge saving in computational cost.
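The following is a minimal sketch of a small convolutional network of the kind described above, written in PyTorch (the surveyed papers predate this library, so the architecture and layer sizes here are purely illustrative, not a reproduction of any cited network). It shows the hierarchical convolution and pooling layers that give partial invariance to translation, rotation, scale and deformation; a real system would train it on labelled face crops.

```python
import torch
import torch.nn as nn

class SmallFaceCNN(nn.Module):
    """A small convolutional network for face classification."""
    def __init__(self, num_identities):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 22 * 22, 128), nn.ReLU(),
            nn.Linear(128, num_identities),
        )

    def forward(self, x):  # x: (batch, 1, 100, 100) grayscale face crops
        return self.classifier(self.features(x))

# Quick shape check with a dummy batch of 100x100 face crops.
model = SmallFaceCNN(num_identities=40)
logits = model(torch.randn(8, 1, 100, 100))
print(logits.shape)  # torch.Size([8, 40])
```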
Fisherfaces
Fisherfaces is one of the most successful and widely used methods for face recognition. It is an appearance-based method. Linear (Fisher) discriminant analysis was developed by R. A. Fisher in 1936 and shows successful results when applied to face recognition. LDA is used to find the set of basis images that maximizes the ratio of between-class scatter to within-class scatter. A disadvantage of LDA is that the within-class scatter matrix is always singular, since the number of pixels in the images is larger than the number of images; this can increase the error rate when there is variation in pose and lighting within images of the same class. Many algorithms have been proposed to overcome this problem. Because the Fisherfaces technique takes advantage of within-class information, it minimizes variation within each class, so problems such as lighting variation across images of the same person can be overcome. The Fisherface method uses both principal component analysis and linear discriminant analysis to produce a subspace projection matrix, similar to that used in the eigenface method. However, the Fisherface method is able to take advantage of within-class information, minimising variation within each class, yet still maximising class separation. Like the eigenface construction process, the first step of the Fisherface technique is to take each (N×M) image array and reshape it into an ((N·M)×1) vector. Fisherface is similar to Eigenface, but with the enhancement of better classification of images of different classes. With Fisher's linear discriminant (FLD), one can classify the training set to deal with different people and different facial expressions, and better accuracy is obtained for facial expressions than with the eigenface approach. In addition, Fisherface removes the first three principal components, which are responsible for light intensity changes, so it is more invariant to light intensity. The disadvantages of Fisherface are that it is more complex than Eigenface in finding the projection onto face space, and that calculating the ratio of between-class scatter to within-class scatter requires a lot of processing time. Moreover, because of the need for better classification, the dimension of the projection in face space is not as compact as with Eigenface, resulting in larger storage requirements for each face and more processing time during recognition.
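A minimal sketch of the Fisherface pipeline, under the assumption that scikit-learn is available: PCA is applied first so that the within-class scatter matrix is no longer singular, LDA then maximises between-class versus within-class scatter, and a nearest-neighbour classifier matches faces in the resulting subspace. The data here are random stand-ins and the component counts are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in data: 40 people x 5 flattened 50x50 face images each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2500))
y = np.repeat(np.arange(40), 5)

# PCA first avoids the singular within-class scatter matrix; LDA then
# maximises between-class vs. within-class scatter (the Fisherface idea).
fisherfaces = make_pipeline(
    PCA(n_components=100),
    LinearDiscriminantAnalysis(n_components=39),  # at most C-1 components
    KNeighborsClassifier(n_neighbors=1),
)
fisherfaces.fit(X, y)
print(fisherfaces.predict(X[:3]))  # predicted identities for the first faces
```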
Elastic Bunch Graph Matching
Face recognition using elastic bunch graph matching recognizes faces by estimating a set of features using a data structure called a bunch graph. For each query image, the landmarks are estimated and located using the bunch graph; the features are then extracted as the responses of a number of Gabor filters at these landmarks, forming what is called a face graph. The matching score (MSEBGM) is calculated on the basis of the similarity between the face graphs of the database images and the query image. Elastic Bunch Graph Matching was proposed by Laurenz Wiskott, Jean-Marc Fellous, Norbert Krüger and Christoph von der Malsburg of the University of Southern California in 1997. This approach is quite different from Eigenface and Fisherface: it uses an elastic bunch graph to automatically locate the fiducial points of the face, such as the eyes, nose and mouth, and recognizes the face according to these features. Elastic Bunch Graph Matching (EBGM) uses the structural information of a face, which reflects the fact that images of the same subject tend to translate, scale, rotate, and deform in the image plane. It uses a labelled graph in which edges are labelled with distance information and nodes are labelled with wavelet coefficients in jets. This model graph can then be used to generate an image graph; the model graph can be rotated, scaled, translated and deformed during the matching process. The Gabor wavelet transform is used to produce the local features of the face images. Gabor wavelets are biologically motivated convolution kernels in the shape of plane waves restricted by a Gaussian envelope function; the set of convolution coefficients for kernels of different orientations and frequencies at one image pixel is called a jet. In elastic graph matching the basic process is to compare graphs with images and to generate new graphs. In its simplest version, a single labelled graph is matched onto an image. A labelled graph has a set of jets arranged in a particular spatial order, and a corresponding set of jets can be selected from the Gabor wavelet transform of the image. The image jets initially have the same relative spatial arrangement as the graph jets, and each image jet corresponds to one graph jet. The similarity of the graph with the image is then simply the average jet similarity between image and graph jets. To increase similarity, some translation, rotation and distortion is allowed up to a certain extent. In contrast to eigenfaces, the elastic bunch graph matching technique keeps one vector per facial feature. The advantage of this is that a change in, or the absence of, any one feature does not mean that the person will not be recognized. The stored data can easily be extended into a database: when a new face image is added, no additional effort is needed to modify the existing templates, as they are already stored in the database. It is possible to recognize a person with head rotations of up to 22 degrees. The disadvantage of this algorithm is that it is very sensitive to lighting conditions and that many graph nodes have to be placed manually on the face. When the changes in lighting are large, there is a significant decrease in the recognition rate.
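The following sketch illustrates the core of the jet comparison described above: Gabor responses at a landmark form a jet, jets are compared with a normalised dot product, and the graph similarity is the average jet similarity over corresponding landmarks. It uses OpenCV's getGaborKernel; the kernel size, number of scales and wavelengths are illustrative choices, and the elastic deformation of node positions performed by full EBGM is omitted.

```python
import cv2
import numpy as np

def gabor_jet(gray, point, num_orientations=8, num_scales=5):
    """Gabor jet: magnitudes of Gabor filter responses at one landmark,
    over several orientations and scales."""
    x, y = point
    jet = []
    for s in range(num_scales):
        lambd = 4.0 * (2 ** s)  # wavelength per scale (illustrative)
        for o in range(num_orientations):
            theta = np.pi * o / num_orientations
            kernel = cv2.getGaborKernel((31, 31), sigma=lambd / 2.0,
                                        theta=theta, lambd=lambd,
                                        gamma=1.0, psi=0)
            response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
            jet.append(abs(response[y, x]))
    return np.array(jet)

def jet_similarity(j1, j2):
    """Normalised dot product between two jets (1.0 means identical)."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-8))

def graph_similarity(landmarks, gray_a, gray_b):
    """Average jet similarity over corresponding landmarks of two images."""
    sims = [jet_similarity(gabor_jet(gray_a, p), gabor_jet(gray_b, p))
            for p in landmarks]
    return float(np.mean(sims))
```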
Template matching
In template matching, several face templates from different viewpoints can be used to characterize a single face. The grey levels of the face image can also be pre-processed into a suitable format before matching (Bichsel, 1991). Brunelli and Poggio (1993) automatically selected a set of four feature templates, i.e. the eyes, nose, mouth and the whole face, for all of the available faces. Their system was evaluated by comparing results with a geometry-based algorithm on 188 images of 47 subjects. The template matching algorithm is a very practical approach, very simple to use, and achieved approximately a 100% recognition rate. Principal Component Analysis using eigenfaces provides a linear arrangement of the templates. The main advantage of this approach is that it is easy to implement and the matching step itself is less expensive than many other feature classifiers; on the other hand, template-based algorithms can be computationally expensive, and the complexity arises mainly during the extraction of the templates, while the recognition step of comparing a given template with the input image is easily handled. Generally, template-based techniques outperform feature-based methods. Karungaru et al. (2004) use a template-based genetic algorithm and report results on target images obtained by adjusting the size of the template as a preprocessing step; edge detection and YIQ colour templates are exploited, the results are based on a distance-measure face recognition approach, and a comparison with existing methods is performed. Anlong et al. (2005) work on a grid to construct a reliable and proper infrastructure; this method is highly effective for larger databases and solves the face recognition problem at a reasonable computational cost. In Sao and Yegnanarayana (2007), an algorithm is proposed for person verification using a template-based face recognition method: an edginess-based face representation is first computed to process one-dimensional projections of the image, and the system is combined with neural networks to test images under varying pose and illumination conditions. Similarly, in Wang and Yang (2008) a face detection algorithm, rather than a face recognition algorithm, is proposed as a preprocessing step. Advantage is taken of template-based algorithms for face detection by constructing a general framework for hierarchical face detection, with features extracted from 2D images using PCA; the authors conclude that template algorithms are a good choice for face detection because they give the highest recognition rate. Similarly, in Levada et al. (2008) Dynamic Time Warping (DTW) and Long Short-Term Memory (LSTM) are investigated for neural network classification, in which a single feature template is large enough for feature extraction; a gradient-based learning algorithm is implemented while handling the associated gradient problems. The experimental results reveal that both methods perform well for face recognition, and the learning strategy gives a robust recognition rate. The authors sum up by saying that further improvements are still required in order to solve the recognition problem as it occurs in the real world. In a simple version of template matching, a test image, represented as a two-dimensional array of intensity values, is compared with a single template representing the whole face using a suitable metric, such as the Euclidean distance. There are several other, more sophisticated versions of template matching for face recognition. One can use more than one face template from different viewpoints to represent an individual's face, and a face from a single viewpoint can also be represented by a set of multiple distinctive smaller templates. The grey-level face image may also be properly processed before matching. Brunelli and Poggio automatically selected a set of four feature templates, i.e. the eyes, nose, mouth, and the whole face, for all of the available faces. They compared the performance of their geometrical matching algorithm and their template matching algorithm on the same database of faces, which contains 188 images of 47 individuals. Template matching was superior in recognition (100 percent recognition rate) to geometrical matching (90 percent recognition rate) and was also simpler. Since the principal components (also known as eigenfaces or eigenfeatures) are linear combinations of the templates in the data basis, the technique cannot achieve better results than correlation, but it may be less computationally expensive. One drawback of template matching is its computational complexity. Another problem lies in the description of the templates: since the recognition system has to be tolerant to certain discrepancies between the template and the test image, this tolerance might average out the differences that make individual faces unique. In general, template-based approaches are more straightforward than feature matching approaches. In summary, no existing technique is free from limitations, and further efforts are required to improve the performance of face recognition techniques, especially in the wide range of environments encountered in the real world.
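A minimal sketch of the simple version described above: a grayscale probe image is compared against whole-face templates with the Euclidean distance, and the closest template wins. The 64x64 size and the synthetic random data stand in for properly aligned face crops.

```python
import numpy as np

def euclidean_match(test_image, templates):
    """Compare a grayscale test image against a gallery of whole-face
    templates using the Euclidean distance and return the best match."""
    best_name, best_dist = None, np.inf
    for name, template in templates.items():
        dist = np.linalg.norm(test_image.astype(float) - template.astype(float))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist

# Toy usage with 64x64 synthetic "faces".
rng = np.random.default_rng(1)
gallery = {"person_a": rng.integers(0, 256, (64, 64)),
           "person_b": rng.integers(0, 256, (64, 64))}
probe = gallery["person_b"] + rng.integers(-5, 6, (64, 64))
print(euclidean_match(probe, gallery))  # ("person_b", small distance)
```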
Geometrical feature matching
Geometrical feature matching techniques are based on the computation of a set of geometrical features from the picture of a face. The overall configuration can be described by a vector representing the position and size of the main facial features, such as the eyes and eyebrows, nose, mouth, and the outline of the face. The earliest work on automated face recognition using geometrical features was done in 1973; that system achieved a 75% recognition rate on a database of 20 people using two images per person, one as the model and the other as the test image. In 1993, R. Brunelli and T. Poggio automatically extracted a set of geometrical features from the picture of a face, such as nose width and length, mouth position and chin shape. Thirty-five features were extracted, forming a 35-dimensional vector, and recognition was then performed with a Bayes classifier; they achieved a 90% recognition rate on a database of 47 people. I. J. Cox et al. introduced a mixture-distance technique which achieved a 95% recognition rate on a query database of 685 individuals, with each face represented by 30 manually extracted distances. Another approach used Gabor wavelet decomposition to detect feature points for each face image, which reduced the storage requirements for the database; typically 35 to 45 feature points per face were generated, and two cost values, a topological cost and a similarity cost, were evaluated. The recognition accuracy for the right person was 86%, and 94% of the correct persons' faces were among the top three candidate matches. In summary, geometrical feature matching based on precisely measured distances between features may be useful for finding matches in a large database, but it depends on the accuracy of the feature location algorithms, and current automated face feature location algorithms do not provide a high degree of accuracy and require considerable computational time. In 2006, Basavaraj and Nagaraj proposed a geometrical model for facial feature extraction. The process works on frontal face images, treating regions such as the ears and chin as potential features, which supports the development of face recognition methods. The proposed face model is divided into steps: the first step is pre-processing, whose main aim is to reduce noise and convert the input image into a binary one; the next step labels the facial features and then finds the origin of these labelled features; finally, the estimated distances used for matching are calculated. Khalid et al. (2008) try to reduce the search space by minimizing the facial feature information. The information is limited by extracting 60 fiducial control points (nose, mouth, eyes, etc.) from face images with different lighting and expressions, and the features are classified using point-to-point distance and angle measurements. This process achieves an 86% recognition rate. In Huiyu and Sadka (2011), a diffusion distance is computed over face images described by Gabor filter responses (including their size and extent), and the Gabor filter results for the image are used to distinguish between face representations in the database. Zhen et al. (2011) presented a recognition approach based on facial geometry: first the face image is segmented into multiple facial geometrical domains, such as image space and image orientation at different scales, and in a second step local binary patterns (LBP) are calculated. The presented approach provides a good face representation by exploiting facial information from different domains, which yields an efficient face recognition system. Similarly, Pavan et al. (2011) presented a geometry-based face recognition method which makes use of subspace-based models; these models capture geometrical properties of the face space and can support efficient recognition systems for a number of image applications.
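A minimal sketch of matching on geometric measurements. Facial landmark positions are assumed to come from some earlier detection step; the particular landmark names, the chosen distances and the normalisation by inter-eye distance are illustrative assumptions, not the feature sets used in the papers above.

```python
import numpy as np

def geometric_features(lm):
    """Build a small vector of distances and ratios from facial landmarks."""
    d = lambda a, b: np.linalg.norm(np.asarray(lm[a]) - np.asarray(lm[b]))
    eye_dist = d("left_eye", "right_eye")
    return np.array([
        eye_dist,
        d("nose_tip", "mouth_center") / eye_dist,  # normalised distances
        d("left_eye", "nose_tip") / eye_dist,
        d("right_eye", "nose_tip") / eye_dist,
        d("mouth_center", "chin") / eye_dist,
    ])

def match(probe_lm, gallery_lms):
    """Nearest neighbour over geometric feature vectors."""
    probe = geometric_features(probe_lm)
    dists = {name: np.linalg.norm(probe - geometric_features(lm))
             for name, lm in gallery_lms.items()}
    return min(dists, key=dists.get)

# Toy usage with hand-picked (x, y) landmark positions.
gallery = {
    "alice": {"left_eye": (30, 40), "right_eye": (70, 40), "nose_tip": (50, 60),
              "mouth_center": (50, 80), "chin": (50, 100)},
    "bob":   {"left_eye": (28, 42), "right_eye": (74, 41), "nose_tip": (51, 65),
              "mouth_center": (51, 88), "chin": (52, 110)},
}
probe = {"left_eye": (31, 39), "right_eye": (69, 41), "nose_tip": (50, 61),
         "mouth_center": (50, 81), "chin": (50, 101)}
print(match(probe, gallery))  # expected: "alice"
```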
3D Morphable Model
In a 3D morphable model, the shape and texture of a real face are described as a convex combination of the shape and texture vectors of a set of example faces. A face can be identified from the fitted model in two ways under different viewing conditions; in the first, the model fit yields coefficients representing the shape and texture intrinsic to the face, independent of the imaging conditions. Zhao and Chellappa (2000) combine the 3D morphable model with a computer-aided system. From a single image, the algorithm iteratively estimates the three-dimensional shape, the texture and all relevant parameters of the three-dimensional scene. Lighting is not limited to Lambertian reflection: specular reflections and shadows, which have a significant impact on the appearance of human skin, are also taken into account. This method is based on a three-dimensional morphable face model that captures the characteristic properties of faces, which can be learned automatically from a data set. The morphable model represents the geometry and texture of the face and includes a probability density function over the face space. In a more recent development, Bustard and Nixon (2010) use the ear as a biometric, since everyone has a unique ear pattern; the authors focus on a 3D morphable model of the head and ear. Facial expressions are handled using the same morphable-model approach in order to produce and synthesize animation, for which the authors introduce a weighted feature map. The experimental results reveal high performance and robustness of the system compared with existing methods. In Unsang's model (2010), a 3D aging model is presented to overcome the facial aging problem, and experimental results show improved performance of face recognition systems when facial aging is tackled. Similarly, Utsav et al. (2011) presented a face recognition system based on a 3D generic elastic model to tackle the problem of pose variation during face recognition. The presented 3D model comprises a database of 2D pose views which are further adjusted for the matching process.
Experimental results reveal high recognition accuracy under controlled as well as uncontrolled real-world scenarios.
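The core idea above, that a new face is the mean face plus a linear combination of learned shape and texture basis vectors, can be sketched in a few lines of numpy. The basis here is random stand-in data of arbitrary size; in a real morphable model the bases would be obtained by PCA over registered 3D face scans and the coefficients would be estimated by fitting the model to an image.

```python
import numpy as np

def synthesize_face(mean_shape, shape_basis, alpha,
                    mean_texture, texture_basis, beta):
    """Morphable-model synthesis: mean face plus a linear combination of
    shape and texture basis vectors, weighted by coefficients alpha/beta."""
    shape = mean_shape + shape_basis.T @ alpha        # (3*num_vertices,)
    texture = mean_texture + texture_basis.T @ beta   # (3*num_vertices,)
    return shape, texture

# Toy dimensions: 1000 vertices, 50 shape and 50 texture coefficients.
num_vertices, k = 1000, 50
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=3 * num_vertices)
shape_basis = rng.normal(size=(k, 3 * num_vertices))
mean_texture = rng.normal(size=3 * num_vertices)
texture_basis = rng.normal(size=(k, 3 * num_vertices))
alpha = 0.1 * rng.normal(size=k)
beta = 0.1 * rng.normal(size=k)
shape, texture = synthesize_face(mean_shape, shape_basis, alpha,
                                 mean_texture, texture_basis, beta)
print(shape.shape, texture.shape)  # (3000,) (3000,)
```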
CONCLUSION:
Face recognition is a challenging problem in the fields of image processing and computer vision. Because of its many applications in different fields, face recognition has received great attention. In this paper, different face recognition algorithms have been described along with their advantages and disadvantages. Any of them can be used according to the requirements of a particular application, and the efficiency and performance of the discussed algorithms can be improved further, leading to newer and more advanced techniques.
REFERENCES:
[1] Jigar M. Pandya, Devang Rathod, Jigna J. Jadav, "A Survey of Face Recognition Approach," International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, Vol. 3, Issue 1, January-February 2013, pp. 632-635.
[2] Müge Çarıkçı, Figen Özen, "A Face Recognition System Based on Eigenfaces Method," INSODE 2011, ScienceDirect.
[3] Sushma Jaiswal, Sarita Singh Bhadauria, Rakesh Singh Jadon, "Comparison Between Face Recognition Algorithms - Eigenfaces, Fisherfaces and Elastic Bunch Graph Matching," Journal of Global Research in Computer Science, Vol. 2, No. 7, July 2011.
[4] Ming-Hsuan Yang, David J. Kriegman, Narendra Ahuja, "Detecting Faces in Images: A Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 1, January 2002.
[5] Lin-Lin Huang, Akinobu Shimizu, Yoshihiro Hagihara, Hidefumi Kobatake, "Face Detection from Cluttered Images Using a Polynomial Neural Network," Elsevier Science, 2002.
[6] Yue Ming, Qiuqi Ruan, Xiaoli Li, Meiru Mu, "Efficient Kernel Discriminate Spectral Regression for 3D Face Recognition," Proceedings of ICSP, 2014.
[7] "Survey Paper on the Timeline of Face Detection Techniques," International Journal of Computer Applications (0975-8887), Applications of Computers and Electronics for the Welfare of Rural Masses (ACEWRM), 2015.