A Face Recognition Review based on Principal Component Analysis and Local Binary Patterns
Neha V. Tapase
Student, Electronics & Telecommunications Dept.
MPSTME, NMIMS University Mumbai, India
Priyanka Verma
Asst. Prof., Electronics & Telecommunications Dept.
MPSTME, NMIMS University Mumbai, India
Abstract: Face Recognition has been an area of vast interest to researchers for the past few decades because of its varied scope of application, which ranges from entertainment to security and surveillance. The ease of acquiring data is another reason for face recognition being preferred over other biometric features. Although a lot of research has been carried out in this field, it still offers great scope for improvement in overcoming challenges with respect to pose and/or expression variations, occlusions, and image acquisition problems such as illumination and blurring. In this paper, we discuss two methods for face recognition, namely Principal Component Analysis (PCA), which has been in use for quite some time, and Local Binary Patterns (LBP), which is relatively new. The algorithms for both methods are explained in detail, and the main features of each method and its effectiveness are discussed in brief.
Keywords: Face Recognition, Principal Component Analysis, Local Binary Pattern, Review
INTRODUCTION
Human beings have been using various physical and behavioral traits to correctly identify each other and their surroundings for a long time. The human brain is so advanced that it only takes a few seconds to perform this task efficiently. For several years, a great deal of research has gone into replicating these results using machine intelligence across various fields of application.
Fig. 1. Configuration of generic face recognition system [1]
Many methods are available for person identification which make use of unique human characteristics, such as fingerprint analysis, iris recognition, DNA matching and voice recognition. Some of these, such as fingerprint analysis, retinal scans or DNA scans, are also considered very reliable. But these methods rely on the cooperation of the participants, whereas a personal identification system based on the analysis of frontal or profile images of the face is often effective without the participant's cooperation or knowledge, as it is a non-intrusive process [1], [2].
Face Detection
Face detection is part of the larger problem of detecting objects in a photo or video. One of the criteria of success in detecting objects is that their orientation should not matter. This is very important when identifying faces in images and videos, since it rarely occurs that the faces in two photos have the same orientation. Another important factor is independence from illumination levels. If there is a bright source of light behind a face, the face appears dark, while if it is illuminated by a light source placed behind the camera, it appears brighter than it actually is. The detection algorithm should handle both cases. A robust real-time face detection algorithm has been developed by the authors of [3].
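As a rough illustration of this detection stage, the following sketch uses OpenCV's pretrained Haar cascade, a common implementation of the detector of [3]; OpenCV itself, the cascade file name and the parameter values are our assumptions and are not part of the original paper.

```python
# Illustrative sketch only: OpenCV's pretrained Haar cascade is one common
# implementation of the Viola-Jones detector cited as [3].
import cv2

def detect_faces(image_path):
    # Pretrained frontal-face cascade shipped with the opencv-python package.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Returns a list of (x, y, w, h) bounding boxes of detected faces.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```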
Face Recognition
The task of face recognition in still images consists of identifying persons in a set of test images with a system that has been previously trained on a collection of face images labelled with each person's identity.
Face recognition can be divided into the following basic applications:
- Identification: an unknown input face is to be recognized by matching it against a database of faces of different known individuals. It is assumed that the person is in the database.
- Verification: an input face claims an identity and the system must confirm or reject it. The person is also a member of the database.
A general statement of the problem of machine recognition of faces can be formulated as follows: given still or video images of a scene, identify or verify one or more persons in the scene using a stored database of faces [1]. The solution to this problem is presented in Fig. 1.
FACE RECOGNITION TECHNIQUES
Face recognition techniques can be broadly classified as below:
- Holistic methods: in these methods, the whole face is used as a characteristic feature for recognition.
- Feature-based methods: here, non-holistic features such as structural face characteristics (the eyes, nose and mouth) and the geometric relations between them are used to make the final decision.
- Hybrid methods: these approaches try to take advantage of both holistic and feature-based methods.
In this paper, we discuss one technique based on the holistic method and another based on the hybrid approach. We first explain the concept of Principal Component Analysis (PCA) [4], [5], [6], [7], which considers the whole face as a feature vector. Then we move on to Local Binary Patterns (LBP) [8], [9], [10] based face recognition, which makes use of local texture information of the entire face as a feature vector.
PCA using Eigenfaces
Eigenfaces are a set of eigenvectors of the covariance matrix computed from a set of known face images, used in computer vision for the recognition of human faces. Specifically, the Eigenfaces are the principal components of a distribution of faces [6].
Face Recognition using Eigenfaces
The process of face recognition explained in [5] and implemented by the authors of [6] is given below.
Initialization process:
- Acquire a set of training images.
- Transform the face images into eigenfaces, keeping only the best M eigenfaces (those with the highest eigenvalues), which define the face space.
- Calculate the weights for each known image in the training dataset by projecting its face image onto the face space.
Recognition process:
- Given an image to be recognized, obtain its eigenface components and calculate the corresponding weights by projecting it onto each of the eigenfaces.
- Find the Euclidean distance between the weight vectors.
- Recognition is then done by minimizing the Euclidean distance. If the minimum Euclidean distance is below a threshold value, the face is recognized.
Fig. 2 shows a flowchart describing this process.
Fig. 2. Flowchart for Face Recognition using Eigenfaces
Steps to create a set of Training Images for analysis: [6]
- Read all the M face images in the training set database. Transform each of these images by resizing and reshaping it into a 1-D vector of size N (= number of rows of the image × number of columns of the image) and place it into the training set
$\Gamma_1, \Gamma_2, \ldots, \Gamma_M$ (1)
- Normalize this set and calculate the mean image $\Psi$,
$\Psi = \frac{1}{M}\sum_{n=1}^{M}\Gamma_n$ (2)
- The difference between each input image $\Gamma_i$ and the mean image is given by
$\Phi_i = \Gamma_i - \Psi$ (3)
- Find a set of M orthonormal vectors $u_n$ which have the largest possible projection onto the data. The k-th vector $u_k$ is chosen such that
$\lambda_k = \frac{1}{M}\sum_{n=1}^{M}\left(u_k^{T}\Phi_n\right)^{2}$ (4)
is maximized, subject to the orthonormality constraint
$u_l^{T}u_k = \delta_{lk}$ (5)
Here, $u_k$ and $\lambda_k$ are the eigenvectors and eigenvalues of the covariance matrix C.
- Calculate the covariance matrix C as
$C = \frac{1}{M}\sum_{n=1}^{M}\Phi_n\Phi_n^{T} = AA^{T}$, where $A = [\Phi_1\ \Phi_2\ \ldots\ \Phi_M]$ (6)
- Since the $N \times N$ matrix $AA^{T}$ is too large to work with directly, find the eigenvectors $v_l$ of the much smaller $M \times M$ matrix $A^{T}A$,
$A^{T}A\,v_l = \mu_l v_l$ (7)
and obtain the eigenfaces $u_l$ from them as
$u_l = \sum_{k=1}^{M} v_{lk}\Phi_k = A v_l$ (8)
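For illustration, a minimal NumPy sketch of training steps (1)-(8) might look as follows; the function and variable names are ours, and details such as the normalisation of the eigenfaces are assumptions rather than part of [6].

```python
import numpy as np

def train_eigenfaces(images, num_components):
    """Eigenface training sketch for steps (1)-(8); `images` is a list of
    equally sized 2-D grayscale arrays (the M training faces)."""
    # (1) Reshape each image into a 1-D vector of length N and stack them.
    gamma = np.stack([img.astype(np.float64).ravel() for img in images])  # M x N
    # (2) Mean image Psi.
    psi = gamma.mean(axis=0)
    # (3) Difference images Phi_i = Gamma_i - Psi (rows of A^T).
    phi = gamma - psi                                                     # M x N
    # (6)-(7) Eigenvectors of the small M x M matrix A^T A instead of the
    # huge N x N covariance matrix A A^T.
    small_cov = phi @ phi.T / len(images)                                 # M x M
    eigvals, v = np.linalg.eigh(small_cov)
    order = np.argsort(eigvals)[::-1][:num_components]
    # (8) Map back to image space: u_l = A v_l, then normalise each eigenface.
    eigenfaces = phi.T @ v[:, order]                                      # N x M'
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    # Weights of each training image (its vector Omega), used later in (9)-(11).
    weights = phi @ eigenfaces                                            # M x M'
    return psi, eigenfaces, weights
```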
Steps to Recognize Input Test Image: [6]
The recognition procedure involves the following steps:
- Read the input test image. Resize and transform it into its eigenface components: subtract the calculated mean image from the input image and find the corresponding weights by projecting this difference onto each eigenface,
$\omega_k = u_k^{T}(\Gamma - \Psi)$ (9)
Each weight is placed in a vector $\Omega$,
$\Omega^{T} = [\omega_1, \omega_2, \ldots, \omega_{M}]$ (10)
- After computing the Euclidean distance between the input test image's weight vector and those of the training data, determine which face class provides the best description for the input image by minimizing the Euclidean distance
$\epsilon_k = \lVert \Omega - \Omega_k \rVert$ (11)
- If $\epsilon_k$ is below an established threshold $\theta$, i.e. $\epsilon_k < \theta$, then the input test image is considered to be a known face, and it belongs to the class that achieves the shortest Euclidean distance.
- If $\epsilon_k$ is above the threshold $\theta$ but below a second threshold $\theta_1$, i.e. $\theta < \epsilon_k < \theta_1$, the input test image is considered an unknown face.
- If $\epsilon_k$ is above both thresholds $\theta$ and $\theta_1$, the input test image is NOT considered a face.
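A corresponding sketch of recognition steps (9)-(11) and the threshold tests is given below; the thresholds $\theta$ and $\theta_1$ are application-dependent values an implementer would have to tune, and the return format is purely illustrative.

```python
import numpy as np

def classify_face(test_image, psi, eigenfaces, train_weights, theta, theta_1):
    """Recognition sketch for equations (9)-(11); theta and theta_1 are
    application-dependent thresholds chosen by the implementer."""
    # (9)-(10) Project the mean-subtracted test image onto the eigenfaces.
    phi = test_image.astype(np.float64).ravel() - psi
    omega = eigenfaces.T @ phi                       # weight vector Omega
    # (11) Euclidean distance to every training weight vector.
    distances = np.linalg.norm(train_weights - omega, axis=1)
    k = int(np.argmin(distances))
    eps_k = distances[k]
    if eps_k < theta:
        return ("known face", k)       # class with the shortest distance
    elif eps_k < theta_1:
        return ("unknown face", None)
    else:
        return ("not a face", None)
```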
LBP based Face Recognition
The Local Binary Pattern (LBP) operator is essentially a discriminative feature space. Its key advantages are:
- invariance to monotonic gray-level changes
- computational efficiency
These properties of LBP can be exploited for face description, as a face can be seen as a composition of micro-texture patterns.
Local Binary Patterns
The LBP operator assigns a value to every pixel of an image by comparing each pixel in its 3×3 neighbourhood with the center pixel value: if the neighbouring pixel is greater than or equal to the center pixel, it is assigned 1, otherwise 0. The resulting binary number is then converted to a decimal number, as shown in Fig. 3. The histogram of the assigned values can then be used as a texture descriptor [8].
Fig. 3. The Basic LBP Operator
In other words, given a pixel position $(x_c, y_c)$, the resulting LBP can be expressed as follows [9]:
$LBP_{P,R}(x_c, y_c) = \sum_{n=0}^{P-1} s(g_n - g_c)\,2^{n}$ (11)
where $g_c$ and $g_n$ are, respectively, the gray-level values of the central pixel and the surrounding pixels in the circular neighbourhood with a radius $R$, and the function $s(k)$ is defined as
$s(k) = \begin{cases} 1, & k \geq 0 \\ 0, & k < 0 \end{cases}$ (12)
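A direct sketch of this basic operator, following equations (11)-(12) for a 3×3 neighbourhood, is shown below; the clockwise neighbour ordering (and hence the bit order of the code) is our assumption, which does not affect matching as long as it is used consistently.

```python
import numpy as np

def lbp_3x3(image):
    """Basic 3x3 LBP sketch: label each interior pixel using equations
    (11)-(12); the clockwise neighbour ordering here is an assumption."""
    img = image.astype(np.int32)
    h, w = img.shape
    labels = np.zeros((h, w), dtype=np.uint8)
    # Offsets of the 8 neighbours, starting at the top-left, clockwise.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y, x]
            code = 0
            for n, (dy, dx) in enumerate(offsets):
                # s(g_n - g_c): 1 if the neighbour is >= the center pixel.
                if img[y + dy, x + dx] >= center:
                    code |= (1 << n)
            labels[y, x] = code
    return labels
```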
Local Binary Patterns Extended:
Later, the LBP operator was extended to use neighbourhoods of different sizes or scales. In this process, P sampling points on the circumference of a circle with radius R around the center pixel are compared with the value of the center pixel. Bilinear interpolation is required to compute the values of sampling points that do not coincide with pixel centers, for any given radius and number of sampling points. For such neighbourhoods the notation (P, R) is used [8]. See Fig. 4 for examples of circular neighbourhoods.
Fig. 4. Some examples of E-LBP with different sampling points and radius
Improved Local Binary Patterns:
Another variation of the standard LBP process is Improved LBP (ILBP). Here, the mean intensity of all the pixels in a block is computed and then compared with each pixel. An example of this process is illustrated in Fig. 5. First, the mean intensity of all the pixels, including the center pixel, is calculated. Then, each pixel whose value is greater than the mean is assigned 1, otherwise 0.
Fig. 5. I-LBP Illustration
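A small sketch of the ILBP comparison for a single 3×3 block is given below; the bit ordering and the packing of the nine bits into an integer label are our assumptions, since they are only shown graphically in Fig. 5.

```python
import numpy as np

def ilbp_block_code(block):
    """ILBP sketch for a single 3x3 block: every pixel, including the
    center, is compared with the block mean, giving a 9-bit code."""
    block = np.asarray(block, dtype=np.float64)
    mean = block.mean()
    bits = (block.ravel() >= mean).astype(np.uint16)   # 1 if pixel >= mean
    # Pack the 9 bits into one integer label (bit order is arbitrary here).
    return int(sum(bit << i for i, bit in enumerate(bits)))
```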
Uniform Patterns:
Uniform Patterns are yet another extension of the original LBP operator. If the binary LBP result (for a circular pattern) contains at most two bitwise transitions from 0 to 1 or vice versa, it is known as a uniform LBP pattern. While computing the LBP histogram, every uniform pattern is stored in its own separate bin, and all non-uniform patterns are assigned to a single bin.
Table 1 gives a precise list of the types of LBP patterns based on the number of bitwise transitions.
Table 1: Uniform and Non-Uniform LBP
Fig. 6 shows the representation of an image using the uniform LBP patterns as well as the non-uniform patterns. We can see that the uniform patterns can successfully be used to represent the original image. Fig. 6 uses $LBP^{u2}_{16,2}$, where $u2$ implies uniform LBP and $(16, 2)$ indicates that 16 sampling points are taken on a circle of radius 2 [8].
Fig. 6. Face image split into an image with only pixels with uniform patterns and an image with only non-uniform patterns, using $LBP^{u2}_{16,2}$ [8]
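The uniformity test and the corresponding histogram binning can be sketched as follows; the helper names are illustrative, and for P = 8 this yields the usual 58 uniform bins plus one shared bin for all non-uniform patterns.

```python
import numpy as np

def is_uniform(code, P=8):
    """A pattern is 'uniform' if its circular binary string has at most
    two 0/1 transitions (e.g. 00001110); otherwise it is non-uniform."""
    bits = [(code >> i) & 1 for i in range(P)]
    transitions = sum(bits[i] != bits[(i + 1) % P] for i in range(P))
    return transitions <= 2

def uniform_histogram(labels, P=8):
    """Histogram sketch: one bin per uniform pattern, plus a single
    shared bin for all non-uniform patterns."""
    uniform_codes = [c for c in range(2 ** P) if is_uniform(c, P)]
    bin_of = {c: i for i, c in enumerate(uniform_codes)}
    nonuniform_bin = len(uniform_codes)               # last bin
    hist = np.zeros(len(uniform_codes) + 1)
    for code in np.ravel(labels):
        hist[bin_of.get(int(code), nonuniform_bin)] += 1
    return hist / max(hist.sum(), 1)                  # normalised histogram
```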
Face Description with LBP
The procedure consists of using the texture descriptor to build local descriptions of the face and combining them into a global feature. The face image is divided into several blocks and texture descriptors are extracted independently from each block. These descriptors are then concatenated to form a global feature vector of the face. Fig. 7 provides a visual representation of the LBP process.
Fig. 7. LBP based Face Description [1]
In Fig. 8, the process of face description is explained with the help of a flowchart.
Fig. 8. Flowchart of LBP Process
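Putting the pieces together, a sketch of the block-wise face descriptor might look as follows; it reuses the lbp_3x3 and uniform_histogram sketches above, and the 7×7 grid is an illustrative choice rather than a value taken from [8].

```python
import numpy as np

def lbp_face_descriptor(face_image, grid=(7, 7)):
    """Global face descriptor sketch: split the labelled image into a
    grid of blocks and concatenate the per-block LBP histograms.
    Relies on the lbp_3x3 and uniform_histogram sketches defined earlier;
    the 7x7 grid size is an illustrative choice, not the paper's."""
    labels = lbp_3x3(face_image)
    rows, cols = grid
    h, w = labels.shape
    block_h, block_w = h // rows, w // cols
    histograms = []
    for r in range(rows):
        for c in range(cols):
            block = labels[r * block_h:(r + 1) * block_h,
                           c * block_w:(c + 1) * block_w]
            histograms.append(uniform_histogram(block))
    return np.concatenate(histograms)   # global feature vector
```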
LBP Face Recognition Algorithm
The algorithm used for face recognition by the authors of [8] is given below.
Input: Training Image set.
Output: Features extracted from the face images and recognition of an unknown face image.
- Obtain the training set database.
- For each image I in the training image set, compute the histogram by concatenating all the labels obtained by applying LBP to each block of the image.
- Save each histogram as a feature vector for each image in the training set into a single training feature database.
- Obtain the input test image and perform feature extraction using LBP.
- Compare the test face image with all images in the training feature database.
- The face is successfully recognized if the test image's LBP feature matches one of those from the training images.
A flowchart illustrating the above algorithm is shown in Fig. 9.
Fig. 9. Flowchart of LBP based Face Recognition [8]
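The paper only states that the test image's LBP feature is matched against the training features; one common realisation of this matching step, used for example in [10], is nearest-neighbour classification with the chi-square distance, sketched below as an assumption rather than the authors' exact method.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    # Chi-square distance between two normalised histograms.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def recognise(test_descriptor, training_descriptors, labels):
    """Nearest-neighbour matching sketch: chi-square nearest neighbour is
    one common choice for comparing LBP histograms, used here as an
    assumption rather than the paper's stated method."""
    distances = [chi_square_distance(test_descriptor, d)
                 for d in training_descriptors]
    best = int(np.argmin(distances))
    return labels[best], distances[best]
```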
CONCLUSIONS
In this paper, we have discussed face recognition using the simple and widely used Eigenfaces with PCA method and the relatively new LBP approach.
Eigenfaces with PCA is a holistic approach, where the whole facial image is taken as a feature vector, while the LBP method is a hybrid approach that uses local texture descriptors computed over the entire facial image as the feature vector.
PCA is successful in both the detection and the recognition of faces, and it is one of the simplest and most popular methods for face recognition. However, it requires frontal face images, proper illumination and gray-scale normalization of all images, and its handling of pose variations and occlusions remains a constraint [7].
The LBP approach provides better tolerance to monotonic gray-scale changes and has also proved to be more computationally efficient [8], [9], [10]. Normalization of images before applying the LBP operator is not required.
REFERENCES
[1] W. Zhao, R. Chellappa, P. Phillips and A. Rosenfeld, "Face Recognition: A Literature Survey," ACM Computing Surveys, vol. 35, no. 4, pp. 399-458, December 2003.
[2] A. K. Jain, A. Ross and S. Prabhakar, "An Introduction to Biometric Recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 4-20, January 2004.
[3] P. Viola and M. J. Jones, "Robust Real-Time Face Detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.
[4] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min and W. Worek, "Overview of the Face Recognition Grand Challenge," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005.
[5] A. Singh, B. Singh and M. Verma, "Comparison of Different Algorithms of Face Recognition," VSDR-IJEECE, vol. 2, no. 5, pp. 272-278, 2012.
[6] K. Arora, "Real Time Application of Face Recognition Concept," International Journal of Soft Computing and Engineering (IJSCE), vol. 2, no. 5, pp. 191-197, November 2012.
[7] W. A. Barrett, "A Survey of Face Recognition Algorithms and Testing Results," IEEE, vol. 305, p. 301, 1998.
[8] A. Rahim, N. Hossain, T. Wahid and S. Azam, "Face Recognition using Local Binary Patterns," Global Journal of Computer Science and Technology Graphics & Vision, vol. 13, no. 4, pp. 1-9, 2013.
[9] D. Huang, C. Shan, M. Ardabilian, Y. Wang and L. Chen, "Local Binary Patterns and Its Application to Facial Image Analysis: A Survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 41, no. 6, pp. 765-781, November 2011.
[10] T. Ahonen, A. Hadid and M. Pietikainen, "Face Description with Local Binary Patterns: Application to Face Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037-2041, December 2006.