- Open Access
- Authors : Sandeep Mishra, Anupam Dubey, Nisha Bhatt
- Paper ID : IJERTCONV3IS20076
- Volume & Issue : ISNCESR – 2015 (Volume 3 – Issue 20)
- Published (First Online): 24-04-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Face Recognition System based on Subspace Linear Discriminant Analysis
Sandeep Mishra
Tech Scholar, Digital Electronics, Dept. of E&Tc Engg, LNCT Jabalpur

Nisha Bhatt
Professor & Head, Dept. of E&Tc Engg, LNCT Jabalpur

Anupam Dubey
Assistant Professor, Dept. of E&Tc Engg, LNCT Jabalpur
Abstract- This paper presents a face recognition system based on subspace linear discriminant analysis, in which facial features are extracted using Principal Component Analysis (PCA) followed by Linear Discriminant Analysis (LDA) based dimension reduction. Based on the literature review, a methodology is designed that consists of preprocessing, dimension reduction by PCA, feature extraction for class separability by LDA, and finally classification. Experiments on the ORL face database give a recognition rate of 96.35%, demonstrating the efficiency of the proposed method over traditional automatic face recognition systems.
Keywords- Biometrics, PCA, LDA, Recognition Rate
INTRODUCTION
Face recognition technologies have been widely used in commercial, law enforcement, and military applications. An automatic system performs face detection, face verification, and finally face recognition in high-alert areas such as airport security, traffic supervision and monitoring, human-computer interaction, and environments such as cars and mobile devices. However, the challenges associated with this biometric technology grow deeper under variations in illumination, occlusion, pose, expression, aging, and disguise. Among the methods proposed so far, researchers have found that appearance-based approaches yield good results. These approaches operate directly on images or appearances of the face, where the complete face image is processed as a two-dimensional matrix. The resulting face image features are then used to represent the class to which the image belongs. Features are chosen such that they have high separability when processed further, while features with low separability are discarded [1][2].
To decrease computational time, the database generated after various transformations is represented in a lower-dimensional space. Efficient appearance-based methods such as Principal Component Analysis, Linear Discriminant Analysis, and Independent Component Analysis make use of eigenfaces, Fisherfaces, and independent components, respectively. PCA gives class representations in an orthogonal linear space, whereas LDA generates class-discriminatory information in a linearly separable space that is not necessarily orthogonal [3]. These techniques give good results under varying conditions. In these techniques, dimensions are first reduced using eigenfaces, and Fisher space is then applied for class separability.
RELATED WORK AND BACKGROUND
In the feature-based approach, key information about facial features such as the eyes, nose, mouth, and chin is gathered with the help of deformable templates and extensive mathematics and converted into a feature vector. Yuille et al. [4] proposed deformable template techniques, where the face and its features are determined through interactions with the face image. In the image-based approach, information-theoretic concepts such as class information, class separability, independent components, and the energy content of the whole face image are utilised. Since the whole face image is used, this method is also termed the holistic method. Turk et al. [5] developed PCA-based face recognition using the eigenface technique. The term eigenface is used because the mathematical algorithm uses eigenvectors to represent the principal components of the face. Weights are used to represent the eigenface features, so a comparison of these weights permits identification of individual faces from a database. Zhao et al. [6] use linear discriminant analysis to maximize the scatter between different classes and minimize the scatter of the input data within the same class. While PCA tries to generalize the input data to extract features, LDA tries to discriminate the input data by dimension reduction. Utilizing both techniques, face recognition is performed by subspace LDA.
FEATURE EXTRACTION ALGORITHM AND CLASSIFICATION TECHNIQUES
In this section, the PCA and LDA techniques are explained mathematically. Principal Component Analysis is used to reduce the dimension of the dataset, while Linear Discriminant Analysis is used to increase the differences between different classes so as to increase accuracy and reduce computational cost.
A. Principal Component Analysis
Principal Component Analysis is a dimension reduction technique that maximizes the scatter of all projected samples. This is achieved by utilizing the eigenspace.

In PCA, a one-dimensional feature vector is extracted from each image in the training dataset, and the training-dataset image space is thereby generated. PCA categorizes images according to the distance between feature vectors in this image space [7].

For N sample images (x1, x2, ..., xN) in an n-dimensional image space, where each image belongs to one of C classes {X1, X2, ..., XC}, PCA performs a linear transformation of the original n-dimensional image space into an m-dimensional feature space, where m < n. The new feature vectors yk are then given by

$$y_k = W^T x_k, \quad k = 1, 2, \ldots, N \qquad (1)$$

where yk is the m-dimensional transformed feature vector and W is an n x m linear transformation matrix with orthonormal columns. From here, the total scatter matrix is defined as
$$S_T = \sum_{k=1}^{N} (x_k - \mu)(x_k - \mu)^T \qquad (2)$$

where µ is the mean image of the training database. After applying the linear transformation W^T, the scatter of the transformed feature vectors (y1, y2, ..., yN) is given by W^T S_T W.

In PCA, the optimal projection matrix Wopt is selected to maximize the determinant of the total scatter matrix of the projected samples, i.e.

$$W_{opt} = \arg\max_{W} \left| W^T S_T W \right| \qquad (3)$$

$$W_{opt} = [W_1 \; W_2 \; \ldots \; W_m] \qquad (4)$$

where {Wi} is the set of n-dimensional eigenvectors of the total scatter matrix S_T corresponding to the m largest eigenvalues [8].
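To illustrate Eqs. (1)-(4), the following is a minimal NumPy sketch of eigenface-style PCA on a matrix whose columns are vectorized face images. It uses the standard Gram-matrix shortcut; the function and variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def pca_eigenfaces(train_images, num_components):
    """train_images: (n_pixels, n_samples) matrix, one vectorized face per column.
    Returns the mean image and the projection matrix W of Eq. (4)."""
    mean_face = train_images.mean(axis=1, keepdims=True)
    centered = train_images - mean_face                        # x_k - mu
    # S_T = centered @ centered.T (Eq. 2) is n_pixels x n_pixels; the smaller
    # Gram matrix centered.T @ centered shares its nonzero eigenvalues.
    gram = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(gram)                    # ascending order
    order = np.argsort(eigvals)[::-1][:num_components]         # m largest eigenvalues
    # Map Gram-matrix eigenvectors back to image space and normalize -> eigenfaces.
    W = centered @ eigvecs[:, order]
    W /= np.linalg.norm(W, axis=0, keepdims=True)
    return mean_face, W

def project(images, mean_face, W):
    """Eq. (1) in its usual mean-centred form: y_k = W^T (x_k - mu)."""
    return W.T @ (images - mean_face)
```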
B. Linear Discriminant Analysis
Linear Discriminant Analysis makes use of the Fisher space method to achieve maximum discrimination between classes and also to achieve dimensionality reduction. In LDA, within-class and between-class scatter matrices are defined [6].

For a set of N sample images (x1, x2, ..., xN) taking values in an n-dimensional image space, suppose that each image belongs to one of C classes {X1, X2, ..., XC}. The within-class scatter matrix is then given by

$$S_W = \sum_{i=1}^{C} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T \qquad (5)$$

and the between-class scatter matrix is given by

$$S_B = \sum_{i=1}^{C} N_i (\mu_i - \mu)(\mu_i - \mu)^T \qquad (6)$$

where µi is the mean image of class Xi and Ni is the number of samples in class Xi.

In LDA, the optimal projection Wopt is selected to maximize the ratio of the determinant of the between-class scatter matrix of the projected samples to the determinant of the within-class scatter matrix of the projected samples:

$$W_{opt} = \arg\max_{W} \frac{\left| W^T S_B W \right|}{\left| W^T S_W W \right|} \qquad (7)$$

$$W_{opt} = [W_1 \; W_2 \; \ldots \; W_m] \qquad (8)$$

where {Wi} is the set of generalized eigenvectors of S_B and S_W corresponding to the m largest generalized eigenvalues {λi}, such that

$$S_B W_i = \lambda_i S_W W_i, \quad i = 1, 2, \ldots, m \qquad (9)$$

Also, there are eigenvectors corresponding to at most the (C - 1) largest eigenvalues [9][10].
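To make Eqs. (5)-(9) concrete, here is a minimal NumPy sketch of the LDA projection computed on PCA-projected features. Solving the generalized eigenvalue problem through inv(S_W) S_B, and all function and variable names, are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def lda_projection(features, labels, num_components):
    """features: (m_pca, n_samples) PCA-projected vectors; labels: (n_samples,) class ids.
    Returns a projection matrix whose columns solve S_B w = lambda * S_W w (Eq. 9)."""
    classes = np.unique(labels)
    overall_mean = features.mean(axis=1, keepdims=True)
    dim = features.shape[0]
    S_W = np.zeros((dim, dim))
    S_B = np.zeros((dim, dim))
    for c in classes:
        class_samples = features[:, labels == c]
        class_mean = class_samples.mean(axis=1, keepdims=True)
        centered = class_samples - class_mean
        S_W += centered @ centered.T                           # Eq. (5)
        diff = class_mean - overall_mean
        S_B += class_samples.shape[1] * (diff @ diff.T)        # Eq. (6)
    # Generalized eigenproblem via inv(S_W) @ S_B; S_W is invertible here because
    # subspace LDA reduces the dimension with PCA first.
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
    order = np.argsort(eigvals.real)[::-1][:num_components]    # at most C-1 useful directions
    return eigvecs[:, order].real
```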
C. Classification Techniques

In order to recognize an image from the dataset, either the distance between images in the N-dimensional space or the similarity between them is used. For the best match, the distance measure should be minimum or the similarity measure should be maximum, corresponding to a high level of similarity [11]. Some techniques are discussed below.

L1 norm: Also known as the city block norm or the sum norm, it sums up the absolute differences between pixels. The L1 norm of an image X and an image Y is

$$L_1(X, Y) = \sum_{i=1}^{N} |X_i - Y_i| \qquad (10)$$

L2 norm: Also known as the Euclidean norm, or the Euclidean distance when its square root is calculated, it sums up the squared differences between pixels. The L2 norm of an image X and an image Y is

$$L_2(X, Y) = \sum_{i=1}^{N} (X_i - Y_i)^2 \qquad (11)$$

Covariance: Calculates the angle between two normalized vectors by taking the dot product of the normalized vectors. The covariance between images A and B is

$$\mathrm{cov}(A, B) = \frac{A}{\|A\|} \cdot \frac{B}{\|B\|} \qquad (12)$$

This similarity measure is therefore also known as the angle measure.
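The three measures of Eqs. (10)-(12), together with a simple nearest-neighbour matcher, could be written as follows. This is a sketch; the function names are not taken from the paper.

```python
import numpy as np

def l1_distance(x, y):
    """City block / sum norm, Eq. (10)."""
    return np.sum(np.abs(x - y))

def l2_distance(x, y):
    """Sum of squared differences, Eq. (11)."""
    return np.sum((x - y) ** 2)

def covariance_similarity(a, b):
    """Dot product of the normalized vectors (angle measure), Eq. (12)."""
    return np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b))

def nearest_match(test_vec, gallery, measure=l1_distance, largest=False):
    """Index of the best-matching gallery column: minimum distance or maximum similarity."""
    scores = np.array([measure(test_vec, gallery[:, i]) for i in range(gallery.shape[1])])
    return int(np.argmax(scores)) if largest else int(np.argmin(scores))
```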
PROPOSED FACE RECOGNITION ALGORITHM

PCA performs dimension reduction by projecting the data onto the eigenface space, while LDA performs class separation by classifying the data projected onto the eigenface space. The method consists of four stages:

- Preprocessing
- PCA
- LDA
- Classification
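Assuming the helper functions sketched in the previous section (pca_eigenfaces, project, lda_projection, l1_distance, nearest_match) and already preprocessed, vectorized face images, the four stages could be chained roughly as follows. This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def train_subspace_lda(train_vecs, labels, n_pca, n_lda):
    """train_vecs: (n_pixels, n_samples) preprocessed, vectorized training faces."""
    mean_face, W_pca = pca_eigenfaces(train_vecs, n_pca)       # stage 2: PCA
    pca_feats = project(train_vecs, mean_face, W_pca)
    W_lda = lda_projection(pca_feats, labels, n_lda)           # stage 3: LDA
    gallery = W_lda.T @ pca_feats                               # stored features for matching
    return mean_face, W_pca, W_lda, gallery

def recognize(test_vec, mean_face, W_pca, W_lda, gallery, labels):
    """Stage 4: classify a preprocessed (n_pixels, 1) test face with the L1 distance."""
    feat = W_lda.T @ (W_pca.T @ (test_vec - mean_face))
    idx = nearest_match(feat.ravel(), gallery, measure=l1_distance)
    return labels[idx]
```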
The flowchart of the subspace linear discriminant analysis method is shown below.

Fig. 1 Flowchart for face recognition by the subspace LDA method
EXPERIMENTAL RESULTS
In this work, the Olivetti and Oracle Research Laboratory (ORL) face database is used, which provides 400 face images of 40 individuals in JPG format. Out of the 10 face images of each individual, 8 are taken for the training dataset and 2 for the testing dataset. The images are of size 92x112 pixels with 256 gray levels. For some subjects, the images were taken at different times, with varying lighting, facial expressions (open/closed eyes, smiling/not smiling), and facial details (glasses, moustache, beard). All the images were taken against the same background, as shown in Fig. 4.1.
Fig. 4.1 Preview of individual face images from the ORL face database
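As an illustration of the 8/2 train-test split and the recognition-rate computation, the following is a small sketch; the data layout and function names are assumptions, not taken from the paper.

```python
import numpy as np

def split_orl(per_subject_paths, n_train=8):
    """per_subject_paths: {subject_id: [10 image paths]}.
    Returns (train_paths, train_labels, test_paths, test_labels) for an 8/2 split."""
    train_paths, train_labels, test_paths, test_labels = [], [], [], []
    for subject, paths in per_subject_paths.items():
        train_paths += paths[:n_train]
        train_labels += [subject] * n_train
        test_paths += paths[n_train:]
        test_labels += [subject] * (len(paths) - n_train)
    return train_paths, np.array(train_labels), test_paths, np.array(test_labels)

def recognition_rate(predicted, actual):
    """Percentage of test images assigned to the correct subject."""
    return 100.0 * np.mean(np.asarray(predicted) == np.asarray(actual))
```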
The original images are resized to 64x64 pixels, so that the input space has a dimension of 4096, and histogram equalization is then applied. After column-wise vectorization, the mean image is calculated, as shown in Fig. 2.
Fig. 2 Mean image of the face database
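A minimal sketch of this preprocessing stage, using OpenCV for resizing and histogram equalization, is given below; the function name and the use of OpenCV are assumptions for illustration.

```python
import cv2
import numpy as np

def preprocess(image_paths, size=(64, 64)):
    """Read face images, resize to 64x64, histogram-equalize, vectorize each image,
    and stack them as columns of a (4096, n_samples) data matrix."""
    columns = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, size)
        img = cv2.equalizeHist(img)                      # histogram equalization
        columns.append(img.astype(np.float64).ravel())   # one vector per image
    data = np.stack(columns, axis=1)                     # shape: (4096, n_samples)
    mean_image = data.mean(axis=1)                       # mean image of the dataset
    return data, mean_image
```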
Some of the highest eigenfaces of the training dataset are shown in Fig. 3.
Fig. 3 Eigenfaces of the training dataset
From these eigenfeatures, the eigenface-space projection matrix is derived, and LDA is applied over it. LDA further reduces the dimension of the projection matrix, increases the recognition rate, and reduces computation time. The classification-space projection matrix derived after LDA is classified with the various techniques. A result after classification is shown in Fig. 4.
Fig. 4 Face image under test and the matched face image recognized by subspace LDA
PCA-based face recognition is performed by extracting 70, 100, 250, and 300 eigenfaces, and the recognition rates are compared for the L1 norm, L2 norm, and covariance measures, as listed in Table I.
TABLE I RECOGNITION RATE OF PCA-BASED FACE RECOGNITION WITH VARYING PCA FEATURES

| S. No. | No. of PCA features | L1 Norm | L2 Norm | Covariance |
|---|---|---|---|---|
| 1 | 70 | 95.00% | 93.75% | 95.00% |
| 2 | 100 | 96.25% | 96.25% | 93.75% |
| 3 | 250 | 96.25% | 97.50% | 93.75% |
| 4 | 300 | 97.50% | 97.50% | 92.50% |
Subspace LDA based face recognition, in which PCA is followed by LDA, is performed by extracting 40, 70, and 100 PCA features with the corresponding 15, 25, and 39 LDA features, and classification is done by city block distance calculation. The resulting recognition rates are compared in Table II.
TABLE II RECOGNITION RATE OF SUBSPACE LDA-BASED FACE RECOGNITION WITH VARYING PCA AND LDA FEATURES

| S. No. | PCA features | 15 LDA features | 25 LDA features | 39 LDA features |
|---|---|---|---|---|
| 1 | 40 | 77.50% | 86.25% | 96.35% |
| 2 | 70 | 72.50% | 82.50% | 90.00% |
| 3 | 100 | 66.50% | 65.00% | 77.50% |
Fig. 5 Recognition rate (%) versus the number of LDA features (15, 25, 39) for 40, 70, and 100 PCA features
TABLE III COMPARISON OF RECOGNITION RATE OF THE PROPOSED ALGORITHM WITH OTHER METHODS

| Method | Recognition Rate |
|---|---|
| PCA with L2 norm [5] | 84.00% |
| DCT based face recognition [14] | 84.50% |
| K-means [15] | 86.75% |
| PCA & LDA with L2 norm [4][6] | 94.80% |
| Fuzzy ant with fuzzy C-means [15] | 94.82% |
| Proposed algorithm | 96.35% |
CONCLUSION
This work presents subspace LDA, in which PCA is followed by LDA and classification is performed by city block distance calculation. The ORL face database, which provides face images under varying facial conditions, is used for the experiments [16]. The experimental results give a recognition rate of 96.35% with 40 PCA features. Hence it can be concluded that even when the number of PCA features is small, LDA gives better results.
REFERENCES
[1] Zhao W., Chellappa R., Phillips P. J., and Rosenfeld A., "Face Recognition: A Literature Survey", ACM Computing Surveys, vol. 35, no. 4, pp. 399-458, 2003.
[2] Patil A. M., Kolhe S. R., and Patil P. M., "2D Face Recognition Techniques: A Survey", International Journal of Machine Intelligence, ISSN: 0975-2927, vol. 2, issue 1, pp. 74-8, 2010.
[3] Belhumeur P. N., Hespanha J. P., and Kriegman D. J., "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection", IEEE Trans. Pattern Anal. Machine Intell., vol. 19, pp. 711-720, 1997.
[4] Yuille A. L., Cohen D. S., and Hallinan P. W., "Feature Extraction from Faces Using Deformable Templates", Proc. of CVPR, 1989.
[5] Turk M. and Pentland A., "Eigenfaces for Face Recognition", Journal of Cognitive Neuroscience, vol. 3, no. 1, 1991.
[6] Zhao W., Chellappa R., and Krishnaswamy A., "Discriminant Analysis of Principal Components for Face Recognition", IEEE Trans. Pattern Anal. Machine Intell., vol. 8, 1997.
[7] Turk M. and Pentland A., "Eigenfaces for Recognition", J. Cogn. Neurosci., vol. 3, pp. 72-86, 1991.
[8] Lee S. J., Yung S. B., Kwon J. W., and Hong S. H., "Face Detection and Recognition Using PCA", IEEE TENCON, pp. 84-87, 1999.
[9] Swets D. L. and Weng J. J., "Using Discriminant Eigenfeatures for Image Retrieval", IEEE Trans. Pattern Anal. Machine Intell., vol. 18, pp. 831-836, 1996.
[10] Etemad K. and Chellappa R., "Face Recognition Using Discriminant Eigenvectors", IEEE Transaction for Pattern Recognition, pp. 2148-2151, 1996.
[11] Phillips P. J. and Moon H., "Comparison of Projection-Based Face Recognition Algorithms", IEEE Transaction for Pattern Recognition, pp. 4057-4062, 1997.
[12] Sirovich L. and Kirby M., "Low-Dimensional Procedure for the Characterization of Human Faces", Journal of the Optical Society of America, vol. 4, no. 3, pp. 519-524, 1987.
[13] Pang S., Ozawa S., and Kasabov N., "Incremental Linear Discriminant Analysis for Classification of Data Streams", IEEE Trans. on Systems, Man and Cybernetics, vol. 35, no. 5, pp. 905-914, 2005.
[14] Zhao S. and Grigat R. R., "Multi Block Fusion Scheme for Face Recognition", Int. Conf. on Pattern Recognition (ICPR), vol. 1, pp. 309-312, 2004.
[15] Makdee S., Kimpan C., and Pansang S., "Invariant Range Image Multi Pose Face Recognition Using Fuzzy Ant Algorithm and Membership Matching Score", Proceedings of 2007 IEEE International Symposium on Signal Processing and Information Technology, pp. 252-256, 2007.
[16] Olivetti & Oracle Research Laboratory, The Olivetti & Oracle Research Laboratory Database of Faces, http://www.camorl.co.uk/facedatabase.html