- Open Access
- Authors : R. Manimala, J. Poovitha, P. Sharmila Devi, Mr. P. Omprakash
- Paper ID : IJERTCONV7IS06061
- Volume & Issue : ETEDM
- Published (First Online): 23-05-2019
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Face Authentication for High-Level Security Applications using LBP
R. Manimala [1], J. Poovitha [2], P. Sharmila Devi [3], Mr. P. Omprakash
[1][2][3] Assistant Professor, Department of Electronics and Communication Engineering, Velammal College of Engineering and Technology, Madurai.
Abstract- Face recognition from real data, captured images, sensor images, and database images is a challenging problem because of the wide variation in face appearance, illumination effects, and the complexity of the image background. Face recognition is one of the most effective and relevant applications of image processing and biometric systems. In this paper we review the face recognition methods and algorithms proposed by many researchers using artificial neural networks (ANNs), which have been widely used in image processing and pattern recognition. We also discuss how an ANN is used in a face recognition system and how it compares with other methods. Many ANN-based methods have been proposed, and this research therefore includes a general review of face detection studies and systems based on different ANN approaches and algorithms. The strengths and limitations of these studies and systems are included, and the performance of the different ANN approaches and algorithms is analysed. In many crimes the criminals take advantage of weaknesses in commercial or academic access control systems: such systems do not grant access by who we are, but by what we have, such as ID cards, keys, passwords, and PIN numbers. These means do not really define us; they merely authenticate us. If someone duplicates or acquires these identity means without the owner's permission, he or she will be able to access our data or our personal property at any time. Recently, technology has become available that allows verification of true individual identity.
Keywords: Face Recognition, Biometric, Image Processing, Pattern Recognition, Artificial Neural Network.
I. INTRODUCTION
A complete review of all face recognition systems is not a simple task; hence, only a cluster of the most useful systems is discussed in this paper. The motivation comes from the need for automatic recognition and surveillance systems, the interest in the human visual system's handling of faces, and the design of human-computer interfaces. Face recognition can be used for both verification and identification: a face recognition system automatically identifies the faces present in images and videos. It is classified into two categories:
a. Face verification or face authentication
b. Face identification or face recognition
In face verification or authentication there is a one-to-one match that compares a query face image against a template face image whose identity is being claimed. In face identification or recognition there is a one-to-many match that compares a query face image against all the template face images in the database to determine the identity of the query face image. Another face recognition scenario involves a watch-list check, where a query face is matched against a list of suspects. The performance of face recognition systems has improved steadily.
THE PRINCIPLE OF FACE RECOGNITION:
The face area is first divided into small regions from which Local Binary Pattern (LBP) histograms are extracted and concatenated into a single, spatially enhanced feature histogram that efficiently represents the face image. Extensive experimental research demonstrates the strengths of this approach, namely its simplicity and its very fast feature extraction.
The way a face is represented determines the subsequent detection and identification algorithms. For entry-level recognition (that is, to determine whether or not a given image contains a face), the image is transformed (scaled and rotated) until it has the same position as the images in the database. In the feature extraction phase, the most useful and unique features (properties) of the face image are extracted. With these features, the face image is compared with the images in the database; this is done in the classification phase [4, 5]. The output of the classification stage is the identity of the database face image with the highest matching score, i.e., the smallest difference from the input face image. A threshold value can also be used to decide whether the differences are small enough, since a given face may not be in the database at all.
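As a minimal illustration of this matching-with-threshold step (not the authors' code), the sketch below assumes the feature vectors have already been extracted; the helper name, Euclidean distance, and threshold value are assumptions chosen for illustration.

```python
import numpy as np

def classify_face(query, gallery, labels, threshold=0.5):
    """Return the identity of the closest gallery feature vector,
    or None if even the best match is too far away (face not enrolled).

    query     : 1-D feature vector of the input face (assumed pre-extracted)
    gallery   : 2-D array, one feature vector per enrolled image
    labels    : identity label for each gallery row
    threshold : maximum allowed distance for an accepted match (assumed value)
    """
    distances = np.linalg.norm(gallery - query, axis=1)  # smaller = better match
    best = int(np.argmin(distances))                     # highest matching score
    return labels[best] if distances[best] < threshold else None
```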
STRUCTURE OF FACE RECOGNITION SYSTEM
Every biometric system has four main stages, which are shown in Figure 1: face detection, pre-processing, feature extraction, and face recognition.
Figure 1: Architecture of the Face Recognition System
As Figure 1 shows, the first task of the face recognition system is to capture an image from video, a camera, or the database; this image is then passed to the subsequent steps of the face recognition system, which are discussed in the following sections.
EXISTING SYSTEM: LOCAL BINARY PATTERN (LBP)
The LBP operator is one of the best-performing texture descriptors and has been widely used in various applications. It has proven to be highly discriminative, and its key advantages, namely its invariance to monotonic gray-level changes and its computational efficiency, make it suitable for demanding image analysis tasks. The LBP operator was originally designed for texture description. It assigns a label to every pixel of an image by thresholding the 3×3 neighborhood of each pixel with the center pixel value and interpreting the result as a binary number. The histogram of these labels can then be used as a texture descriptor:
LBP(x_c, y_c) = \sum_{n=0}^{7} s(i_n - i_c) \, 2^n

where i_c is the value of the center pixel (x_c, y_c), i_n are the values of the eight surrounding pixels, and s(x) = 1 if x >= 0 and 0 otherwise. The operator is used to determine the local features of the face using the basic LBP operator: the values in the original 3×3 neighborhood are compared with the value of the center pixel, a binary code is produced, and the LBP code is obtained by converting this binary code to decimal.
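The following sketch is a plain NumPy implementation of the basic 3×3 operator described above (an illustration under the formula's definitions, not the authors' code): it computes the LBP code image and the histogram used as the texture descriptor.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: threshold the 8 neighbours of each pixel against the
    centre value and pack the comparison results into an 8-bit code."""
    gray = np.asarray(gray, dtype=np.int32)
    h, w = gray.shape
    centre = gray[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    # Offsets of the 8 neighbours; neighbour n contributes 2**n, as in the formula.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for n, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neighbour >= centre).astype(np.int32) << n  # s(i_n - i_c) * 2^n
    return codes.astype(np.uint8)

def lbp_histogram(gray, bins=256):
    """Histogram of LBP codes, used as the texture descriptor."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)  # normalised histogram
```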
SCALE INVARIANT FEATURE TRANSFORM (SIFT)
The SIFT descriptor is invariant to scale, rotation, affine transformation, noise, and occlusions, and is highly distinctive. SIFT feature detection and representation consists of four major steps, as follows:
- finding scale-space extrema;
- keypoint localization and filtering;
- orientation assignment;
- keypoint descriptor computation.
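As an illustration only (not part of the paper's pipeline), the sketch below uses OpenCV's built-in SIFT implementation, which performs the four steps listed above internally; it assumes OpenCV 4.4 or later, where cv2.SIFT_create is available.

```python
import cv2

def sift_features(image_path):
    """Detect SIFT keypoints and compute their 128-D descriptors."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    sift = cv2.SIFT_create()  # scale-space extrema, keypoint filtering,
    # orientation assignment and descriptor computation are handled internally
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors
```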
An ANN is composed of a network of artificial neurons, also known as "nodes". These nodes are connected to each other, and each connection is assigned a strength value ranging from inhibitory (-1.0) to excitatory (+1.0); a high absolute value indicates a strong connection. Each face image in the test set is classified by comparing it against the face images in the training set. The comparison is performed using the local features obtained in the previous step of the algorithm.
BACK PROPAGATION NETWORK (BPN)
The input layer consists of six neurons; the inputs to this network are the feature vectors derived from the feature extraction method described in the previous section. The network is trained using the right mouth end-point samples. Back-propagation training takes place in three stages: feed-forward of the input training pattern, back-propagation of the associated error, and weight adjustment. During the feed-forward stage, each input neuron receives an input value and broadcasts it to each hidden neuron, which in turn computes its activation and passes it on to each output unit, which again computes its activation to obtain the net output. During training, the net output is compared with the target value and the corresponding error is calculated.
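The sketch below is a minimal NumPy illustration of these three stages for a single hidden-layer network. The six-input layout is taken from the text; the hidden-layer size, learning rate, and sigmoid activation are assumptions, not values specified by the authors.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, w_hidden, w_out, lr=0.1):
    """One back-propagation step: feed-forward, error back-propagation,
    weight adjustment. x is a 6-element feature vector (per the text)."""
    # 1. Feed-forward of the input training pattern
    hidden = sigmoid(w_hidden @ x)       # hidden activations
    output = sigmoid(w_out @ hidden)     # net output

    # 2. Back-propagation of the associated error
    error = target - output              # compare net output with target value
    delta_out = error * output * (1 - output)
    delta_hidden = (w_out.T @ delta_out) * hidden * (1 - hidden)

    # 3. Weight adjustment
    w_out += lr * np.outer(delta_out, hidden)
    w_hidden += lr * np.outer(delta_hidden, x)
    return float(np.sum(error ** 2))     # squared error for monitoring

# Example with assumed sizes: 6 inputs, 4 hidden units, 1 output
rng = np.random.default_rng(0)
w_hidden = rng.normal(scale=0.5, size=(4, 6))
w_out = rng.normal(scale=0.5, size=(1, 4))
loss = train_step(rng.random(6), np.array([1.0]), w_hidden, w_out)
```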
PRE-PROCESSING
This step acts as the pre-processing stage for face recognition. Unwanted noise, blur, varying lighting conditions, and shadowing effects are removed using pre-processing techniques; once a clean, smooth face image is obtained, it is used for the feature extraction process. A Gabor-wavelet representation of face images is an effective approach for both facial action recognition and face identification (a sketch of such Gabor filtering is given after the list below). Dimensionality reduction and linear discriminant analysis are performed on the down-sampled images, and Gabor wavelet faces can increase the discriminative ability. The nearest feature space is extended to various similarity measures. Back-propagation training takes place in three stages:
- feed-forward of the input training pattern,
- back-propagation of the associated error, and
- weight adjustment.
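The following sketch illustrates the Gabor-wavelet pre-processing mentioned above using OpenCV's cv2.getGaborKernel. The kernel size, scales, and orientations are assumptions chosen for illustration, not values from the paper.

```python
import cv2
import numpy as np

def gabor_responses(gray, scales=(4.0, 8.0), n_orientations=4):
    """Filter a grayscale face image with a small bank of Gabor kernels
    and return the stacked magnitude responses (one channel per kernel)."""
    responses = []
    for sigma in scales:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=sigma,
                                        theta=theta, lambd=2 * sigma,
                                        gamma=0.5, psi=0)
            filtered = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
            responses.append(np.abs(filtered))
    return np.stack(responses, axis=-1)  # shape: (H, W, scales * orientations)
```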
FEATURE EXTRACTION
In this step the features of the face are extracted using a feature extraction algorithm. Extraction is performed for information packing, dimension reduction, salience extraction, and noise cleaning. After this step, a face patch is usually transformed into a vector of fixed dimension or into a set of fiducial points and their corresponding locations. Using the pixels corresponding to the located landmark points, the following distances are calculated:
i. distance from the left eyeball to the right eyeball;
ii. distance from the left mouth end point to the right mouth end point;
iii. distance from the left eyeball to the left mouth end point;
iv. distance from the right eyeball to the right mouth end point;
v. distance from the left eyeball to the right mouth end point;
vi. distance from the right eyeball to the left mouth end point.
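A minimal sketch of this distance-based feature vector, assuming the four landmark coordinates (eyeballs and mouth end points) have already been located by an earlier detection step; the helper name and Euclidean distance are assumptions.

```python
import numpy as np

def landmark_distance_features(left_eye, right_eye, left_mouth, right_mouth):
    """Build the six-element feature vector of pairwise landmark distances
    listed above. Each argument is an (x, y) coordinate pair."""
    d = lambda a, b: float(np.hypot(a[0] - b[0], a[1] - b[1]))
    return np.array([
        d(left_eye, right_eye),      # i.   left eyeball   -> right eyeball
        d(left_mouth, right_mouth),  # ii.  left mouth end -> right mouth end
        d(left_eye, left_mouth),     # iii. left eyeball   -> left mouth end
        d(right_eye, right_mouth),   # iv.  right eyeball  -> right mouth end
        d(left_eye, right_mouth),    # v.   left eyeball   -> right mouth end
        d(right_eye, left_mouth),    # vi.  right eyeball  -> left mouth end
    ])
```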
FACE RECOGNITION
Once feature extraction is done, this step analyzes the representation of each face; this last step recognizes the identities of the faces to achieve automatic face recognition. For recognition, a face database must first be built: for each person, several images are taken and their features are extracted and stored in the database. When an input face image arrives for recognition, it first undergoes face detection, pre-processing, and feature extraction, after which its features are compared to each face class stored in the database. There are two general applications of face recognition: identification and verification. In face identification, given a face image, the system determines a person's identity, possibly even without his knowledge, while in face verification, given a face image and a claimed identity, the system must decide whether the claim is true or false. Face recognition approaches can broadly be classified into two classes: local feature-based methods and global feature-based methods. Human faces can be characterized on the basis of both local and global features; global features are easier to capture but are generally less discriminative than localized features, while local features can be highly discriminative but may suffer from local changes in facial appearance.
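To make the comparison against the stored face classes concrete, here is a hedged sketch of identification over a gallery of per-person feature histograms using the chi-square distance commonly paired with LBP features; the distance choice and data layout are assumptions, not the authors' specification.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalised histograms."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def identify(query_hist, database):
    """database maps person name -> list of stored feature histograms.
    Returns the name of the closest face class and its distance."""
    best_name, best_dist = None, np.inf
    for name, histograms in database.items():
        for stored in histograms:
            dist = chi_square(query_hist, stored)
            if dist < best_dist:
                best_name, best_dist = name, dist
    return best_name, best_dist
```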
PROPOSED SYSTEM:
The Proposed Discriminative Robust Local Binary Pattern:
For human detection, the contour of the human, which typically lies in the high-contrast regions between the human and the background, contains discriminatory information. LBP is illumination and contrast invariant. The histogram of LBP codes considers only the frequencies of the codes, i.e. the weight of each code in the block is 1. This form of histogram is unable to differentiate between similar regions of different contrast; therefore, a weak-contrast local region and a strong-contrast one have similar feature representations. To mitigate this problem, a weighting scheme is proposed. Given an image window, following [6], the square root of the pixel values is taken. Then the first-order gradients are computed in the x- and y-directions. The gradient magnitude at each pixel is then computed and used to weight its LBP code: the stronger the contrast at a pixel, the larger the weight assigned to the LBP code at that pixel. Consider an LBP histogram for an M × N image block. The value of the i-th bin of the weighted LBP histogram is as follows:
h_{lbp}(i) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \omega_{x,y} \, \delta(LBP_{x,y}, i)   (3)

where \delta(m, n) = 1 if m = n and 0 otherwise, and \omega_{x,y} is the gradient magnitude at the pixel location (x, y). It is not difficult to see that the NRLBP histogram can be computed from (3) as follows:

h_{nrlbp}(i) = h_{lbp}(i) + h_{lbp}(2^B - 1 - i),   0 \le i \le 2^{B-1} - 1   (4)

where h_{nrlbp}(i) is the i-th bin value of NRLBP. To resolve the issue of NRLBP whereby, in the same block, all LBP codes and their complements are mapped to the same bin, the following is proposed. Consider the absolute difference between the bins representing an LBP code and its complement to form the Difference of LBP (DLBP) block histogram as follows:

h_{dlbp}(i) = | h_{lbp}(i) - h_{lbp}(2^B - 1 - i) |,   0 \le i \le 2^{B-1} - 1   (5)

where h_{dlbp}(i) is the i-th bin value of DLBP. For blocks that contain structures having both LBP codes and their complements, DLBP assigns small or almost zero values to the bins that those codes are mapped to; by doing so, it differentiates these structures from those having no complement codes. The two histogram features, NRLBP and DLBP, are concatenated to form the Discriminative Robust LBP (DRLBP). The value of the i-th bin of the DRLBP histogram is as follows:

h_{drlbp}(i) = h_{nrlbp}(i) for 0 \le i \le 2^{B-1} - 1, and h_{drlbp}(i) = h_{dlbp}(i - 2^{B-1}) for 2^{B-1} \le i < 2^B   (6)

For B = 8, the number of bins is 256. Using the uniform pattern representation, the number of bins is reduced to 60. Fig. 3 illustrates how DRLBP produces unique features for the structures shown earlier in Fig. 2. Hence, DRLBP represents the human contour more discriminatively than LBP and NRLBP.
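A hedged NumPy sketch of equations (3)-(6) for a single image block, reusing the lbp_image helper sketched earlier (B = 8, so codes range over 0-255). The square-root step and gradient weighting follow the description above, but details such as the gradient operator are assumptions, not the authors' exact implementation.

```python
import numpy as np
# reuses lbp_image(gray) from the LBP sketch above

def drlbp_histogram(block, B=8):
    """Gradient-weighted LBP histogram (3), then NRLBP (4), DLBP (5),
    and their concatenation DRLBP (6) for one image block."""
    img = np.sqrt(block.astype(np.float64))         # square root of pixel values
    gy, gx = np.gradient(img)                       # first-order gradients
    weights = np.hypot(gx, gy)[1:-1, 1:-1]          # gradient magnitude per pixel
    codes = lbp_image(block)                        # LBP code per pixel

    n_codes = 2 ** B
    h_lbp = np.zeros(n_codes)
    for code, w in zip(codes.ravel(), weights.ravel()):
        h_lbp[code] += w                            # eq. (3): weighted histogram

    half = n_codes // 2
    idx = np.arange(half)
    h_nrlbp = h_lbp[idx] + h_lbp[n_codes - 1 - idx]         # eq. (4)
    h_dlbp = np.abs(h_lbp[idx] - h_lbp[n_codes - 1 - idx])  # eq. (5)
    return np.concatenate([h_nrlbp, h_dlbp])                # eq. (6): DRLBP
```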
EXPERIMENTS AND RESULT ANALYSIS:
We perform experiments on two challenging data sets: INRIA [6] and the Caltech Pedestrian Data Set [20]. Results are reported for both data sets using the per-image methodology suggested in [20], as the authors have shown it to be a better evaluation method. To the best of our knowledge, the per-image performance of dense LBP and NRLBP representations on INRIA and Caltech has not been published to date; hence, experiments are performed for these features on the INRIA and Caltech data sets. The feature parameters for LBP, NRLBP, and DRLBP are set as follows. For both data sets, a block size of 16 × 16 pixels is used, and a neighbourhood of B = 8 pixels sampled on a circle is considered. Square root of L1 normalization is used, as our preliminary experiments show that this gives the best results. The overlapping block features of an image window are concatenated to form the overall window feature for training the linear SVM classifier.
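To illustrate the normalization and classifier stage, the following sketch applies square-root-of-L1 normalization to each block histogram, concatenates the overlapping blocks of a window, and trains a linear SVM with scikit-learn. The use of LinearSVC, its C value, and the data layout are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

def sqrt_l1_normalise(hist, eps=1e-10):
    """Square root of L1 normalisation of a block histogram."""
    return np.sqrt(hist / (np.abs(hist).sum() + eps))

def window_feature(block_histograms):
    """Concatenate the normalised overlapping block histograms of one window."""
    return np.concatenate([sqrt_l1_normalise(h) for h in block_histograms])

def train_detector(features, labels):
    """features: list of window feature vectors; labels: 1 = human, 0 = background."""
    clf = LinearSVC(C=1.0)                 # linear SVM classifier
    clf.fit(np.vstack(features), np.asarray(labels))
    return clf
```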
CONCLUSION:
The comparative study shows that LDA is an efficient facial recognition method for images of the Yale database: LDA achieved a 74.47% recognition rate with a training set of 68 images, and out of 165 total images, 123 were recognized with higher accuracy. In future work, the recognition rate for full frontal faces with facial expression can be improved using PCA and LDA, and it can be further improved with a hybrid pre-processing technique for PCA and LDA. Neither feature extraction technique gives a satisfactory recognition rate under illumination variation, so this remains to be improved. Every researcher has their own approach to recognizing faces from a database or from video; many researchers have tried to solve the problems associated with earlier methods, but the methods discussed here still have both advantages and limitations. For human detection, ignoring contrast information is not desirable, as the human contour contains the most relevant information; when contrast is ignored, the contour is not effectively discriminated by the features. The new feature, DRLBP, considers both the gradient-weighted sum and the absolute difference of the bins of the LBP codes with their respective complement codes. In this way, DRLBP alleviates the problems of LBP and NRLBP for human detection.
ACKNOWLEDGEMENT:
Different network architectures and parameter values of the BPNN will be used to determine the configuration that gives the best face detection performance. We will also try to use a genetic algorithm (GA) as an optimization method to obtain the ANN parameter values that lead to optimal results, or to solve the same problem using a neuro-fuzzy system. A neuro-fuzzy system incorporates the human-like reasoning style of fuzzy systems through the use of fuzzy sets and a linguistic model consisting of a set of IF-THEN fuzzy rules; its strength lies in balancing two contradictory requirements of fuzzy modeling, interpretability versus accuracy. A new neural network model combining BPN and RBF networks was developed, trained, and tested. From these results, it can be concluded that the recognition accuracy achieved by this method is very high. The method can be suitably extended to moving images and to images with varying backgrounds.
REFERENCES:
[1] T. Ahonen, A. Hadid, M. Pietikäinen, and T. Mäenpää, "Face recognition based on the appearance of local regions," in Proc. 17th International Conference on Pattern Recognition, 2004.
[2] P. S. Penev and J. J. Atick, "Local feature analysis: A general statistical theory for object representation," Network: Computation in Neural Systems, vol. 7, no. 3, pp. 477-500, Aug. 1996.
[3] B. Heisele, P. Ho, J. Wu, and T. Poggio, "Face recognition: component-based versus global approaches," Computer Vision and Image Understanding, vol. 91, no. 1-2, pp. 6-21, 2003.
[4] R. Gottumukkal and V. K. Asari, "An improved face recognition technique based on modular PCA approach," Pattern Recognition Letters, vol. 25, pp. 429-436, Mar. 2004.
[5] Md. Abdur Rahim, Md. Najmul Hossain, Tanzillah Wahid, and Md. Shaflul Azam, "Face recognition using Local Binary Patterns (LBP)," Global Journal of Computer Science and Technology: Graphics and Vision, vol. 13, no. 4, version 1.0, 2013.
[6] T. Chen, Y. Wotao, S. Z. Xiang, D. Comaniciu, and T. S. Huang, "Total variation models for variable lighting face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 9, pp. 1519-1524, 2006.
[7] L. Zhe and L. S. Davis, "Shape-based human detection and segmentation via hierarchical part-template matching," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 4, pp. 604-618, Apr. 2010.
[8] O. Tuzel, F. M. Porikli, and P. Meer, "Pedestrian detection via classification on Riemannian manifolds," IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 10, pp. 1713-1727, Oct. 2008.
[9] M. Yadong, Y. Shuicheng, L. Yi, T. Huang, and Z. Bingfeng, "Discriminative local binary patterns for human detection in personal album," in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., Jun. 2008, pp. 1-8.
[10] T. X. Wang, X. Han, and S. Yan, "An HOG-LBP human detector with partial occlusion handling," in Proc. IEEE Int. Conf. Comput. Vis., 2009, pp. 32-39.
[11] A. Satpathy, X. D. Jiang, and H.-L. Eng, "Extended histogram of gradients feature for human detection," in Proc. IEEE Int. Conf. Image Process., Sept. 2010, pp. 3473-3476.
[12] A. Satpathy, X. D. Jiang, and H.-L. Eng, "Extended histogram of gradients with asymmetric principal component and discriminant analyses for human detection," in Proc. IEEE Canad. Conf. Comput. Robot Vis., May 2011, pp. 64-71.
[13] S. Tang and S. Goto, "Histogram of template for human detection," in Proc. IEEE Int. Conf. Acoust. Speech Signal Process., Mar. 2010, pp. 2186-2189.
[14] G. Zhenhua, Z. Lei, and D. Zhang, "A completed modeling of local binary pattern operator for texture classification," IEEE Trans. Image Process., vol. 19, no. 6, pp. 1657-1663, Jun. 2010.
[15] M.-H. Yang, D. Kriegman, and N. Ahuja, "Detecting faces in images: A survey," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 34-58, Jan. 2002.
[16] L.-L. Huang, A. Shimizu, Y. Hagihara, and H. Kobatake, "Gradient feature extraction for classification-based face detection," Pattern Recognition, vol. 36, pp. 2501-2511, 2003.
[17] H. A. Rowley, S. Baluja, and T. Kanade, "Neural network-based face detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-28, 1998.
[18] S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, "Face recognition: A convolutional neural networks approach," IEEE Trans. Neural Networks, Special Issue on Neural Networks and Pattern Recognition, vol. 8, no. 1, pp. 98-113, 1997.
[19] J. Haddadnia and K. Faez, "Neural network human face recognition based on moment invariants," in Proc. IEEE International Conference on Image Processing, Thessaloniki, Greece, Oct. 7-10, 2001, pp. 1018-1021.
[20] J. Haddadnia and K. Faez, "Human face recognition using radial basis function neural network," in Proc. Third Int. Conf. on Human and Computer, Aizu, Japan, Sept. 6-9, 2000, pp. 137-142.