Feature Fusion method based on Fisher Discriminant Analysis for Face and Ear for Multimodal Recognition

DOI: 10.17577/IJERTV1IS5423


Arti A. Tekade, S. P. Narote

Sinhgad College of Engineering, Vadgaon, Pune

Abstract: Multimodal biometrics has drawn a lot of attention in recent years because it provides a more reliable scheme for person verification. Multimodal biometrics involves the fusion of information from different modalities; since a multibiometric system combines more than one biometric trait, the fusion of these traits plays a central role in its design. Fusion can be performed at the data level, feature level, match-score level or decision level. Multimodal biometric systems elegantly address several of the problems present in unimodal systems: by combining multiple sources of information they improve matching performance, increase population coverage, deter spoofing and facilitate indexing, and incorporating user-specific parameters can further improve their performance. In this paper, feature-level fusion of face and ear is performed. Face and ear images are taken as input, features are extracted from both images using Fisher linear discriminant analysis, and the extracted features are combined using the average rule and PCA and stored in the database. On the user side, face and ear features are extracted and compared with the database, with the Euclidean distance used for the comparison.

Keywords: Multimodal Biometrics, PCA, FLD

  1. Introduction

    Personal identity refers to a set of attributes (e.g., name, social security number) that are associated with a person. Identity management is the process of creating, maintaining and destroying identities of individuals in a population. A reliable identity management system is urgently needed in order to combat the epidemic growth in identity theft and to meet the increased security requirements of a variety of applications, ranging from international border crossings to accessing personal information. The three basic ways to establish the identity of a person are "something you know" (e.g., a password or personal identification number), "something you carry" (e.g., a physical key or ID card) and "something you are" (e.g., face or voice). Surrogate representations of identity such as passwords and ID cards can easily be misplaced, shared or stolen, and passwords can also be guessed using social engineering and dictionary attacks. Biometric systems automatically determine or verify a person's identity based on anatomical and behavioral characteristics such as fingerprint, face, iris, voice and gait. Since biometric systems require the user to be present at the time of authentication, they can also deter users from making false repudiation claims. Moreover, only biometrics can provide negative identification functionality, where the goal is to establish whether a certain individual is indeed enrolled in the system even though the individual might deny it. For these reasons, biometric systems are being increasingly adopted in a number of government and civilian applications, either as a replacement for or as a complement to existing knowledge-based and token-based mechanisms. Humans have used body characteristics such as the face, ear, voice and gait for thousands of years to recognize each other. In this paper we combine the face and the ear for person identification.

  2. Related Work

Yong-Mei Zhang, Li Mai and Bo Li proposed a new approach to decision fusion; their fusion of face and ear recognition is a meaningful attempt to explore a novel method of biometric recognition [1]. Md. Maruf Monwar and Marina Gavrilova developed a multimodal biometric system, FES, based on Principal Component Analysis (PCA) and Fisher's Linear Discriminant (FLD) that uses face, ear and signature for identification and rank-level fusion to consolidate the results obtained from these monomodal matchers; the ranks of the individual matchers are combined using the Borda count method and the logistic regression method [2]. Theoharis Theoharis, Georgios Passalis et al. discussed the two types of errors associated with biometrics and analysed the accuracy of multimodal biometric systems using two commonly used fusion rules [3]. S. K. Dahel and Q. Xiao noted that multimodal biometrics involves the fusion of information from different modalities [4]. Raghavendra R. and Hemantha Kumar G. present a novel fusion of biometric-sensor-generated evidence from face and palm print images using wavelet decomposition for personal identity verification; the SIFT operator is then used for feature extraction, and recognition is performed by adjustable structural graph matching between a pair of fused images, searching for corresponding points with a recursive descent tree traversal approach [5]. Nedeljko Cvejic, David Bull and Nishan Canagarajah present a novel multimodal image fusion algorithm in the independent component analysis (ICA) domain [6].

Karthik Nandakumar proposes a fusion methodology based on the Neyman-Pearson theorem for combining the match scores provided by multiple biometric matchers; the likelihood ratio (LR) test used in the Neyman-Pearson theorem directly maximizes the genuine accept rate (GAR) at any desired false accept rate [7]. Fernando Alonso-Fernandez focused on the relationship between human and automatic quality assessment and analysed the role of quality measures within a biometric system [8]. Julian Fierrez-Aguilar focused on the combination of multiple biometric traits for automatic person authentication, where the fusion scheme is first trained on a pool of users and then adjusted using input information such as user-dependent scores or test-dependent quality measures [9]. J. Kittler and F. Roli compare different existing pattern recognition algorithms on the specific problem studied and select the best of them [10]. A. K. Jain, R. P. W. Duin and J. Mao note that, in general, classifier outputs may differ because of different feature sets, different training sets, different classification methods, different parameters of the classification method, or different training sessions [11]. D. Wolpert's stacked generalisation shows that the outputs of different classifiers can be grouped into three levels: 1) abstract, 2) rank and 3) measurement [12]. A. K. Jain, K. Nandakumar and A. Ross present aggregation procedures that can be classified according to trainability and adaptivity; trained combiners may lead to better performance at the cost of additional training data and additional training [13]. M. I. Jordan and R. A. Jacobs show that schemes for multiple classifier combination can also be grouped according to their architecture into three main categories: 1) hierarchical, 2) cascading and 3) parallel. In a hierarchical classifier combination scheme, the different classifiers are combined into a tree-like structure [14].

  3. Proposed Work

    3.1. Image Acquisition

      Face and ear images are acquired separately and act as the input to the system. Both images have the same size.

    3.2. Feature Extraction

      In feature extraction, the face and ear images are treated separately and features are extracted from each image using Fisher linear discriminant analysis.

      Figure 1. Block diagram of the proposed method: in the registration phase, features extracted from the face and ear images are combined by the fusion algorithm and stored in the database; in the authentication phase, features extracted from the user's face and ear are fused, compared with the stored templates in the matching module, and the decision module outputs accept or reject.

    3.3. Image Fusion Algorithm

Two fusion schemes were implemented for combining the extracted features:

  1. Average scheme: In this scheme the extracted face and ear feature vectors are fused by taking the element-wise average of the two vectors.

  2. PCA scheme: The data obtained from each modality is used to compute a feature vector. As the features extracted from one biometric trait are independent of those extracted from the other, it is reasonable to concatenate the two vectors into a single new vector. The new feature vector has a higher dimensionality and represents a person's identity in a different hyperspace, so PCA is then applied to the concatenated vector to reduce that dimensionality. A sketch of both fusion schemes is given below.
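As an illustration of the two schemes, the following minimal sketch (our own naming and a plain NumPy implementation, not the authors' code) shows how an average-rule template and a concatenation-plus-PCA template could be formed from already extracted face and ear feature vectors; the number of principal components kept is an assumed parameter.

```python
import numpy as np

def fuse_average(face_feat, ear_feat):
    """Average rule: element-wise mean of two equal-length feature vectors."""
    face_feat = np.asarray(face_feat, dtype=float)
    ear_feat = np.asarray(ear_feat, dtype=float)
    assert face_feat.shape == ear_feat.shape, "average rule needs equal-length vectors"
    return (face_feat + ear_feat) / 2.0

def fuse_concat_pca(face_feats, ear_feats, n_components):
    """PCA rule: concatenate the face and ear feature vectors of every training
    sample into one long vector, then project onto the leading principal
    components to reduce the dimensionality of the joint vector.

    face_feats, ear_feats: arrays of shape (n_samples, d_face) and (n_samples, d_ear).
    Returns the fused training templates and the learned (mean, components) model.
    """
    joint = np.hstack([face_feats, ear_feats])      # (n_samples, d_face + d_ear)
    mean = joint.mean(axis=0)
    centered = joint - mean
    # Principal directions via SVD of the centred data (numerically stable PCA).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    fused = centered @ components.T                 # reduced fused templates
    return fused, (mean, components)
```

A probe would be fused the same way at authentication time: concatenate its face and ear features, subtract the stored mean and project with the stored components before matching.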

3.4. Matching Module

Euclidean distance is the most commonly used distance function, or measure of dissimilarity, between two feature vectors, and it is used here to measure the difference between the stored and the probe features. The Euclidean distance in an n-dimensional feature space between two points $a = (a_1, \ldots, a_n)$ and $b = (b_1, \ldots, b_n)$ is defined by

$$d(a, b) = \sqrt{\sum_{i=1}^{n} (a_i - b_i)^2}$$

where n is the number of features.
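A minimal sketch of this distance computation, together with a simple nearest-template lookup (the function names are ours), is given below.

```python
import numpy as np

def euclidean_distance(a, b):
    """d(a, b) = sqrt(sum_i (a_i - b_i)^2) for two n-dimensional feature vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

def nearest_template(probe, templates):
    """Return the identity whose stored template is closest to the probe.
    templates: dict mapping identity label -> fused feature vector."""
    distances = {label: euclidean_distance(probe, t) for label, t in templates.items()}
    best = min(distances, key=distances.get)
    return best, distances[best]
```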

The recognition rates obtained with the two fusion schemes are summarized below; PCA-based fusion gives better results than the average rule.

Results of the fusion algorithm using the average rule:

S.No.  Input        Recognition (%)
1      Face & ear   67
2      Face         75
3      Ear          25

Results of the fusion algorithm using the PCA rule:

S.No.  Input        Recognition (%)
1      Face & ear   90
2      Face         100
3      Ear          45

Figure 2. Test images

Figure 3. Equivalent images

  4. Fisher Linear Discriminant Analysis

    The Fisherface method uses both PCA and LDA to produce a subspace projection matrix. Unlike PCA alone, however, the Fisherface method takes advantage of within-class information, minimizing the variation within each class while still maximizing the separation between classes.
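A simplified Fisherface-style projection, written as a sketch in NumPy (our own implementation of the standard PCA-then-LDA recipe, with the numbers of retained PCA and FLD components as assumed parameters), looks roughly as follows:

```python
import numpy as np

def fisherfaces(X, y, n_pca, n_fld):
    """X: (n_samples, n_pixels) flattened images; y: class labels.
    Returns a projection matrix of shape (n_pixels, n_fld) combining PCA and FLD."""
    y = np.asarray(y)
    classes = np.unique(y)
    mean = X.mean(axis=0)
    Xc = X - mean

    # PCA step: keep n_pca leading directions so the within-class scatter is invertible.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    W_pca = vt[:n_pca].T                     # (n_pixels, n_pca)
    Z = Xc @ W_pca                           # PCA-reduced training data

    # Fisher step: within-class (Sw) and between-class (Sb) scatter in PCA space.
    d = Z.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    overall = Z.mean(axis=0)
    for c in classes:
        Zc = Z[y == c]
        mu = Zc.mean(axis=0)
        Sw += (Zc - mu).T @ (Zc - mu)
        diff = (mu - overall).reshape(-1, 1)
        Sb += len(Zc) * (diff @ diff.T)

    # Maximize |W^T Sb W| / |W^T Sw W| via the (pseudo-inverse) eigenproblem.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    W_fld = eigvecs[:, order[:n_fld]].real   # (n_pca, n_fld)

    return W_pca @ W_fld                     # project a sample x as (x - mean) @ W
```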

  5. Database Used

    The training database plays a very important role in achieving good recognition performance from a biometric system. We have used the Olivetti Research Laboratory (ORL) face database, which contains 400 images, 10 for each of 40 different subjects. For the ear we have used the IIT Delhi ear database, which contains 375 images.

    Figure 4. ORL face database

    Figure 5. IIT Delhi ear database
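As a hedged illustration of how such training data could be organized, the sketch below loads the ORL images assuming the standard s1...s40 folder layout with ten PGM images per subject; the train/test split size and the file layout are assumptions for illustration, not details taken from the paper.

```python
import os
import numpy as np
from PIL import Image

def load_orl(root, train_per_subject=6, size=(92, 112)):
    """Load the ORL face database (40 subjects x 10 images) into flattened
    train/test arrays, resizing every image to a common size first."""
    X_train, y_train, X_test, y_test = [], [], [], []
    for subject in range(1, 41):
        folder = os.path.join(root, f"s{subject}")
        for idx in range(1, 11):
            img = Image.open(os.path.join(folder, f"{idx}.pgm")).convert("L")
            vec = np.asarray(img.resize(size), dtype=float).ravel()
            if idx <= train_per_subject:
                X_train.append(vec)
                y_train.append(subject)
            else:
                X_test.append(vec)
                y_test.append(subject)
    return (np.array(X_train), np.array(y_train),
            np.array(X_test), np.array(y_test))
```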

  6. Experimental Results

    Our experiment has two phases, a registration phase and an authentication phase. In the registration phase we take face and ear images as input, extract features from both, fuse the features using the average rule and using PCA, and store the fused feature vectors in the database. In the authentication phase the features of the input user are extracted, fused in the same way and compared with the database; the Euclidean distance is used to measure the difference between the two feature vectors.
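A compact sketch of the two phases is given below (our own wiring of the pieces sketched earlier; the feature extractors, the fusion function and the acceptance threshold are passed in as assumed parameters, not values from the paper).

```python
import numpy as np

def register(users, extract_face, extract_ear, fuse):
    """Registration phase: extract, fuse and store one template per enrolled user.
    users: dict mapping user_id -> (face_image, ear_image)."""
    database = {}
    for user_id, (face_img, ear_img) in users.items():
        database[user_id] = fuse(extract_face(face_img), extract_ear(ear_img))
    return database

def authenticate(face_img, ear_img, claimed_id, database,
                 extract_face, extract_ear, fuse, threshold=0.5):
    """Authentication phase: fuse the probe features and compare them with the
    claimed user's stored template using the Euclidean distance."""
    probe = fuse(extract_face(face_img), extract_ear(ear_img))
    distance = float(np.linalg.norm(probe - database[claimed_id]))
    return ("accept" if distance <= threshold else "reject"), distance
```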

  7. Limitations of Face and Ear Recognition

    Facial features are susceptible to many factors such as mood, health, facial hair and facial expressions. Facial hair affects feature extraction, and facial expressions can likewise change the features extracted for an individual; although some feature extraction techniques are resilient to expression changes up to a point, this is still considered an obstacle to the reliability of any face recognition system. For the ear, sources of occlusion include long hair, earrings and multiple piercings. The recognition system presented in our work offers a partial solution to these occlusions: the ear image is divided into segments and a separate classifier is used for each segment, as sketched below.
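One way the segment-based idea could be realized, shown here only as our own illustrative sketch, is to split the ear image into horizontal bands, let a separate classifier label each band, and take a majority vote so that an occluded band cannot dominate the decision; the per-segment classifiers and their predict interface are assumptions.

```python
import numpy as np
from collections import Counter

def classify_by_segments(ear_img, segment_classifiers, n_segments=4):
    """Split the ear image into n_segments horizontal bands, classify each band
    with its own classifier, and return the majority-vote identity together with
    the fraction of bands that agreed.  Each classifier must expose
    predict(band) -> hashable identity label."""
    bands = np.array_split(ear_img, n_segments, axis=0)
    votes = [segment_classifiers[i].predict(band) for i, band in enumerate(bands)]
    label, count = Counter(votes).most_common(1)[0]
    return label, count / n_segments
```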

  8. Applications of Biometric Systems

    The applications of biometrics can be divided into the following three main groups:

    Commercial applications such as computer network login, electronic data security, e-commerce, Internet access, ATMs, credit cards, physical access control, cellular phones, PDAs, medical records management, distance learning, etc.

    Government applications such as national ID cards, correctional facilities, driver's licenses, social security, welfare disbursement, border control, passport control, etc.

    Forensic applications such as corpse identification, criminal investigation, terrorist identification, parenthood determination, missing children, etc.

  9. Conclusion

    Multimodal biometric systems elegantly address several of the problems present in unimodal systems. By combining multiple sources of information, these systems improve matching performance, increase population coverage, deter spoofing and facilitate indexing, and incorporating user-specific parameters can further improve their performance. With the widespread deployment of biometric systems in civilian and government applications, it is only a matter of time before multimodal biometric systems begin to influence the way in which identity is established. In our experiments we compared the results of the average rule and PCA; the results show that PCA gives better performance than the average rule.

    References

    1. Yong-Mei Zhang, Li Mai, Bo Li, "Face and Ear Fusion Recognition Based on Multi Agent", Proceedings of the Seventh International Conference on Machine Learning and Cybernetics, Kunming, pp. 46-51, 12-15 July 2008.

    2. Md. Maruf Monwar and Marina Gavrilova, "FES: A System for Combining Face, Ear and Signature Biometrics Using Rank Level Fusion", IEEE Fifth International Conference on Information Technology: New Generations, pp. 1-5, 2008.

    3. Theoharis Theoharis, Georgios Passalis, George Toderici, Ioannis A. Kakadiaris, "Unified 3D face and ear recognition using wavelets on geometry images", Pattern Recognition, pp. 796-804, 2008.

    4. S. K. Dahel and Q. Xiao, "Accuracy Performance Analysis of Multimodal Biometrics", IEEE Workshop on Information Assurance, pp. 1-4, 2003.

    5. Raghavendra R. and Hemantha Kumar G., "Qualitative Weight Assignment for Multimodal Biometric Fusion", IEEE Seventh International Conference on Advances in Pattern Recognition, pp. 193-196, 2009.

    6. Dakshina Ranjan Kisku, "Multisensor Biometric Evidence Fusion for Person Authentication using Wavelet Decomposition and Monotonic-Decreasing Graph", IEEE, pp. 200-208, 2009.

    7. Nedeljko Cvejic, David Bull and Nishan Canagarajah, "Region-Based Multimodal Image Fusion Using ICA Bases", IEEE Sensors Journal, Vol. 7, No. 5, pp. 743-751, May 2007.

    8. Sabra Dinerstein, Jonathan Dinerstein, Dan Ventura, "Robust Multi-Modal Biometric Fusion via Multiple SVMs", IEEE, pp. 1530-1535, 2007.

    9. A. K. Jain, K. Nandakumar, A. Ross, "Score normalization in multimodal biometric systems", Pattern Recognition, pp. 2270-2285, 2005.

    10. Karthik Nandakumar, "Multibiometric Systems: Fusion Strategies and Template Security", PhD thesis, Michigan State University, pp. 1-228, 2008.

    11. Fernando Alonso-Fernandez, "Biometric Sample Quality and its Application to Multimodal Authentication Systems", PhD thesis, Madrid University, pp. 1-202, 2006.

    12. Julian Fierrez-Aguilar, "Adapted Fusion Schemes for Multimodal Biometric Authentication", PhD thesis, Madrid University, pp. 1-161, 2006.

    13. Dakshina Ranjan Kisku, "Multisensor Biometric Evidence Fusion for Person Authentication using Wavelet Decomposition and Monotonic-Decreasing Graph", IEEE, pp. 200-208, 2009.

    14. J. Kittler, F. Roli (Eds.), "Multiple Classifier Systems", First International Workshop, Volume 1857 of Lecture Notes in Computer Science, Springer, pp. 18-21, 2000.

    15. A. K. Jain, R. P. W. Duin, J. Mao, "Statistical Pattern Recognition: A Review", IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 6-12, 2000.

    16. D. Wolpert, "Stacked Generalization", Neural Networks, pp. 241-259, 1992.

    17. M. I. Jordan, R. A. Jacobs, "Hierarchical Mixtures of Experts and the EM Algorithm", Neural Computation, pp. 181-214, 1994.

    18. http://www.cl.cam.ac.uk/Research/DTG/attarchive:pub/data/att_faces.zip

    19. http://www4.comp.polyu.edu.hk
