- Authors : Poornima Byahatti, Sanjeevkumar M. Hatture
- Paper ID : IJERTCONV5IS06016
- Volume & Issue : NCETAIT – 2017 (Volume 5 – Issue 06)
- Published (First Online): 24-04-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Fusion Model for Multimodal Biometric System
Poornima Byahatti*
Department of Computer Science & Engineering, Basaveshwar Engineering College, Bagalkot-587103, Karnataka, India
Sanjeevkumar M. Hatture**
Department of Computer Science & Engineering, Basaveshwar Engineering College, Bagalkot-587103, Karnataka, India
Abstract:- Biometrics is the science of recognizing an individual based on his or her behavioural or physiological features such as voice, face, fingerprint, iris, signature, etc. Based on the number of traits used, biometric systems are categorized into two types. Unimodal systems are vulnerable to a variety of problems such as spoofing, noisy data, non-universality, inter-class similarities and intra-class variations. Including multiple sources of data for establishing identity can overcome some of the restrictions of unimodal systems. Biometric systems that permit the fusion of two or more biometric traits are known as multimodal biometric systems. The sources of information from the different traits are acquired and pre-processed, features are extracted, and the features are compared with the stored templates in the database. Finally, based on the matching, a decision about recognition is made. The fusion of information from the biometric traits can take place at any of several levels. Various fusion techniques are available for multimodal biometrics, such as sensor level, feature level, score level, rank level and decision level fusion.
This paper presents a fusion model for a multimodal biometric system using face and voice biometric traits. The proposed fusion model involves feature level, match score level, rank level and decision level fusion. Log-Gabor and LBP features are used for facial feature extraction, and voice features are extracted using MFCC and LPC. Matching is carried out by comparing the test fused feature vectors with all training data using the Euclidean distance measure. A KNN classifier is used for decision making. In future work it is planned to evaluate the performance of the various fusion techniques based on EER, FAR and FRR.
Keywords: Multimodal biometric systems, feature level fusion, match score level fusion, rank level fusion, decision level fusion.
I. INTRODUCTION
With the recent advancement of information technology, there is a need for authentication and authorization techniques to secure resources. There are a number of ways to establish authentication and authorization, but biometric authentication outperforms the other techniques. A biometric system automatically determines or verifies a person's identity based on his or her anatomical and behavioural characteristics such as palm print, face, fingerprint, vein pattern, voice and iris.
Based on the number of traits used, biometric systems are categorized into two types: unimodal biometric systems (which make use of only one trait) and multimodal biometric systems (which make use of two or more traits, algorithms or samples). Unimodal biometric systems have many inherent problems in their applications; they suffer from issues such as noisy data and spoof attacks. These problems can be overcome by multimodal biometric systems. Multi-biometric authentication can be achieved in different ways: multi-sensor systems, multi-sample systems, multi-algorithm systems, multi-instance systems and multi-modal systems. A multimodal biometric system requires the integration of data from different modalities such as face, fingerprint, retina, voice and iris. The fusion can be done in two different ways: fusion of information prior to matching and fusion after matching.
Fusion prior to matching
Fusion prior to matching can be achieved in two different ways: sensor level and feature level fusion.
Sensor level fusion: Multiple sensors are used to collect the raw data needed for the fusion strategy, and new data is produced by processing and integrating the raw data. Features can then be extracted from the newly generated data. Sensor level fusion is possible only if multiple cues of the same biometric trait are acquired from multiple compatible sensors.
Feature level fusion: Features are extracted from the multiple sources of information and are integrated into a joint feature vector. This new high-dimensional feature vector represents an individual. In order to retain only useful features, a dimensionality reduction technique must be applied, as in the sketch below.
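As an illustration, the following minimal Python sketch (assuming NumPy and scikit-learn are available, with randomly generated placeholder feature matrices) concatenates per-sample face and voice feature vectors into a joint vector and reduces it with PCA:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

def fuse_features(face_feats, voice_feats, n_components=10):
    """Feature level fusion sketch: concatenate per-sample face and voice
    feature vectors into a joint vector and reduce it with PCA."""
    # Normalize each modality separately so neither dominates the joint vector.
    face_norm = MinMaxScaler().fit_transform(face_feats)
    voice_norm = MinMaxScaler().fit_transform(voice_feats)
    joint = np.hstack([face_norm, voice_norm])       # joint feature vector
    n_components = min(n_components, *joint.shape)   # PCA dimensionality limit
    return PCA(n_components=n_components).fit_transform(joint)

# Hypothetical features: 20 samples, 128-dim face vectors, 39-dim voice vectors.
fused = fuse_features(np.random.rand(20, 128), np.random.rand(20, 39))
print(fused.shape)                                   # (20, 10)
```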
Fusion after matching
Fusion after matching can be achieved in three different ways: match score level, rank level and decision level fusion.
Matching score level fusion: The match score is a measure of the similarity between the input and template biometric feature vectors. Based on this similarity, each subsystem computes its own match score value. These individual scores are finally combined to obtain a total score, which is then passed to the decision module, after which recognition is performed.
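A minimal Python sketch of score level fusion is shown below; the matcher scores and the fusion weights are hypothetical, and min-max normalization with a weighted sum rule is used:

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to the [0, 1] range (min-max normalization)."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

def fuse_scores(face_scores, voice_scores, w_face=0.6, w_voice=0.4):
    """Weighted-sum score level fusion of two matchers' similarity scores."""
    return (w_face * min_max_normalize(face_scores)
            + w_voice * min_max_normalize(voice_scores))

# Hypothetical similarity scores of one probe against 4 enrolled identities.
face_scores = [0.82, 0.40, 0.55, 0.91]
voice_scores = [0.70, 0.35, 0.60, 0.88]
total = fuse_scores(face_scores, voice_scores)
print("Fused scores:", total, "-> identity", int(np.argmax(total)))
```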
Rank level fusion: This fusion entails consolidating the multiple ranks associated with an identity and determining a new rank that aids in establishing the final decision. Unlike score level fusion, it does not involve any normalization techniques. This fusion strategy is usually applied for the identification of an individual rather than verification.
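The following Python sketch illustrates one common consolidation rule, the Borda count; the rank lists from the face and voice matchers are hypothetical:

```python
import numpy as np

def borda_count(rank_lists):
    """Borda-count rank level fusion: each matcher ranks the enrolled
    identities (0 = best match); a lower consolidated sum is a better rank."""
    ranks = np.asarray(rank_lists)     # shape: (n_matchers, n_identities)
    totals = ranks.sum(axis=0)         # consolidated rank score per identity
    return np.argsort(totals)          # identities ordered best-to-worst

# Hypothetical ranks from a face matcher and a voice matcher over 4 identities.
face_ranks = [0, 2, 1, 3]
voice_ranks = [1, 0, 2, 3]
print("Final ranking:", borda_count([face_ranks, voice_ranks]))
```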
Decision level fusion: When only the decisions output by the individual biometric matchers are available, decision level fusion is carried out. Here, a separate authentication decision is computed for each biometric trait, and these decisions are then combined to produce a final vote.
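A minimal Python sketch of decision level fusion with simple AND, OR and majority-vote rules is shown below; the accept/reject inputs are hypothetical:

```python
def fuse_decisions(decisions, rule="majority"):
    """Combine per-trait accept/reject decisions (decision level fusion).
    'and' accepts only if every matcher accepts, 'or' if any matcher accepts,
    and 'majority' if more than half of the matchers accept."""
    if rule == "and":
        return all(decisions)
    if rule == "or":
        return any(decisions)
    return sum(decisions) > len(decisions) / 2   # majority vote

# Hypothetical accept (True) / reject (False) outputs from face and voice matchers.
print(fuse_decisions([True, False], rule="and"))   # False
print(fuse_decisions([True, False], rule="or"))    # True
```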
TABLE I: LITERATURE REVIEW

| Paper | Technique | Database | Results | Future scope |
| --- | --- | --- | --- | --- |
| [1] | Decision level fusion of iris and fingerprint (KNN, HMM and neural network classifiers) | CASIA database | Accuracy 91.50%, FAR 2.040%, FRR 14.94% | To improve performance. |
| [2] | Score level fusion of iris and face (min-max normalization) | CASIAv1 database, FRGCv2 database | GAR 96.81%, EER 1.48% | Similar improvement in the recognition rates. |
| [3] | Score level fusion of iris, face and voice (min-max normalization) | CASIA database, NIST website, XM2VTS database | GAR 92% | Integrating liveness detection with multimodal biometric systems. |
| [4] | Score level fusion of three different finger veins (product, weighted sum, max and min rules) | SDUMLA-HMT database | Weighted sum 0.11415, Min 0.1571, Max 0.08571 | Implementing other score level fusion strategies can be investigated. |
| [5] | Rank level fusion of face and iris (SCR technique) | NIST BSSR1 score database, V2.0 and LG4000 | Multi-instance recognition rate 83.6%, multi-algorithm RR 92.63%, multi-modal RR 93.4% | Combinations of more than two cues may form a future scope of the work. |
| [6] | Decision module with SIFT and DDMFCC features | Own database | RR 90% | Combining the recognition techniques with authentication methods such as those based on fingerprints is a goal of future research. |
| [7] | Feature level fusion of ear and iris (PCA technique) | IIT Delhi Ear database, CASIA Version 1.0 | FAR 0.05%, FRR 0.075%, GAR 93% | Improvement of performance with an advanced, suitable feature extraction method. |
| [8] | Feature extraction level fusion of fingerprint and iris | CASIA-FingerprintV5, CASIA-IrisV4 | FAR 1%, GAR 98% | Combining data at the feature extraction, match score and decision levels. |
| [9] | Rank level fusion of face and ear (Borda count and logistic regression) | Own database | GAR 98%, FAR 0.1% | Testing the proposed bimodal system on a larger dataset. |
| [10] | Match score level fusion (sum of scores) with features extracted using PCA | Own database | Accuracy 97%, FAR 2.4%, FRR 0.8% | To achieve higher accuracy. |
Figure 1: Classification of fusion techniques
A generic biometric system consists of various modules, namely the sensor module, feature extraction module, matcher module and decision module. In a multimodal biometric system, fusion can be performed depending upon the type of information available in any of these modules. According to Sanderson and Paliwal, the various levels of fusion can be classified into two broad categories: fusion before matching and fusion after matching, as shown in Figure 1. It is usually believed that a fusion scheme applied as early as possible in the recognition system is more effective.
In this paper a fusion model for a multimodal biometric system is proposed using face and voice traits. The features used for face are Log-Gabor and LBP; MFCC and LPC features are used for voice. The organization of this paper is as follows: Section II describes the literature survey, Section III lists the various issues and challenges of biometric systems, Section IV explains the proposed model, and finally Section V concludes the work.
II. LITERATURE SURVEY
In previous years, considerable effort has been made in the field of multimodal biometrics, yielding mature hybrid biometric systems. Different fusion approaches have been studied in the literature, and researchers have proposed various fusion levels with different modalities. The limitations and advantages of the various fusion techniques are understood from this survey. A review of the literature is shown in Table I.
III. ISSUES & CHALLENGES OF BIOMETRIC SYSTEMS
After performing the literature survey, some of the identified issues and challenges are discussed in the following:
Many barriers such as intra-class variations, restricted degrees of freedom, noisy data, non-universality, spoof attacks and unacceptable error rates may occur in unimodal biometric systems. A single trait provides only a single source of information for authentication, which leads to a high false acceptance rate (FAR) and false rejection rate (FRR). So there is a need for a system that combines two or more biometric traits in order to overcome the limitations of a unimodal system.
There are various issues in designing multimodal biometric sensors. Sensors should automatically recognize the operating environment and communicate with other system components to immediately adjust settings in order to deliver optimal data; this adjustment is a challenging area. The sensor should be fast in gathering quality images from a distance and should have low cost with no failures to enroll. The image captured by the sensor also affects recognition, because of improper positioning of and pressure by the biometric trait on the sensor during data acquisition. Multimodal biometric systems can be improved by enhancing matching algorithms, combining multiple sensors, and examining the scalability of biometric systems.
Biometric systems that integrate information at an early stage of processing are believed to be more effective than systems that perform fusion at a later stage. Fusion at the feature level is therefore expected to offer better recognition results. However, fusion at this level is difficult to achieve in practice because:
- Feature sets of the various modalities may not be compatible.
- Most commercial biometric systems do not give access to the feature sets which they utilize in their products.
Fusion at the match score level also has some limitations: the scores obtained from different matchers are heterogeneous, and they are not necessarily within the same range. Normalization schemes therefore need to be applied, and these schemes can be complex.
Fusion at the decision level is considered to be rigid due to the availability of only limited information.
IV. PROPOSED MODEL
Proposed model
Figure 2: Block diagram of proposed model
A multimodal biometric system has all the conventional modules of a unimodal system, such as the capturing module, feature extraction module, comparison module and decision-making module. In addition, it has a fusion technique to integrate the information from two different traits. The fusion can be done at any of the following stages: during feature extraction, during comparison of samples with stored biometric templates, and during decision making.
Fusion can be used to address a number of issues faced in the implementation of biometric systems, such as accuracy, efficiency, robustness, applicability and universality. Figure 2 shows the block diagram for all fusion levels. In this framework, both Log-Gabor and Local Binary Pattern (LBP) features are extracted from the face biometric trait, and Mel-Frequency Cepstral Coefficients (MFCC) and Linear Predictive Coding (LPC) features are extracted from the voice biometric trait. In the comparison module, a Euclidean distance based method is used for testing both traits. Finally, the decision is made using a KNN classifier, as sketched below.
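The following Python sketch (assuming scikit-learn, with randomly generated placeholder feature vectors and labels) illustrates the comparison and decision-making steps using Euclidean distance and a KNN classifier:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical fused training vectors (one per enrolled sample) and identity labels.
train_features = np.random.rand(30, 60)
train_labels = np.repeat(np.arange(10), 3)        # 10 subjects, 3 samples each

# KNN with Euclidean distance plays the role of the comparison
# and decision-making modules described above.
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(train_features, train_labels)

test_feature = np.random.rand(1, 60)              # fused test feature vector
print("Predicted identity:", knn.predict(test_feature)[0])
```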
Feature extraction of biometric traits
Steps involved in Facial Biometric Feature extraction
Figure 3: Flow of facial feature extraction
Figure 3 shows the steps involved in facial feature extraction. First, the face image is read and pre-processed using histogram equalization to obtain an enhanced image. Features are then extracted using the Log-Gabor and LBP feature extraction methods. Gabor filters are a traditional choice for obtaining localized frequency information. An alternative to the Gabor function is the Log-Gabor function proposed by Field; Log-Gabor filters can be constructed with arbitrary bandwidth, the bandwidth can be optimized to produce a filter with minimal spatial extent, and they offer good simultaneous localization of spatial and frequency information. LBP is a simple yet very efficient texture operator which labels the pixels of an image by thresholding the neighbourhood of each pixel and interpreting the result as a binary number. The template is constructed by fusing the Log-Gabor and LBP features, which is followed by knowledge base construction.
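A minimal Python sketch of this facial feature extraction stage is given below; it assumes OpenCV and scikit-image are available, uses a single radial log-Gabor filter applied in the frequency domain (no orientation component), and the filter parameters and file name are hypothetical:

```python
import numpy as np
import cv2
from skimage.feature import local_binary_pattern

def face_features(gray_image, f0=0.1, sigma_ratio=0.55):
    """Extract a fused face descriptor: histogram equalization, a radial
    log-Gabor filter response, and an LBP histogram."""
    enhanced = cv2.equalizeHist(gray_image)          # expects 8-bit grayscale input

    # Radial log-Gabor filter built directly in the frequency domain.
    rows, cols = enhanced.shape
    fy, fx = np.meshgrid(np.fft.fftfreq(rows), np.fft.fftfreq(cols), indexing="ij")
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                               # avoid log(0) at the DC term
    log_gabor = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    log_gabor[0, 0] = 0.0                            # zero response at DC
    response = np.abs(np.fft.ifft2(np.fft.fft2(enhanced) * log_gabor))

    # LBP texture histogram of the enhanced image (uniform patterns, P=8, R=1).
    lbp = local_binary_pattern(enhanced, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # Fuse both descriptors into a single template vector.
    return np.concatenate([response.flatten(), lbp_hist])

# Usage with a hypothetical grayscale face image file:
# feats = face_features(cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE))
```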
Steps involved in Voice Biometric Feature extraction
Figure 4: Flow of voice feature extraction
Figure 4 shows the steps involved in voice biometric feature extraction. First, the voice signal is read and pre-processed by voice activity detection using zero-crossing analysis, noise removal and silent-portion detection. MFCC and LPC are used for feature extraction. Mel-frequency cepstral coefficients are the coefficients that collectively make up an MFC; they are derived from a type of cepstral representation of the audio clip. Linear predictive coding is a digital method for encoding an analog signal in which a particular value is predicted by a linear function of the past values of the signal. The knowledge base is constructed by fusing the MFCC and LPC features, as sketched below.
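A minimal Python sketch of this voice feature extraction stage is given below; it assumes the librosa library, uses simple silence trimming in place of full voice activity detection, and the coefficient counts and file name are hypothetical:

```python
import numpy as np
import librosa

def voice_features(wav_path, n_mfcc=13, lpc_order=12):
    """Extract a fused voice descriptor from MFCC and LPC coefficients."""
    signal, sr = librosa.load(wav_path, sr=None)

    # Crude voice activity detection: trim leading and trailing silence.
    voiced, _ = librosa.effects.trim(signal, top_db=25)

    # MFCCs averaged over time to obtain a fixed-length vector.
    mfcc = librosa.feature.mfcc(y=voiced, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

    # LPC coefficients of the voiced segment.
    lpc = librosa.lpc(voiced, order=lpc_order)

    # Fuse both descriptors into a single template vector.
    return np.concatenate([mfcc, lpc])

# Usage with a hypothetical recording:
# feats = voice_features("speaker01.wav")
```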
Construction of the knowledge base completes the training phase of the biometric system, which is followed by the testing phase of the system.
V. CONCLUSION
Combining multimodal data is a very promising trend, both in experiments and in real-life biometric authentication applications. Multimodal biometric systems can overcome some of the restrictions of unimodal systems. For example, the problem of non-universality is addressed, since multiple traits together can ensure sufficient population coverage. Multimodal biometric systems also make it difficult for an intruder to simultaneously spoof the multiple biometric traits of a registered user. Fusion of the various biometric data is the key to multimodal biometrics. Fusion can occur at various levels; the most popular is the score level, where the scores output by the individual modalities are integrated.
In this paper, various fusion techniques at various levels are discussed, and the limitations of these techniques are also studied. The paper presents a fusion model involving face and voice identifiers. In future work it is planned to carry out a performance evaluation of the fusion techniques using multimodal biometrics by calculating accuracy in terms of FAR and FRR. This evaluation will help identify the most effective fusion technique for human recognition.
REFERENCES
[1] Suneet Narula Garg, Renu Vig, Savita Gupta, "Analysis of Decision Level Fusion in Multimodal Biometrics using Iris and Fingerprint," 3rd International Conference on Electrical, Electronics, Engineering Trends, Communication, Optimization and Sciences, pp. 409-416, 2016.
[2] Kirti V. Awalkar, Sanjay G. Kanade, Dattatray V. Jadhav, Pawan K. Ajmera, "A Multi-modal and Multi-algorithmic Biometric System Combining Iris and Face," International Conference on Information Processing, Vishwakarma Institute of Technology, pp. 496-501, 2015.
[3] Sheetal Chaudhary, Rajender Nath, "A New Multimodal Biometric Recognition System Integrating Iris, Face and Voice," International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 5, Issue 4, pp. 145-150, 2015.
[4] Fateme Saadat, Mehdi Nasri, "A Multibiometric Finger Vein Verification System Based on Score Level Fusion Strategy," Second International Congress on Technology, Communication and Knowledge, Mashhad Branch, Islamic Azad University, Mashhad, Iran, pp. 501-507, 2015.
[5] Renu Sharma, Sukhendu Das, Padmaja Joshi, "Rank Level Fusion in Multibiometric Systems," Fifth IEEE National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics, 2015.
[6] Tudor Barbu, Adrian Ciobanu, Mihaela Luca, "Multimodal Biometric Authentication Based on Voice, Face and Iris," 5th IEEE International Conference on E-Health and Bioengineering (EHB), 2015.
[7] M. Fathima Nadheen, S. Poornima, "Feature Level Fusion in Multimodal Biometric Authentication System," International Journal of Computer Applications, Vol. 69, No. 18, pp. 36-40, 2013.
[8] David Marius Daniel, Borda Monica, "A Data Fusion Technique Designed for Multimodal Biometric Systems," 10th IEEE International Symposium on Electronics and Telecommunications, pp. 155-158, 2012.
[9] Amioy Kumar, Madasu Hanmandlu, Shantaram Vasikarla, "Rank Level Integration of Face Based Biometrics," Ninth IEEE International Conference on Information Technology - New Generations, pp. 36-40, 2012.
[10] Nageshkumar M., Mahesh P. K., M. N. Shanmukha Swamy, "An Efficient Secure Multimodal Biometric Fusion Using Palmprint and Face Image," IJCSI International Journal of Computer Science Issues, Vol. 2, pp. 49-53, 2009.