- Open Access
- Authors : Manasa G , Prathusha J S , Supriya Y N , Dr. Saritha Chakrasali
- Paper ID : IJERTCONV4IS29048
- Volume & Issue : ICIOT – 2016 (Volume 4 – Issue 29)
- Published (First Online): 24-04-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Multimodal Biometrics: Feature Level Fusion of Face and Iris Biometrics
Manasa G, Prathusha J S, Supriya Y N, Dr. Saritha Chakrasali (Professor)
Dept. of ISE, BNM Institute of Technology, Bangalore, India
Abstract: Biometrics refers to the automatic identification of a person based on his or her physical traits; individuals under surveillance can also be identified. Various biometric traits such as face, fingerprint, hand geometry, retina, iris, signature, vein and voice can be used for verification. This work implements a verification scheme comprising two biometric traits, face and iris. Such a system significantly reduces the chance of a false match compared with a unimodal biometric system, since both traits must match. In the training phase, a histogram is constructed for each face image, from which the features are extracted. For the iris, the center coordinates and the radius of the pupil are first determined, after which the Discrete Cosine Transform (DCT) is applied to each cropped iris image. Fusion is then carried out by concatenating the face and iris features, followed by matching with Euclidean distance to determine a person's identity.
Index Terms: Histogram, Discrete Cosine Transform, Normalization, Feature level fusion.
I. INTRODUCTION
Biometrics is the science of recognizing an individual based on his/her physical or behavioral traits. A biometric system is a pattern recognition system that operates by acquiring biometric data from an individual, extracting a feature set from the data, and comparing this feature set with the template set stored in the database. Multimodal biometric systems combine two or more biometric modalities for verification or identification. Compared with unimodal systems, they provide higher accuracy and lower false acceptance and false rejection rates, at the cost of added complexity. A generic biometric system has four modules: a sensor module; a quality assessment and feature extraction module; a matching and decision-making module; and a system database module. The sensor module acquires the raw biometric data of the individual. In the second module, the quality of the raw data is enhanced and the extracted features are stored in the database as a template. The third module compares the extracted features with the stored template and validates the identity. The system database acts as a repository for the biometric information [1].
A multimodal system can operate in either serial or parallel mode. In the serial mode, the output of one modality is used to reduce the number of possible identities before the next modality is used. In the parallel mode, the information from multiple modalities is used simultaneously to perform recognition. Information fusion can take place at three levels: feature level, match score level and decision level. In feature level fusion, the extracted data are encoded into a joint feature vector, which is then compared with the enrollment template stored in the database. At the match score level, each subsystem computes its own matching score based on the proximity of the feature vector and the template [1]. In decision level fusion, a separate authentication decision is made for each biometric trait. Feature level fusion can be carried out using PCA and the Discrete Wavelet Transform [2]; score level fusion has been based on PCA, subspace LDA, spPCA, mPCA and LBP [3].
II. PROPOSED SYSTEM
In the proposed work, a histogram was first constructed for each face image, from which the features were extracted. For the iris features, the center coordinates and the radius of the pupil were first determined, after which the Discrete Cosine Transform (DCT) was applied to each cropped iris image. Fusion was then carried out by concatenating the features [7], followed by matching based on Euclidean distance to determine whether a person was authenticated. The multimodal biometric verification was performed on both a standard database (Face94) and a real database consisting of face and iris images of real subjects, and results were obtained successfully for both. The modules that have been implemented are as follows:
A. Face feature extraction
In this work, Histogram-based feature extraction is employed. A histogram generally refers to the distribution of numerical data. In the context of image processing, a histogram deals with pixel intensity values. Every image has different intensity values. A histogram is a graph that represents the number of pixels in an image at each of the intensity values [4].
In the proposed work, the facial images were first converted into grayscale images. A total of 256 intensity values are possible for an 8-bit grayscale image, so the maximum histogram level was set to 256. Histogram bins are created as patch objects and plotted with a face color that maps to the first color in the current colormap (by default, blue) and with black edges. The bin width was set to 9 intensity values, giving approximately 29 bins (256/9 ≈ 28.4); thus one feature value was extracted for every 9 pixel intensity values. Figure 2.1 shows the histogram distribution for one sample each from the standard and real databases: Figure 2.1(a) corresponds to sample image 1 of Table I and Figure 2.1(b) to image 1 of Table II.
Figure 2.1 Histogram distribution of a sample image.
(a) For Standard database (b) For Real database
The x-axis shows the range of pixel intensity values and the y-axis shows the counts of these intensities. This extraction of feature values is repeated for both the training and testing face images.
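As an illustration of this step, below is a minimal Python sketch of histogram-based feature extraction, assuming the face image arrives as an 8-bit grayscale NumPy array; the function name and `bin_width` parameter are illustrative, not taken from the original implementation.

```python
import numpy as np

def face_histogram_features(gray_image: np.ndarray, bin_width: int = 9) -> np.ndarray:
    """Histogram features for an 8-bit grayscale face image.

    Pixels are counted in intensity bins of width `bin_width` over 0-255,
    yielding roughly 29 feature values when bin_width = 9 (256/9 ≈ 28.4).
    """
    # Bin edges placed every `bin_width` intensity levels across the 8-bit range.
    edges = np.arange(0, 256 + bin_width, bin_width)
    counts, _ = np.histogram(gray_image.ravel(), bins=edges)
    return counts.astype(np.float64)
```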
B. Iris feature extraction
The first step in iris feature extraction is segmentation of the iris, performed with Daugman's integrodifferential operator, which returns the center coordinates and radii of both the iris and pupil boundaries [5]. After the localized iris image was obtained, the next step was to crop the iris image appropriately. For this, the parameters row start, row end, column start and column end were calculated. Figure 2.2 shows the cropped iris of a sample image for both the standard and real databases, from subjects 9 and 1 of Tables I and II respectively.
Figure 2.2 Cropped iris image
(a) For Standard database (b) For Real database
The Discrete Cosine Transform (DCT) was then applied to each of the cropped images. The DCT captures the frequency content of an image, transforming it from the spatial domain to the frequency domain [6].
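To make the cropping and transform concrete, here is a brief Python sketch, assuming the iris center (cx, cy) and radius r come from a Daugman-style segmentation step; the function name and the use of SciPy's `dctn` are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.fft import dctn  # 2-D type-II DCT

def iris_dct_features(eye_image: np.ndarray, cx: int, cy: int, r: int) -> np.ndarray:
    """Crop a square region around the localized iris and apply the 2-D DCT.

    The crop bounds mirror the row/column start and end parameters
    described above; bounds are clipped to the image borders.
    """
    row_start, row_end = max(cy - r, 0), min(cy + r, eye_image.shape[0])
    col_start, col_end = max(cx - r, 0), min(cx + r, eye_image.shape[1])
    cropped = eye_image[row_start:row_end, col_start:col_end]
    coeffs = dctn(cropped.astype(np.float64), norm="ortho")
    return coeffs.ravel()
```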
C. Normalization of face and iris features
In the proposed work, feature level fusion [7] was employed, fusing the two kinds of features after feature extraction for both face and iris. Before fusion, the features were normalized: normalization rescales the feature values so that differences in order of magnitude and in distribution between the iris and face features are eliminated, and it is also performed to obtain good matching performance. In this work, the face and iris features were normalized using the z-score model.
Let $a_i^j$ be a d-dimensional iris feature of the j-th iris training sample from the i-th class, and let $b_i^j$ denote a d-dimensional face feature of the j-th face training sample from the i-th class. The iris feature set and the face feature set are shown in equations 1 and 2 respectively [7]:

$A = (a_1^1, \ldots, a_1^m, a_2^1, \ldots, a_N^m)$  (1)
and

$B = (b_1^1, \ldots, b_1^m, b_2^1, \ldots, b_N^m)$  (2)

where N is the number of classes and m is the number of training samples per class.
Let $A_k$ be the k-th row of the iris feature set A. To compute the normalized component, first compute $A'_k$ as shown in equation 3 [7]:
$A'_k = \dfrac{A_k - \mu_k}{\sigma_k}$  (3)
where $\mu_k$ denotes the mean value of $A_k$ and $\sigma_k$ is the standard deviation of $A_k$. The normalized component $X_k$ is computed as shown in equation 4:
$X_k = \dfrac{A'_k - \min(A'_k)}{\max(A'_k) - \min(A'_k)}$  (4)
where $\min(A'_k)$ and $\max(A'_k)$ denote the minimum and the maximum value of $A'_k$ respectively. The normalized iris feature set is $X = (X_1, \ldots, X_D)$; in the same way, the normalized face feature set $Y = (Y_1, \ldots, Y_D)$ is obtained [7]. Figure 2.3 shows the distribution of the original components, i.e. before normalization, for both the standard and real databases; Figure 2.4 shows the distribution of the normalized components.
Figure 2.3 Iris and face features before normalization
(a) For Standard database (b) For Real database

Figure 2.4 Iris and face features after normalization
(a) For Standard database (b) For Real database
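A compact Python sketch of the two-step normalization in equations (3) and (4) follows; the epsilon guard against division by zero is an added assumption, not part of the paper's formulation.

```python
import numpy as np

def normalize_features(features: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Normalize one feature row: z-score (eq. 3), then min-max to [0, 1] (eq. 4)."""
    z = (features - features.mean()) / (features.std() + eps)   # equation (3)
    return (z - z.min()) / (z.max() - z.min() + eps)            # equation (4)
```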
D. Feature level fusion
Concatenating the extracted features is one of the simplest forms of feature level fusion. For homogeneous feature vectors, a single feature vector can be computed with AND, OR, XOR or other operations; non-homogeneous feature vectors are concatenated to form a single vector. Equation 5 shows the iris feature set $X = (X_1, \ldots, X_D)$ and the face feature set $Y = (Y_1, \ldots, Y_D)$ concatenated into one long vector [7]:
$Z = [X_1, \ldots, X_D; Y_1, \ldots, Y_D]$  (5)
Finally, the Euclidean distance was selected to classify the fusion features of face and iris.
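The fusion and matching steps reduce to a concatenation and a nearest-neighbour search; the sketch below assumes enrolled templates are held in a dictionary keyed by subject identity, which is an illustrative data structure rather than the paper's.

```python
import numpy as np

def fuse(iris_feats: np.ndarray, face_feats: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate the normalized iris and face
    feature vectors into one long vector, as in equation (5)."""
    return np.concatenate([iris_feats, face_feats])

def match(probe: np.ndarray, templates: dict) -> str:
    """Return the enrolled identity whose fused template lies closest
    to the probe under Euclidean distance."""
    return min(templates, key=lambda name: np.linalg.norm(probe - templates[name]))
```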
III. RESULTS AND DISCUSSIONS
A. Results
The face images of the Face94 database were used as the standard database. The iris images were obtained from Google Images for the standard database. Table I shows the face and the iris images considered for the standard database. Table II shows the face and the iris images considered for the real database.
TABLE I. STANDARD DATABASE
[Face image set 1, face image set 2 and iris image for each of subjects 1-10; images not reproduced]

TABLE II. REAL DATABASE
[Face image set 1, face image set 2 and iris image for each of subjects 1-11; images not reproduced]
Figure 3.1 User Interface
Figure 3.1 shows the Graphical User Interface (GUI). All the images were trained when the user clicked on "Training". To perform the testing, the user provided the face and iris images by clicking on "Input Image Facial" and "Input Image IRIS". When the user clicked on the "Testing" button, the system displayed whether the person was "Correctly Recognized" or "Incorrectly Recognized".
Figure 3.2 shows the testing phase for a correctly recognized image of subjects 8 and 4 taken from the standard database (Table I) and real database (Table II) respectively.
Figure 3.2 Correctly recognised phase
(a) For Standard database (b) For Real database
Figure 3.3(a) shows the testing phase for an incorrectly recognized image from the standard database (Table I); the face image of subject 5 and the iris image of subject 3 were chosen. Figure 3.3(b) shows the testing phase for an incorrectly recognized image from the real database; the face image of subject 7 and the iris image of subject 10 (Table II) were chosen.
Figure 3.3 Incorrectly recognised phase
(a) For Standard database (b) For Real database
The mean and standard deviation were computed for each of the face and iris feature sets, which were then normalized. Using these normalized features, fusion was performed by concatenating the normalized feature sets into one long vector. Finally, Euclidean distance was used to match the training and testing images for recognition.
B. Discussions
The proposed work correctly recognized the face and iris images of the individuals considered. The face and iris images for the real database were captured with a smartphone. It was observed that recognition failed if the iris images were taken without flash, whereas face images could be taken with or without flash. The face images were captured focusing mainly on the face area, and the iris images were taken at very close range. The maximum iris radius considered was 12, determined through a series of trial-and-error experiments.
IV. CONCLUSIONS AND FUTURE ENHANCEMENT
The proposed work performed recognition by concatenating face and iris biometrics. Recognition succeeded when the face and iris images belonged to the same individual; otherwise, recognition failed.
First, the face features were extracted using histogram-based processing, which captures the distribution of pixel intensity values. The radii of the pupil and iris were used to crop the iris image, and the iris features were then extracted using the DCT.
This work combined face and iris images for authentication by concatenating their features. The application was first implemented successfully on the standard database and then extended to a real database of about 11 individuals; positive results were obtained for both. It was observed that images captured with a camera of 10 MP resolution or above are sufficient for this work. A suggested enhancement is to capture a single image for both the face and iris biometrics.
ACKNOWLEDGMENT
We would like to acknowledge BNM Institute of Technology, Bangalore, and the Department of Electrical Engineering, Indian Institute of Science (IISc), Bangalore, for guiding and motivating us to complete this paper.
REFERENCES
[1] Arun A. Ross, Karthik Nandakumar and Anil K. Jain, "Biometrics: When Identity Matters," in Handbook of Multibiometrics, 1st ed. New York: Springer, 2006.
[2] S. Anu H. Nair, P. Aruna and M. Vadivukarassi, "PCA Based Image Fusion of Face and Iris Biometric Features," CSE Department, Annamalai University, Annamalai Nagar, Chidambaram, Tamil Nadu, India, vol. 1, issue 2, 2013.
[3] Maryam Eskandari, Onsen Toygar and Hasan Demirel, "A New Approach for Face-Iris Multimodal Biometric Recognition Using Score Fusion," Dept. of Computer Science, Eastern Mediterranean University, vol. 27, issue 3, May 2013.
[4] A. Marion, An Introduction to Image Processing. Chapman and Hall, 1991.
[5] Anirudh Sivaraman, "Iris Segmentation Using Daugman's Integrodifferential Operator," July 2007.
[6] Syed Ali Khayam, "The Discrete Cosine Transform (DCT): Theory and Application," Department of Electrical & Computer Engineering, Michigan State University, March 10, 2003.
[7] Zhifang Wang, Erfu Wang, Shuangshuang Wang and Qun Ding, "Multimodal Biometric System Face-Iris Fusion Feature," Journal of Computers, vol. 6, no. 5, May 2011.