- Open Access
- Authors : Manisha M. Khaladkar, Sanjay R. Ganorkar
- Paper ID : IJERTV1IS4124
- Volume & Issue : Volume 01, Issue 04 (June 2012)
- Published (First Online): 30-06-2012
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Comparative Analysis for Iris Recognition
Manisha M. Khaladkar and Sanjay R. Ganorkar
Department of ETC Engineering, Sinhgad College of Engineering, Pune, M.S., India.
Associate Professor, Department of ETC Engineering, Sinhgad College of Engineering, Pune, M.S., India.
Abstract
As the need for security systems grows, iris recognition is emerging as one of the important biometrics-based identification methods. Iris biometry has been proposed as a sound measure of personal identification. This paper compares the proposed algorithm with existing algorithms for iris recognition. In the proposed method, image preprocessing and iris localization are performed using Daugman's integro-differential operator and the Hough transform; the segmented iris is normalized with Daugman's rubber sheet model, and features are extracted using the Haar transform and a Gabor filter. Finally, two iris codes are compared using the Hamming distance and the Euclidean distance, which are fractional measures of dissimilarity.
-
Introduction
A biometric system provides automatic recognition of an individual based on some unique feature or characteristic possessed by that individual. Iris recognition, as an extremely reliable method for identity authentication, is playing an increasingly important role in many mission-critical applications, such as access control, national ID cards, border crossing, welfare distribution and missing children identification. The uniqueness of the iris pattern comes from the richness of texture details in iris images, such as freckles, coronas, crypts and furrows. It is commonly believed that it is impossible to find two persons with identical iris patterns, even if they are twins. The randomly distributed and irregularly shaped microstructures of the iris pattern make the human iris one of the most informative biometric traits. Although the human visual system can observe the distinguishing iris features effortlessly, the computational characterization and comparison of such features are far from a trivial task and have attracted much attention over the past decade.
-
Segmentation
Initially, eye images must be segmented to extract only the iris region by locating the inner (pupil) and outer boundaries of the iris. Occluding features must also be removed and the iris pattern normalised. Segmentation is important, as only accurately segmented images are suitable for the later stages of iris recognition. Daugman [1] implements integro-differential operators to detect the limbic boundary followed by the pupil boundary. An alternative segmentation method, proposed by Wildes [5], implements an edge detection operator and the Hough transform. Masek's algorithm [8] implements Canny edge detection and a circular Hough transform to segment the iris. Further techniques have been developed employing the same approach but with slight variations [4, 7, 9-11]. In contrast, Kennell et al. [12] proposed a segmentation technique with simple binary thresholding and morphological transformations to detect the pupil. Mira and Mayer [13] also implement thresholding and morphological transformations to detect the iris boundaries. Here, segmentation is done with Daugman's integro-differential operator. Two databases, MMU and Bath, are used for experimentation. The range of radius values to search over is set manually, depending on the database used. For the MMU and Bath databases, the iris radius ranges from 70 to 140 pixels, while the pupil radius ranges from 20 to 50 pixels. To reduce the processing time, only the region of interest of the image shown in figure 1a is taken for further processing; the image is cropped using statistical calculations of the coordinates, as shown in figure 1b (see the sketch below).
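As an illustration of this cropping step, the sketch below (an assumed implementation, not the authors' code; the threshold value and margin are illustrative) treats the darkest pixels as the pupil, takes their centroid as the "statistical calculation" of the coordinates, and crops a window large enough to contain the largest expected iris radius.

```python
import numpy as np

def crop_region_of_interest(eye, pupil_thresh=60, max_iris_radius=140):
    """Crop a square region of interest around the (dark) pupil.

    eye             : 2-D numpy array, grayscale eye image
    pupil_thresh    : intensity below which pixels are treated as pupil (assumed value)
    max_iris_radius : largest iris radius expected in the database (pixels)
    """
    # Coarse pupil mask: the pupil is the darkest blob in the image.
    mask = eye < pupil_thresh
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:                      # nothing dark enough: keep the full image
        return eye, (0, 0)

    # "Statistical calculation" of the coordinates: centroid of the dark pixels.
    cx, cy = int(xs.mean()), int(ys.mean())

    # Crop a window that is large enough to contain the whole iris.
    r = max_iris_radius + 10              # small safety margin (assumed)
    y0, y1 = max(cy - r, 0), min(cy + r, eye.shape[0])
    x0, x1 = max(cx - r, 0), min(cx + r, eye.shape[1])
    return eye[y0:y1, x0:x1], (x0, y0)
```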
-
Daugman's Integro-differential Operator
The integro-differential operator is defined as

\max_{(r, x_0, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r, x_0, y_0} \frac{I(x, y)}{2\pi r}\, ds \right| \qquad (1)

where I(x,y) is the eye image, r is the radius to search for, G_\sigma(r) is a Gaussian smoothing function, and s is the contour of the circle given by (r, x0, y0). The operator searches for the circular path along which there is maximum change in pixel values, by varying the radius and the centre coordinates (x0, y0) of the circular contour. To segment the iris using Daugman's integro-differential operator, the cropped image is first thresholded for a coarse estimate of the iris boundary; the thresholded image is shown in figure 2a. Statistical calculations then locate the pupil, which helps to remove the reflections inside the pupil boundary caused by the light intensity. The centres of the pupil and iris are located by scanning the cropped image, and circles are drawn with the pupil and iris radii obtained after locating the centres; the same principle is also used to locate the limbus. The black pixels in the thresholded image are removed and the circles are drawn. These circles are then superimposed on the original image, from which the features are subsequently extracted with the different feature-extraction algorithms. Figure 2b shows the iris segmented using Daugman's integro-differential operator.

Figure 1 a) Original image b) Cropped image to reduce processing time
Figure 2 a) Thresholded image b) Segmented iris using Daugman's IDO
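A minimal sketch of equation (1), for a single candidate centre, might look as follows; it is an illustrative implementation under assumed parameters, not the authors' code. In practice the same search is repeated over a grid of candidate centres and over the pupil and iris radius ranges quoted above, keeping the centre/radius pair with the largest response.

```python
import numpy as np

def integro_differential(eye, x0, y0, r_min, r_max, sigma=1.0, n_points=360):
    """Equation (1) for a fixed candidate centre (x0, y0).

    Returns the radius in [r_min, r_max) that maximises the Gaussian-smoothed
    radial derivative of the normalised circular line integral of intensity.
    """
    radii = np.arange(r_min, r_max)
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)

    # Normalised line integral of intensity around each candidate circle.
    integrals = []
    for r in radii:
        xs = np.clip((x0 + r * np.cos(theta)).astype(int), 0, eye.shape[1] - 1)
        ys = np.clip((y0 + r * np.sin(theta)).astype(int), 0, eye.shape[0] - 1)
        integrals.append(eye[ys, xs].mean())   # mean over the contour = integral / (2*pi*r)
    integrals = np.array(integrals, dtype=float)

    # Radial derivative followed by Gaussian smoothing over r.
    deriv = np.abs(np.diff(integrals))
    k = np.arange(-3 * sigma, 3 * sigma + 1)
    gauss = np.exp(-k ** 2 / (2 * sigma ** 2))
    gauss /= gauss.sum()
    response = np.convolve(deriv, gauss, mode="same")

    best = int(np.argmax(response))
    return radii[best], response[best]
```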
-
Hough Transform
To segment the iris using the Hough transform, the centres of the pupil and iris are located first, as shown in figure 2. This is done by scanning the pupil vertically and horizontally and finding the maximum run of pixels along the horizontal and vertical lines, which is taken as the diameter of the circular pupil boundary. A circular Hough transform is then applied to segment the iris and pupil by drawing the corresponding circles. The iris segmented using the Hough transform is shown in figure 3.
Figure 3 Segmented Iris using Hough Transform.
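The circular Hough transform step could, for example, be performed with OpenCV's built-in detector; the sketch below is an assumption about how such a step might be coded, and the parameter values are illustrative rather than those used in the paper.

```python
import cv2
import numpy as np

def hough_iris(eye_gray, r_min=70, r_max=140):
    """Detect the iris (limbic) boundary with a circular Hough transform.

    eye_gray must be an 8-bit single-channel image; radii follow the
    database-dependent range given in the Segmentation section.
    """
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 2)        # suppress noise before edge detection
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT,
                               dp=1, minDist=eye_gray.shape[0] // 2,
                               param1=100,   # upper Canny threshold used internally (assumed)
                               param2=30,    # accumulator threshold for centres (assumed)
                               minRadius=r_min, maxRadius=r_max)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)          # strongest circle
    return x, y, r

# The pupil boundary is found the same way with minRadius=20, maxRadius=50.
```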
-
Normalization
Once the iris region is successfully segmented from an eye image, the next stage is to transform it so that it has fixed dimensions, in order to allow comparisons. The dimensional inconsistencies between eye images are mainly due to stretching of the iris caused by pupil dilation under varying levels of illumination. Other sources of inconsistency include varying imaging distance, rotation of the camera, head tilt, and rotation of the eye within the eye socket. The normalization process produces iris regions with the same constant dimensions, so that two photographs of the same iris taken under different conditions have their characteristic features at the same spatial locations.
-
Daugmans Rubber Sheet Model
The homogeneous rubber sheet model devised by Daugman [1] remaps each point within the iris region to a pair of polar coordinates (r, θ), where r lies on the interval [0, 1] and θ is an angle in [0, 2π], as shown in figure 4a. The experimentation is done with a constant radius and a variable centre, drawing radial lines around the circle. These lines are then unwrapped into a linear (rectangular) region. All the lines have the same length; if any line is shorter, it is padded to equal length. The following figure shows the normalized iris image; the black portion at the bottom corresponds to the eyelids, eyelashes and noise removed during normalization.
Figure 4 a) Daugman's rubber sheet model b) Normalization by Daugman's rubber sheet model
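A compact sketch of the rubber sheet remapping is given below; the radial and angular resolutions are assumed values, and the eyelid/eyelash masking mentioned above is not included. Each radial line between the pupil and iris boundaries is sampled at a fixed number of points, so every unwrapped row has the same length.

```python
import numpy as np

def rubber_sheet(eye, pupil_c, pupil_r, iris_c, iris_r, n_radial=64, n_angular=360):
    """Unwrap the annular iris region to a fixed-size rectangular block.

    pupil_c, iris_c : (x, y) centres of the pupil and iris circles
    pupil_r, iris_r : their radii in pixels
    Returns an n_radial x n_angular image: rows index r in [0, 1],
    columns index theta in [0, 2*pi).
    """
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)
    rs = np.linspace(0.0, 1.0, n_radial)

    out = np.zeros((n_radial, n_angular), dtype=eye.dtype)
    for j, t in enumerate(thetas):
        # End points of the radial line at angle t on the pupil and iris boundaries.
        xp = pupil_c[0] + pupil_r * np.cos(t)
        yp = pupil_c[1] + pupil_r * np.sin(t)
        xi = iris_c[0] + iris_r * np.cos(t)
        yi = iris_c[1] + iris_r * np.sin(t)
        # Sample n_radial points between the two boundaries (nearest-neighbour lookup).
        xs = np.clip(((1 - rs) * xp + rs * xi).astype(int), 0, eye.shape[1] - 1)
        ys = np.clip(((1 - rs) * yp + rs * yi).astype(int), 0, eye.shape[0] - 1)
        out[:, j] = eye[ys, xs]
    return out
```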
-
-
Feature Extraction
In order to provide accurate recognition of individuals, the most discriminating information present in an iris pattern must be extracted. Only the significant features of the iris must be encoded so that comparisons between templates can be made. Most iris recognition systems make use of a band-pass decomposition of the iris image to create a biometric template. Different methods for feature extraction are used in this experimentation.
-
Haar Wavelet
In mathematics, the Haar wavelet is a sequence of rescaled "square-shaped" functions which together form a wavelet family or basis. Wavelet analysis is similar to Fourier analysis in that it allows a target function over an interval to be represented in terms of an orthonormal function basis. The Haar sequence is now recognized as the first known wavelet basis and is extensively used as a teaching example. The Haar wavelet is shown in figure 5.
Figure 5 The Haar wavelet
Here, to transform the image with the Haar transform, we use a 4×4 matrix, as it reduces the number of calculations and gives more precise results. The matrix is moved over the normalized image and the pixel-to-pixel intensity variation is observed. The resulting intensity-variation values are encoded for feature matching. The 4×4 Haar transform matrix used is shown below. Figure 6 shows the feature plot and the Haar-transformed image of the normalized image.
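The exact matrix does not survive in this transcription, so the sketch below uses the standard orthonormal 4×4 Haar transform matrix as an assumed stand-in and applies it block-wise to the normalized image, keeping the sign of the detail coefficients as feature bits.

```python
import numpy as np

# Standard orthonormal 4x4 Haar transform matrix (assumed form; the paper's
# exact normalization is not reproduced in the text).
H4 = np.array([[1,           1,           1,           1],
               [1,           1,          -1,          -1],
               [np.sqrt(2), -np.sqrt(2),  0,           0],
               [0,           0,           np.sqrt(2), -np.sqrt(2)]]) / 2.0

def haar_features(norm_iris):
    """Apply the 4x4 Haar transform to non-overlapping 4x4 blocks of the
    normalized iris image and binarize the detail coefficients."""
    h, w = norm_iris.shape
    h, w = h - h % 4, w - w % 4                  # trim to a multiple of 4
    img = norm_iris[:h, :w].astype(float)

    bits = []
    for i in range(0, h, 4):
        for j in range(0, w, 4):
            block = img[i:i + 4, j:j + 4]
            coeffs = H4 @ block @ H4.T           # 2-D Haar transform of the block
            detail = coeffs.flatten()[1:]        # drop the DC (average) coefficient
            bits.append(detail >= 0)             # sign of each coefficient -> one bit
    return np.concatenate(bits).astype(np.uint8)
```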
Figure 6 Haar wavelet output
-
Gabor Filter
Gabor filters are able to provide an optimum conjoint representation of a signal in space and spatial frequency. A Gabor filter is constructed by modulating a sine/cosine wave with a Gaussian. Decomposition of a signal is accomplished using a quadrature pair of Gabor filters, with a real part specified by a cosine modulated by a Gaussian and an imaginary part specified by a sine modulated by a Gaussian. Daugman [1] employs wavelet-based analysis of iris features using a quadrature pair of 2D Gabor filters. Masek implements a similar technique but instead uses log-Gabor filters [8]. Gabor filters are deficient in encoding natural images, as they over-represent low-frequency components and under-represent high-frequency components [18]. Here, we use a Gabor filter to extract the feature information, capturing the phase information of the features. The phase information has real and imaginary parts; depending on the quadrant in which it lies, it is encoded as shown in figure 7a. Figure 7b shows the real and imaginary parts, which are then encoded for feature matching.
Figure 7 a) Phase quantization b) Gabor filter output
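A sketch of the Gabor filtering and phase-quantization step is shown below; the kernel parameters are illustrative assumptions, not the values used in the experiments. The signs of the real and imaginary responses place each pixel's phase in one of the four quadrants of figure 7a, giving two bits per pixel.

```python
import numpy as np

def gabor_kernel(size=9, sigma=3.0, wavelength=8.0, theta=0.0):
    """Complex 2-D Gabor kernel: a sine/cosine wave modulated by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * 2 * np.pi * xr / wavelength)   # cos + j*sin carrier
    return envelope * carrier

def gabor_phase_code(norm_iris, kernel):
    """Filter the normalized iris and quantize the response phase to 2 bits per pixel."""
    img = norm_iris.astype(float)
    h, w = img.shape
    k = kernel.shape[0]
    resp = np.zeros((h - k + 1, w - k + 1), dtype=complex)
    for i in range(resp.shape[0]):                        # direct sliding-window filtering
        for j in range(resp.shape[1]):
            resp[i, j] = np.sum(img[i:i + k, j:j + k] * kernel)

    # Quadrant encoding: bit 1 = sign of the real part, bit 2 = sign of the imaginary part.
    return np.stack([(resp.real >= 0), (resp.imag >= 0)]).astype(np.uint8)
```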
-
Matching
The template that is generated in the feature encoding process will also need a corresponding matching metric, which gives a measure of similarity between two iris templates. This metric should give one range of values when comparing templates generated from the same eye, known as intra-class comparisons, and another range of values when comparing templates created from different irises, known as inter-class comparisons. These two cases should give distinct and separate values, so that a decision can be made with high confidence as to whether two templates are from the same iris, or from two different irises.
-
Hamming Distance
Daugman devised a test of statistical independence between two iris codes [2], and this has been implemented by many other authors, including Masek [8] and Monro [4]. The Hamming distance (HD) between the two irides to be compared is calculated. The HD measures the fraction of bits that disagree between two binary bit patterns. A decision criterion is derived from the distribution of HDs between codes of the same iris and the distribution of HDs between different irides; the overlap of these distributions determines the criterion. If the calculated HD between two images falls below the decision criterion, the irides are taken to be from the same person; if it is higher, they are from different people. For comparing two iris codes, a nearest-neighbour approach is taken, where the distance between two feature vectors is measured using the product of sums of the individual sub-feature Hamming distances:

HD = \left( \prod_{m=1}^{M} \frac{1}{N} \sum_{n=1}^{N} A_{mn} \oplus B_{mn} \right)^{1/M} \qquad (2)

Here, the iris code is considered a rectangular block of size M × N, M being the number of bits per sub-feature and N the total number of sub-features in a feature vector. Corresponding sub-feature bits are XORed, and the resulting N-length vector is summed and normalized by dividing by N. This is done for all M sub-feature bit positions, and the geometric mean of these M sums gives the normalized HD, which lies in the range 0 to 1. For a perfect match, where every bit of Feature 1 matches the corresponding bit of Feature 2, all M sums are 0 and so is the HD, while for a complete opposite, where every bit of the first feature is reversed in the second, all M normalized sums equal N/N = 1 and the final HD is 1.
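The following sketch is a direct transcription of the description above, assuming the two iris codes are stored as M × N arrays of bits:

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Normalized Hamming distance between two M x N iris codes.

    Each of the M rows holds one bit position across the N sub-features.
    Row-wise, the codes are XORed, summed and divided by N; the geometric
    mean of the M normalized sums is the final distance in [0, 1].
    """
    diff = np.logical_xor(code_a, code_b)          # bit disagreements, M x N
    row_sums = diff.sum(axis=1) / diff.shape[1]    # one normalized sum per bit position
    return float(np.prod(row_sums) ** (1.0 / len(row_sums)))
```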
-
Euclidean Distance
This is another measure of the match between two images. The Euclidean distance between points p and q is the length of the line segment connecting them. In Cartesian coordinates, if p = (p1, p2, …, pn) and q = (q1, q2, …, qn) are two points in Euclidean n-space, then the distance from p to q (or from q to p) is given by

d(p, q) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + \cdots + (q_n - p_n)^2} \qquad (3)
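In code form, equation (3) reduces to a one-line computation on the two feature vectors (a trivial sketch):

```python
import numpy as np

def euclidean_distance(p, q):
    """Length of the line segment between feature vectors p and q (equation 3)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sqrt(np.sum((q - p) ** 2)))
```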
TABLE 1 COMPARISON FOR DIFFERENT METHODS

| Segmentation | Feature extraction | Accuracy | FAR | FRR |
|---|---|---|---|---|
| Daugman's IDO | Haar transform | 99.97% | 0.005 | 0.01 |
| Daugman's IDO | Gabor filter | 99.94% | 0.0065 | 0.013 |
| Hough transform | Haar transform | 99.93% | 0.01 | 0.02 |
| Hough transform | Gabor filter | 99.91% | 0.012 | 0.024 |
-
-
Conclusion
The iris recognition system was tested on two databases, Bath and MMU. Initially, preprocessing is carried out and the iris region is localized with Daugman's integro-differential operator and the Hough transform. Normalization is then carried out by implementing a version of Daugman's rubber sheet model, which eliminates dimensional inconsistencies between iris regions and also removes eyelid, eyelash and reflection areas. The features of the iris are then encoded by convolving the normalized iris region with the Haar wavelet and the Gabor filter and phase-quantizing the output to produce a bit-wise biometric template. The Hamming distance and the Euclidean distance are chosen as matching metrics, measuring how many bits disagree between two templates.
The accuracy obtained by the different algorithms is shown in Table 1. Segmentation by Daugman's integro-differential operator, followed by the Haar transform for feature extraction and matching by the Hamming distance, gives the maximum accuracy of 99.97%. The FAR and FRR were also calculated over a number of experiments and are compared for the different algorithms in Table 1.
References
[1] J. Daugman, "How iris recognition works," IEEE Trans. on Circuits and Systems for Video Technology, vol. 14, pp. 21-30, 2004.
[2] J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 15, pp. 1148-1161, 1993.
[3] J. Daugman, "New methods in iris recognition," IEEE Trans. on Systems, Man, and Cybernetics, Part B, vol. 37, pp. 1167-1175, 2007.
[4] D. M. Monro, S. Rakshit and Dexin Zhang, "DCT-based iris recognition," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 29, pp. 586-595, 2007.
[5] R. P. Wildes, "Iris recognition: an emerging biometric technology," Proc. IEEE, vol. 85, pp. 1348-1363, 1997.
[6] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Trans. on Signal Processing, vol. 46, pp. 1185-1188, 1998.
[7] L. Ma, T. Tan, Y. Wang and D. Zhang, "Personal identification based on iris texture analysis," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 25, pp. 1519-1533, 2003.
[8] L. Masek, "Recognition of human iris patterns for biometric identification," 2003. [Online]. Available: http://www.csse.uwa.edu.au/~pk/studentprojects/libor/LiborMasekThesis.pdf
[9] J. Cui, Y. Wang, T. Tan, L. Ma and Z. Sun, "A fast and robust iris localization method based on texture segmentation," in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, pp. 401-408, 2004.
[10] C. Tisse, L. Martin, L. Torres and M. Robert, "Person identification technique using human iris recognition," in Proceedings of Vision Interface, pp. 294-299, 2002.
[11] W. K. Kong and D. Zhang, "Accurate iris segmentation based on novel reflection and eyelash detection model," in Proceedings of the 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, pp. 263-266, 2001.
[12] L. R. Kennell, R. W. Ives and R. M. Gaunt, "Binary morphology and local statistics applied to iris segmentation for recognition," in IEEE International Conference on Image Processing, pp. 293-296, 2006.
[13] J. De Mira Jr. and J. Mayer, "Image feature extraction for application of biometric identification of iris - a morphological approach," in XVI Brazilian Symposium on Computer Graphics and Image Processing, pp. 391-398, 2003.