Detection and Removal of Noises in Iris Recognition System- A Review

DOI : 10.17577/IJERTV2IS4314


1Amrita, 2Jaspreet Singh Cheema, 3Nirvair Neeru

1,2Student of M.Tech Computer Science, Punjabi University, Patiala

3Assistant Professor, Department of Computer Engineering, Punjabi University, Patiala

Abstract

Iris segmentation is an essential module in iris recognition because it defines the effective region used for feature extraction and is therefore directly related to recognition accuracy. Eyelids, eyelashes and shadows are three major challenges for effective iris segmentation. In this paper, we review several methods for localizing each of them. The first method develops a coarse-line to fine-parabola eyelid fitting scheme for accurate and fast eyelid localization, together with a prediction model that determines an appropriate threshold for eyelash and shadow detection. The second method consists of two parts: eight eyelid/eyelash models for occlusion detection, and iris image enhancement. The third method is a noise-removing approach based on the fusion of edge and region information, whose procedure includes three steps: 1) rough localization and normalization, 2) edge information extraction based on phase congruency, and 3) the fusion of edge and region information. The fourth method is an eyelash removal technique for the preprocessing of human iris images in an iris recognition system.

  1. Introduction

Biometrics technology plays an important role in the public-security and information-security domains. Various physiological characteristics of humans, such as the face, fingerprint, iris, retina and hand geometry, can be used for identification. Iris recognition in particular is an important biometric approach to human identification and has become a very active topic in both research and practical applications. The human iris is reputed to be the most accurate and reliable characteristic for person identification [7]. With the increasing demands of security in our daily life, systems for person recognition based on biometric features have broad applications in both commercial and security areas [8].

    1. Iris recognition system

Iris recognition is gaining acceptance as a robust biometric for high-security and large-scale applications. An iris recognition system includes iris capture, image pre-processing, feature extraction and matching. It can reliably identify a person by analyzing the patterns found in the iris, and the iris is so reliable as a form of identification because of the uniqueness of its pattern. While early work focused primarily on feature extraction, with great success, the preprocessing task has received less attention; however, the performance of a system is greatly influenced by the quality of the captured images [3].

An important issue involved in iris segmentation is the localization of eyelids, eyelashes and shadows (EES). The iris is almost always partially occluded by eyelids, eyelashes and shadows, which increases the risk of false acceptance and false rejection if they are not properly excluded. However, efficient EES localization is quite difficult. First, the shape irregularity of eyelids makes accurate eyelid localization challenging. Second, the variation in the intensity and amount of eyelashes and shadows (ES) across individual iris images often makes it hard to determine a proper threshold for ES detection. Although EES occlusion can be partially avoided by excluding a predefined EES region, this is insufficient and will inevitably cause a loss in recognition accuracy. Therefore, an efficient EES localization method is highly desirable [1]. Fig. 1 shows iris images with and without occlusion by eyelashes and eyelids [1].

Fig.1 (a) Iris image occluded by eyelashes, (b) Iris image without occlusion, (c) The eyelids occlude the upper and lower parts of the iris, (d) Without occlusion

    2. Factors that can affect the quality of iris images:

Eyelids, eyelashes and shadows (EES): occlusion by eyelids and eyelashes can degrade iris images during either enrolment or verification. Intensity: the variation in the intensity and amount of eyelashes and shadows (ES) across individual iris images often makes it hard to determine a proper threshold for ES detection [1]. This problem is especially serious for people with small eyes and dense eyelashes, such as many Chinese subjects, because the percentage of eyelash pixels classified as iris is large [4].

Improvement can be achieved through EES localization and eyelid localization [1]:

      1. EES localization:

Although EES occlusion can be partially avoided by excluding a predefined EES region, this is insufficient and will inevitably cause a loss in recognition accuracy; therefore, an efficient EES localization method is highly desirable.

      2. Eyelid localization:

Two things make effective eyelid localization difficult. One is the eyelash occlusion, and the other is the shape irregularity of eyelids.

As reported by Guangzhu Xu et al. (2006), real eyelid/eyelash areas can be detected by comparing the variation of every sub-block against each eyelid/eyelash model. For iris enhancement, the background illumination of the normalized iris image is estimated and subtracted from it; histogram equalization and Wiener filtering are then applied to enhance the normalized iris image. To evaluate the necessity of this method, an iris recognition algorithm based on a 1-D Gabor filter was developed.

D. Zhang, D. M. Monro et al. (2006) proposed a novel eyelash removal method for the preprocessing of human iris images in an iris recognition system. The method filters each occluded pixel along an axis perpendicular to the eyelash direction and accepts the filtered value if it changes by more than a certain threshold. This allows partially occluded regions of the iris, which would previously have been excluded, to be included in iris coding.

Richard Youmaran et al. (2008) proposed using the Hough transform for iris localization, together with an intensity-based gradient detection method that uses local region statistics of the image for eyelash detection.

A novel coarse-line to fine-parabola eyelid fitting scheme for accurate and fast eyelid localization was developed by Zhaofeng He et al. (2008). They used a prediction model to determine an appropriate threshold for eyelash and shadow detection.

Junzhou Huang, Yunhong Wang et al. (2004) proposed a new noise-removing approach based on the fusion of edge and region information. The whole procedure includes three steps: 1) rough localization and normalization, 2) edge information extraction based on phase congruency, and 3) the fusion of edge and region information.

  2. Various Models to be used:

  1. Eyelid Localization

An eyelid curvature model is statistically established to remove the noisy points. In earlier work, the authors proposed a method based on a 1-D rank filter to tackle the eyelashes. The eyelashes are mostly thin, vertical, dark lines and can therefore be weakened or even eliminated by a 1-D horizontal rank filter. After rank filtering, edge detection is performed on the resulting iris image along the vertical direction. Only one edge point is retained in each column so that most noisy edge points can be ignored. As a result, a raw eyelid edge map Eraw is obtained [1].

    Fig. 2 The raw eyelid edge map

    Fig. 3 Result of parabolic curve fitting

The shapes of eyelids possess a common arc structure. On joining the two intersection points (e.g. points A and B) of the upper eyelid and the two vertical lines bounding the iris (l1 and l2) with a straight line, all the genuine eyelid points should lie above this line [1].

This arc structure can be estimated and subtracted from the raw eyelid edge map Eraw; the result should resemble a straight line, which can easily be fitted with, for example, a simple line Hough transform [1].

    Once the eyelid curvature models are established, the upper eyelid is localized as follows [1]

    1. Calculate a raw eyelid edge map. Filter the iris image with a 1-D horizontal rank filter, and then perform vertical edge detection

2. Subtract Eupper from the detected Eraw; the result, Eraw − Eupper, should resemble a straight line.

3. Fit Eraw − Eupper with a line Hough transform. Only the points that are in accordance with the best-fitting line are retained as genuine eyelid points, while other points are eliminated. This strategy is called curvature noise elimination (CNE).

4. Fit the remaining points of Eraw with a parabolic curve (a minimal code sketch of this pipeline follows the figures below).

      Fig.4 The learned eyelid curvature model [1]

      Fig.5 Curvature noise elimination after line fitting on Eraw-Eup model [1]

      Fig.6 Parabolic curve fitting on Eraw

      after noise elimination [1]
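The steps above can be illustrated with a minimal Python sketch (steps 1 and 4 only; the curvature-model subtraction and line-Hough noise elimination of steps 2-3 are omitted). The image size, the rank-filter window and the edge-selection rule are illustrative assumptions, not parameters from [1].

import numpy as np
from scipy.ndimage import rank_filter

def raw_eyelid_edge_map(iris_img: np.ndarray) -> np.ndarray:
    """Return (col, row) candidate eyelid edge points, one per column."""
    # Step 1a: a 1-D horizontal rank filter weakens thin vertical eyelashes.
    filtered = rank_filter(iris_img.astype(float), rank=2, size=(1, 7))
    # Step 1b: vertical edge detection (intensity change along the rows).
    grad_v = np.abs(np.diff(filtered, axis=0))
    # Keep only the strongest edge response in each column (one point per column).
    rows = np.argmax(grad_v, axis=0)
    cols = np.arange(iris_img.shape[1])
    return np.stack([cols, rows], axis=1)

def fit_parabola(points: np.ndarray) -> np.ndarray:
    """Step 4: least-squares parabolic fit y = a*x^2 + b*x + c to the eyelid points."""
    x, y = points[:, 0], points[:, 1]
    return np.polyfit(x, y, deg=2)

# Usage with a synthetic image (steps 2-3, subtracting the learned curvature
# model Eupper and the line-Hough curvature noise elimination, are omitted).
img = (np.random.rand(120, 160) * 255).astype(np.uint8)
E_raw = raw_eyelid_edge_map(img)
a, b, c = fit_parabola(E_raw)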

      1.2. Eyelash and Shadow detection

      Divide the candidate iris region into two parts: ESfree and EScandidate. Then, the intensity histograms of both regions are calculated. If EScandidate region is occluded by eyelashes and shadows, its histogram should be different from that of ESfree region. We can estimate the amount of the ES occlusion according to the level of difference between the two histograms. Considering that eyelashes and shadows are usually the darkest points in the candidate iris region, we can easily get a proper detection threshold [1].

      Fig.7 Adaptive iris division

      The following fig. 8 shows the relationship between the amounts of ES occlusion and the difference level between histograms of ESfree and EScandidate (estimated on CASIA-IrisV3-Lamp image database) [1]

Fig.8 Relationship between the amount of ES occlusion and the difference level between the histograms of ESfree and EScandidate [1]

The χ² distance is adopted to measure the difference between the two histograms p1 and p2:

χ²(p1, p2) = Σi (p1,i − p2,i)² / (p1,i + p2,i)    (1)

The solid line in Fig. 8 is the prediction model, learned by fitting these raw data points with a cubic polynomial curve. According to this prediction model, an appropriate, adaptive threshold for ES detection is obtained; Fig. 9 shows the detection results [1].

Fig.9 The ES detection result
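A minimal sketch of the χ² comparison of equation (1), together with a hypothetical cubic prediction model as described for Fig. 8, is given below; the bin count, the region shapes and the polynomial coefficients are assumptions for illustration only.

import numpy as np

def chi_square_distance(p1: np.ndarray, p2: np.ndarray) -> float:
    """chi^2(p1, p2) = sum_i (p1_i - p2_i)^2 / (p1_i + p2_i), skipping empty bins."""
    num = (p1 - p2) ** 2
    den = p1 + p2
    mask = den > 0                      # avoid division by zero on empty bins
    return float(np.sum(num[mask] / den[mask]))

def histogram(region: np.ndarray) -> np.ndarray:
    h, _ = np.histogram(region, bins=256, range=(0, 256), density=True)
    return h

# es_free and es_candidate stand in for the two sub-regions of the candidate
# iris region shown in Fig. 7; here they are random placeholders.
es_free = np.random.randint(0, 256, (20, 360))
es_candidate = np.random.randint(0, 256, (20, 360))
d = chi_square_distance(histogram(es_free), histogram(es_candidate))

# The prediction model of Fig. 8 maps this distance to an intensity threshold;
# a fitted cubic polynomial (coefficients hypothetical) would be evaluated as:
coeffs = np.array([0.5, -1.2, 3.0, 40.0])      # hypothetical cubic fit
threshold = np.polyval(coeffs, d)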

  2. Eyelids, Eyelashes detection and Iris image enhancement

    The aim of eyelash or eyelid detection is to find the fittest eyelid/eyelash model for the iris image obstructed by eyelashes and eyelids [2]

    Fig.10 Iris location and normalization

The rectangular block with the smallest average intensity is considered to be part of the pupil. The inner boundary of the iris is then located by extending this rectangle to the pupil boundary, and the outer boundary of the iris is detected using a summary derivative. The iris is normalized to a radial resolution of 80 pixels and an angular resolution of 360 pixels, as shown in Fig. 10 [2].

After normalization, the iris image is unwrapped into a rectangular block of fixed size. Eyelid and eyelash areas are then detected using the eight eyelid/eyelash models shown in Fig. 11 [2].

    (a) N=0 (b) N=1 (c) N=2 (d) N=3

    (e) N=4 (f) N=5 (g) N=6 (h) N=7

    Fig.11 Eight eyelid/eyelash models

The unwrapped iris image is divided into eight blocks with a fixed size of 9×360, and eight sub-blocks with a fixed size of 9×90 are selected in each block, as shown in Fig. 12 [2].

    Fig.12 Sub blocks of eyelids/ eyelashes areas
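As a rough illustration of the sub-block division described above (the exact deviation formulas of equations (2) and (3) in [2] are not reproduced here), the sketch below cuts an assumed 80 × 360 unwrapped iris into 9 × 90 sub-blocks and computes each sub-block's intensity standard deviation, which would then be compared against a threshold as in equations (4) and (5).

import numpy as np

def subblock_deviations(unwrapped: np.ndarray, h: int = 9, w: int = 90) -> np.ndarray:
    """Return a grid of per-sub-block intensity standard deviations."""
    rows = unwrapped.shape[0] // h
    cols = unwrapped.shape[1] // w
    devs = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = unwrapped[r * h:(r + 1) * h, c * w:(c + 1) * w]
            devs[r, c] = block.std()
    return devs

unwrapped_iris = (np.random.rand(80, 360) * 255).astype(np.uint8)   # size assumed
devs = subblock_deviations(unwrapped_iris)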

The maximum deviation of each sub-block in the upper and lower parts of the iris is calculated using equations (2) and (3) of [2], where n = 0, 1, …, 7 indexes the eight sub-blocks. The largest n for which the deviation σ(n) is smaller than the threshold is taken as the sequence number of the eyelid/eyelash model (equations (4) and (5)) [2]:

Nup = max(arg(σup(n))), σup(n) < threshold    (4)

Nbottom = max(arg(σbottom(n))), σbottom(n) < threshold    (5)

If the variation of the 7th sub-block is lower than the threshold, the upper or lower part of the iris is not considered to be occluded by eyelids/eyelashes [2]. The Nth eyelid/eyelash model is then shifted left and right to find the best location to cover the eyelids/eyelashes (equation (6) of [2]).

Fig. 13 Eyelid/eyelash detection results: (a) Original normalized iris image, (b) Eyelid/eyelash detection results, (c) Eyelid/eyelash models

2.1. Iris enhancement

The processing steps are implemented as follows [2]:

1. Divide the normalized iris image into 320 sub-blocks with a fixed size of 9×9 and calculate the mean of each block to estimate the background illumination.

2. Expand this coarse estimate of the illumination to the same size as the normalized iris image using bicubic interpolation.

3. Subtract the background illumination from the iris image to obtain uniform brightness. To avoid negative pixel values, 80 is added to each pixel.

4. Apply a Wiener filter to eliminate the noise introduced by the capture device and the environment.


    Fig.14 (a) Initial normalized iris image, (b) Estimation of background illumination, (c) Enhanced iris image
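A minimal sketch of the four enhancement steps above; the normalized-iris size, the Wiener window and the use of OpenCV area/bicubic resizing to realize the block means and their expansion are assumptions, not the exact implementation of [2].

import numpy as np
import cv2
from scipy.signal import wiener

def enhance_iris(norm_iris: np.ndarray, block: int = 9) -> np.ndarray:
    h, w = norm_iris.shape
    img = norm_iris.astype(np.float32)
    # Step 1: area interpolation down to one value per 9x9 block approximates
    # the per-block mean, giving a coarse background-illumination estimate.
    bg = cv2.resize(img, (w // block, h // block), interpolation=cv2.INTER_AREA)
    # Step 2: expand the coarse estimate back to full size (bicubic interpolation).
    bg = cv2.resize(bg, (w, h), interpolation=cv2.INTER_CUBIC)
    # Step 3: subtract the background illumination; add 80 to avoid negative values.
    uniform = img - bg + 80.0
    # Step 4: Wiener filter to suppress capture-device/environment noise
    # (window size assumed).
    enhanced = wiener(uniform, mysize=(5, 5))
    return np.clip(enhanced, 0, 255).astype(np.uint8)

norm_iris = (np.random.rand(72, 360) * 255).astype(np.uint8)   # size assumed
out = enhance_iris(norm_iris)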

  3. Localization and normalization

To speed up iris segmentation, the iris is first roughly localized by filtering, edge detection and the Hough transform. The localized iris is then normalized to a rectangular block with a fixed size. Fig. 15 shows an example [5].

Fig. 15 (a) Original image, (b) Localized image, (c) Normalized image

3.1. Edge extraction based on phase congruency

Phase congruency is a dimensionless quantity that describes the significance of image features and is invariant to changes in intensity or contrast [5]. Kovesi represented it as follows:

PC2(x) = Σn W(x) ⌊An(x) ΔΦn(x) − T⌋ / (Σn An(x) + ε)    (7)

ΔΦn(x) = cos(φn(x) − φ̄(x)) − |sin(φn(x) − φ̄(x))|    (8)

where W(x) is a factor that weights for frequency spread, ε is incorporated to avoid division by zero, T is a threshold for estimating noise, and the symbol ⌊·⌋ denotes that the enclosed quantity is equal to itself when its value is positive and zero otherwise [5].

The edge information based on phase congruency is obtained with a bank of Log-Gabor filters, whose kernels are suitable for noise detection [5]. Each filter comprises two components, namely the radial filter component and the angular filter component. The radial filter has the transfer function

G(ω) = exp( −(log(ω/ω0))² / (2 (log(k/ω0))²) )    (9)

where ω0 represents the center frequency of the filter and k determines the bandwidth of the filter in the radial direction. The angular filter has the Gaussian transfer function

G(θ) = exp( −(θ − θ0)² / (2 (T·Δθ)²) )    (10)

where θ0 represents the orientation angle of the filter, T is a scaling factor, and Δθ is the orientation spacing between the filters [5].

3.2. The fusion of edge and region information

The pupil and eyelash noises are detected by fusing region (intensity) and edge (phase congruency) information (equations (11) and (12) of [5]): a weighted combination of the normalized intensity image f(x, y) (scaled by 255) and the phase-congruency map PC2(x, y), with weight W1, is thresholded by T1 to label noise pixels. Here, f(x, y) represents the normalized intensity image, PC2(x, y) gives the edge information based on phase congruency, W1 is used to adjust the relative importance of the two constraints, and T1 is a threshold [5]. After the pupil, reflection and eyelash noises are detected, the remaining edge information is the boundary between the iris and the eyelid noises, and it gives the restricted regions where eyelid noises exist. The Hough transform is then used in these restricted regions to accurately localize the eyelid noises, which speeds up the whole process [5].
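As an illustration of the Log-Gabor filter bank described in Section 3.1, the sketch below builds a single frequency-domain kernel from the radial term of equation (9) and the angular Gaussian term of equation (10). The grid size and the parameter values (ω0, k/ω0, θ0, T, Δθ) are illustrative assumptions, not values taken from [5].

import numpy as np

def log_gabor(shape, w0=0.1, k_over_w0=0.55, theta0=0.0, T=1.5, dtheta=np.pi / 6):
    rows, cols = shape
    # Normalized frequency coordinates, zero frequency at the grid centre.
    u = np.fft.fftshift(np.fft.fftfreq(cols))
    v = np.fft.fftshift(np.fft.fftfreq(rows))
    U, V = np.meshgrid(u, v)
    radius = np.sqrt(U ** 2 + V ** 2)
    radius[rows // 2, cols // 2] = 1.0          # avoid log(0) at DC
    theta = np.arctan2(V, U)

    # Radial component, equation (9): exp(-(log(w/w0))^2 / (2 (log(k/w0))^2)).
    radial = np.exp(-(np.log(radius / w0) ** 2) / (2 * np.log(k_over_w0) ** 2))
    radial[rows // 2, cols // 2] = 0.0          # zero response at DC

    # Angular component, equation (10), using the wrapped angular difference.
    dt = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-(dt ** 2) / (2 * (T * dtheta) ** 2))

    return radial * angular

kernel = log_gabor((64, 512))    # e.g. for a 64 x 512 normalized iris (assumed size)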

  4. Eyelash removal

In this work, the iris images are resampled into a rectangular 512 × 80 image. For processing, the 48 rows of pixels nearest the pupil are used, as shown in Fig. 15 [3].

    Fig.15 Iris images occluded by eyelashes

Figure 16 shows the proposed eyelash removal algorithm, based on nonlinear conditional directional filtering [3]: edges are detected in the acquired image; for each pixel, a working window of size [m n] is centred at the pixel and the variance of the edge-direction distribution, Var_Grad, is computed; if Var_Grad is below a threshold, nonlinear filtering is applied along the gradient direction, and the filtered value is kept only if the resulting intensity change exceeds a preset threshold, otherwise the filtering is undone; the procedure repeats until the last pixel is processed.

Fig.16 Flowchart of the eyelash removal algorithm [3]

4.1. Edge detection

In order to detect an eyelash, its direction must be estimated. A 3×3 Sobel edge filter is applied to the normalized image, as shown in Fig. 17 [3].

(a) X derivative        (b) Y derivative        (c) Neighbourhood labels
-1 -2 -1                -1  0  1                Z1 Z2 Z3
 0  0  0                -2  0  2                Z4 Z5 Z6
 1  2  1                -1  0  1                Z7 Z8 Z9

Fig.17 Sobel edge filter: (a) X-derivative mask, (b) Y-derivative mask, (c) the 3×3 image neighbourhood Z1–Z9

The estimated gradients in the X and Y directions are [Gx, Gy], and the magnitude of the gradient at the centre point of the mask, called Grad, is computed as [3]:

Gx = (Z7 + 2Z8 + Z9) − (Z1 + 2Z2 + Z3)
Gy = (Z3 + 2Z6 + Z9) − (Z1 + 2Z4 + Z7)
Grad = (Gx² + Gy²)^(1/2)


The local gradient direction is:

θ = arctan(Gy / Gx)    (13)

4.2. Eyelash Area Decision

To check whether a pixel is occluded, a window of size [m n] is centred at the pixel and a gradient-direction variance is computed over those r pixels in the window for which Grad > 15:

Var_Grad = (1/(r − 1)) Σi (θi − θ̄)²,   θ̄ = (1/r) Σi θi    (14)

If the gradient direction has a small variance, a strong edge is indicated, as can be seen in Fig. 18, and the pixel is considered to be affected by an eyelash [3].

Fig. 18 Gradient direction distribution: (a) Eyelash area, (b) Non-eyelash area [3]
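A minimal sketch of the eyelash-area decision of equations (13) and (14): Sobel gradients, the gradient direction, and the direction variance inside a window, gated by Grad > 15 as stated above. The window size and the variance threshold are assumptions.

import numpy as np
import cv2

def is_eyelash_pixel(img: np.ndarray, y: int, x: int,
                     m: int = 7, n: int = 7, var_threshold: float = 0.5) -> bool:
    # Gradients over the whole image (in practice they would be precomputed once).
    gx = cv2.Sobel(img.astype(np.float64), cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img.astype(np.float64), cv2.CV_64F, 0, 1, ksize=3)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    theta = np.arctan2(gy, gx)

    # Window of size [m n] centred at (y, x), clipped to the image borders.
    y0, y1 = max(0, y - m // 2), min(img.shape[0], y + m // 2 + 1)
    x0, x1 = max(0, x - n // 2), min(img.shape[1], x + n // 2 + 1)
    mask = grad[y0:y1, x0:x1] > 15          # keep only pixels with Grad > 15
    if mask.sum() < 2:
        return False
    angles = theta[y0:y1, x0:x1][mask]
    var_grad = angles.var(ddof=1)           # sample variance, as in equation (14)
    # A small variance means a strong, consistent edge direction -> eyelash pixel.
    return var_grad < var_threshold

img = (np.random.rand(48, 512) * 255).astype(np.uint8)
occluded = is_eyelash_pixel(img, 20, 100)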

4.3. Non-Linear filtering

For each pixel classified as an eyelash pixel, a 1-D median filter of length L is applied along the direction θ to estimate the value of the image with the eyelash removed, using bilinear interpolation of the four nearest pixels at each sample point. Because not every pixel is actually occluded by an eyelash, the intensity is only changed if the intensity difference exceeds a threshold related to the total variance of the image [3]:

Recover = Diff − k · Var(Image)    (15)

where Diff is the difference in intensity between the filtered and unfiltered pixel, Var(Image) is the intensity variance of the whole (unfiltered) image, and k is a parameter used to tune the threshold [3].

    If Recover is positive, the pixel is replaced by the filtered value, otherwise the filter is not applied [3]
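A minimal sketch of the conditional replacement rule of equation (15). The filter length L and the parameter k are assumptions, and the sampling along the filtering direction uses nearest pixels rather than the bilinear interpolation described above.

import numpy as np

def conditional_recover(img: np.ndarray, y: int, x: int,
                        theta: float, L: int = 9, k: float = 0.02) -> float:
    h, w = img.shape
    # Sample L points along the filtering direction through (y, x).
    offsets = np.arange(L) - L // 2
    ys = np.clip(np.round(y + offsets * np.sin(theta)).astype(int), 0, h - 1)
    xs = np.clip(np.round(x + offsets * np.cos(theta)).astype(int), 0, w - 1)
    filtered = np.median(img[ys, xs].astype(float))     # 1-D median filter

    diff = abs(filtered - float(img[y, x]))
    recover = diff - k * img.var()                      # equation (15)
    # Replace the pixel only when Recover is positive; otherwise keep it unchanged.
    return filtered if recover > 0 else float(img[y, x])

img = (np.random.rand(48, 512) * 255).astype(np.uint8)
new_value = conditional_recover(img, 20, 100, theta=np.pi / 3)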

  5. Comparison with other methods

Compared with Daugman's method (2007), the method of [1] is advantageous because a more refined division of the candidate iris region is used and the prediction model is more efficient in determining an appropriate ES detection threshold. The refinement further guarantees the accuracy of ES detection [1].

Fig. 19 Accuracy illustrations: (a) EESDaugman, (b) EESWildes, (c) EESLiu, (d) EESZhaofeng; (d) is compared with (a), (b) and (c)

EESZhaofeng obtains more accurate eyelid localization results than EESLiu because EESLiu uses only two simple lines to fit the eyelid, while EESZhaofeng uses more refined parabolic curve fitting. EESZhaofeng also slightly outperforms EESDaugman and EESWildes. This is because the integrodifferential operator in EESDaugman tends to be sensitive to local intensity changes, while the Hough transforms in EESWildes are brittle to noisy edge points. Under the proposed eyelid localization framework, the 1-D rank filter removes most of the eyelash noise and the curvature noise elimination scheme handles the shape irregularity very well, which together guarantee the localization efficiency [1].

In terms of the accuracy of eyelash and shadow detection, EESZhaofeng achieves better results than EESDaugman due to the efficient prediction model used to determine the threshold for eyelash and shadow detection. Shadow detection, for the first time, acts as an independent module in iris segmentation, which enables more precise labeling of the invalid iris region for the subsequent encoding and matching modules [1].

  6. Conclusion

Iris recognition receives more and more attention because of its high accuracy rate. Iris images are often occluded by eyelids, eyelashes and shadows, and if these noises are not removed the performance of an iris recognition system will be badly degraded. In this paper we have studied various models that detect and remove the noise caused by eyelashes, eyelids and shadows in iris images. Of the models studied, the prediction model of Zhaofeng He, Tieniu Tan et al. (2008) is the most suitable, because with its help all of the above-mentioned noises can be localized. Zhaofeng He's method outperforms state-of-the-art methods in both accuracy and speed and brings a significant improvement in iris recognition accuracy.

  7. References

  1. Zhaofeng He, Tieniu Tan, Zhenan Sun and Xianchao Qiu, Robust eyelid eyelash and shadow localization for iris recognition, 15th IEEE International Conference on image processing (ICIP) 12-15 Oct, pp. 265-268, 2008

2. Guangzhu Xu, Zaifeng Zhang, Yide Ma, Improving the performance of iris recognition system using eyelids and eyelash detection and iris image enhancement, 5th IEEE International Conference on Cognitive Informatics (ICCI), Vol. 2, pp. 871-876, 2006

3. D. Zhang, D. M. Monro and S. Rakshit, Eyelash removal method for human iris recognition, Proc. IEEE International Conference on Image Processing, Atlanta, Georgia, USA, Oct. 8-11, pp. 285-288, 2006

  4. Wai-Kin Kong and David Zhang, Eyelash detection model for accurate iris segmentation, 16th International Conference on Computers and their Applications (ISCA), 28-30 March, pp. 204-207, Seattle, Washington, USA, 2001

5. Junzhou Huang, Yunhong Wang, Tieniu Tan, Jiali Cui, A new iris segmentation method for recognition, 17th International Conference on Pattern Recognition (ICPR), 23-26 Aug., Vol. 3, pp. 554-557, 2004

  6. Richard Youmaran, L.P. Xie and Andy Adler, Improved identification of Iris and Eyelash features, 2008

7. Prateek Verma, Maheedhar Dubey, Praveen Verma, Somak Basu, Daughman's algorithm method for iris recognition - a biometric approach, International Journal of Emerging Technology and Advanced Engineering, Volume 2, Issue 6, pp. 177-185, June 2012

  8. Abdulsamad Ebrahim Yahya and Md Jan Nordin, Accurate iris segmentation method for non-cooperative iris Recognition System, Journal of Computer Science, Vol 6, No 5, pp. 492-497, 2010
