Novel and Robust Iris Image Segmentation Method: Introduction to A New Feature for Iris Biometric Matching

DOI: 10.17577/IJERTV2IS60945


Rupsa Bhattacharjee

Student of M.E. in Biomedical Engineering, Jadavpur University, Kolkata-700032

Abstract

The iris is one of the most efficient biometric features used for identity matching. In this work, a novel algorithm is developed which takes raw iris images as input. It makes innovative use of the LPG-PCA denoising algorithm to filter out the noise in the raw image. A new combination of morphological top-hat and bottom-hat filtering is used to enhance the contrast of the image, so that the iris features are enhanced over the error parts, i.e. eyelids and eyelashes. In the segmentation module, iris localization is first performed to eliminate the effect of the error parts and to focus on the iris portion itself. A very simple concept, known as bits plane slicing, is then used to code a gray scale image into eight bit planes. The most significant (8th) plane successfully extracts only the iris location, eliminating all other features. The algorithm produces and calculates a new feature for biometric matching, referred to as the iris effective area. From the segmented iris image, the number of black pixels is calculated, which measures the iris area in terms of pixels and can be used for identity matching. The algorithm has very low time complexity and relies on simple but robust concepts compared with the established Daugman and Wildes methods, yet produces effective results. These segmentation results serve as a base platform before entering the iris pattern matching modules.

  1. Introduction

The coloured part of the eye is called the iris. It controls light levels inside the eye similar to the aperture on a camera. The round opening in the centre of the iris is called the pupil [1]. The iris is embedded with tiny muscles that dilate (widen) and constrict (narrow) the pupil size. The sphincter muscle lies around the very edge of the pupil. In bright light, the sphincter contracts, causing the pupil to constrict. The dilator muscle runs radially through the iris, like spokes on a wheel. This muscle dilates the pupil in dim lighting.

The iris is flat and divides the front [1] of the eye (anterior chamber) from the back of the eye (posterior chamber). Its colour comes from a microscopic pigment called melanin, contained in pigment cells. The colour, texture, and patterns of each person's iris are as unique as a fingerprint.

    Formation of the unique patterns of the iris is random and it is not related to any genetic factors [2-5]. The only characteristic that is dependent on genetics is the pigmentation of the iris, which determines its colour. Due to the epigenetic nature of iris patterns, the two eyes of an individual contain completely independent iris patterns, and identical twins possess uncorrelated iris patterns.

The term biometrics refers to the identification of an individual based on his/her physical or behavioural characteristics [6-7]. Iris recognition has rapidly become one of the most researched biometric topics due to its high potential in practical applications, and is probably one of the most reliable biometric identification methods [7-12]. Iris biometrics makes use of the highly rich and discriminative texture information contained in the annular region between the dark pupil and white sclera [6]. For the following reasons, iris recognition has become one of the key features of biometrics [2]:

    • Its error rate is extremely low.

• Extremely data-rich physical structure.

    • Iris is a permanent biometric (patterns apparently stable throughout life).

    • User acceptability is reasonable.

    • Real time biometric verification.

• Physical protection by a transparent window (the cornea); highly protected as an internal organ of the eye.

• Externally visible, so non-invasive; patterns imaged from a distance.

• Genetic independence; no two eyes are the same.

• The iris of a person does not change with facial expressions.

      Figure 1: Sample Human Iris Images

Since the iris is such an important biometric feature, its segmentation becomes one of the major processing steps [6] in an iris recognition task. The main goal of the iris segmentation step is to determine the valid region of the iris for recognition purposes [13]. Basically, this region is delimited by the pupil and the sclera. The segmentation often includes errors such as upper and lower eyelids, eyelashes, light reflections, shadows, etc. Therefore, the segmentation module must also eliminate the errors caused by eyelids, eyelashes, shadows and light reflections [6]. The quality of the adopted iris segmentation method directly affects the overall iris recognition performance: it determines the quality of the extracted iris features used for recognition and, because segmentation is the most time-consuming module in an iris recognition system, it also determines whether a biometric response can be delivered in real time [6].

  2. Brief Overview of Previous Works

Over the years, many researchers have developed various kinds of iris segmentation algorithms. Among the many research works focusing on iris segmentation approaches, there are two basic well-known algorithms, developed respectively by Daugman [8] and Wildes [14].

In [8], Daugman applied an integro-differential operator to delimit the circular boundaries of the iris. In [14], Wildes used the Hough transform to locate the iris boundaries. Both algorithms perform well in terms of iris segmentation, but since they involve computationally complex theories and methods, they are highly time-consuming. Cases are also often found where the appropriate elimination of eyelashes and other error sources is neglected.

In order to overcome these challenging drawbacks, many subsequent research works initially attempted to improve or optimize the methods based on circular modelling [15-18]. Deeper investigations focusing on occlusion detection [6], such as eyelid, eyelash and specular reflection detection, were also reported [19, 20]. Most approaches for occlusion detection were based on edge detection of the obstructing object [6]. More recently, some scientific groups have concentrated their research on so-called non-ideal iris images [6], for instance non-circular and non-concentric iris images [13, 21-26].

The iris feature extraction process is roughly divided into three major categories: phase-based methods [8], [27-30], zero-crossing representation [31] and texture-analysis-based methods [32-35]. The well-known phase-based methods for feature processing are the Gabor wavelet and the Log-Gabor wavelet. The 1-D wavelet is used for the zero-crossing representation. The Laplacian of Gaussian filter and Gaussian-Hermite moments are used in the texture-analysis-based methods.

But in all such algorithms, the basic problem of high time complexity remains. The iris segmentation module is meant to be attached to the iris recognition modules in real-time identity-checking applications, so the time complexity problem causes a significant loss of time in such real-time cases.

  3. Developed Methodology

In this work, an automated iris segmentation module is developed which takes as input raw iris images acquired from real-time monitoring. Including basic image pre-processing features and an algorithm specially designed for iris image pre-processing, this automated module aims to segment the iris location independently with very low time complexity.

This work introduces the calculation of a new feature from iris images, i.e. the iris area. Here the term area indicates the effective area, which is free from all types of errors such as the presence of eyelids and eyelashes. This area can serve as a new feature vector in real-time matching applications.

    Figure 2: Flowchart of Developed Method

4. Image Pre-Processing & Enhancement

      1. Image Denoising based on LPG-PCA Algorithm

Normally, iris images acquired with digital cameras are noisy due to various noise sources such as Gaussian, Rayleigh, impulse and salt-and-pepper noise. In biometric identification, this noise may cause severe non-matching errors even for the same person. Therefore, various noise reduction methods have been used in practice over the years to remove noise from such images. Initially there were mean, median, averaging and smoothing filters for image de-noising. Over time, frequency-domain, wavelet and ridgelet based methods have also been used [36]; these follow a fixed structural basis for filtering, and if that basis does not match the type of noise present, the filtering fails. For example, wavelet-based methods always use a wavelet basis, which is essentially fixed. So a special type of de-noising is needed which does not depend on any fixed basis; it should be application-specific for iris images and able to remove any type of noise that can disturb the normal appearance of the image.

In this work, a two-stage LPG-PCA method proposed in [37] is applied for de-noising of the raw iris images.

PCA uses an orthogonal transformation matrix to completely de-correlate the centralized data matrix. The energy of the image is concentrated in a small subset of the PCA-transformed components, whereas the noise spreads over all of them; the noise can therefore be removed by shrinking the transformed coefficients, which improves the signal-to-noise ratio. A second-stage refinement further improves the PSNR.
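To see why this energy compaction removes noise, each PCA-transformed coefficient can be modelled as y_k = x_k + v_k (signal plus noise of variance σ²). A minimal sketch of the LMMSE shrinkage rule employed in [37], with the signal variance estimated from the grouped blocks, is

x̂_k = [ max(σ_y,k² − σ², 0) / ( max(σ_y,k² − σ², 0) + σ² ) ] · y_k,

where σ_y,k² is the variance of the k-th transformed coefficient. Components dominated by noise receive weights close to zero and are suppressed, while the few high-energy components that carry the image structure are kept almost unchanged.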

        Figure 3: LPG Algorithm

        This method consists of basic steps as mentioned below:

a) Modelling of spatially adaptive PCA de-noising

b) Grouping of local pixels

c) LPG-PCA based de-noising

d) De-noising in the second stage

There is an amount of residual noise after step c). There are mainly two reasons for this residual noise.

1. Because of the strong noise in the original dataset Yv, the covariance matrix is heavily corrupted by noise, which leads to estimation bias [37] in the PCA transformation matrix and deteriorates the denoising performance.

2. The strong noise in the original dataset will also lead to LPG errors, which result in estimation bias of the covariance matrix. Therefore, it is necessary to further process the denoising output for better noise reduction.

Since the noise has been largely removed in the first round of LPG-PCA denoising, the LPG accuracy and the estimation of the covariance matrix can be much improved using the denoised image [36-37]. Thus the LPG-PCA denoising procedure can be applied for a second round to enhance the denoising result, as done in step d). The flowchart of the applied method is shown below:

            Figure 4: Flowchart of LPG-PCA Denoising
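To make the procedure concrete, a minimal single-stage sketch of LPG-PCA denoising is given below. This is an illustrative MATLAB implementation written for this description, not the code of [37] or of this work; the block size K, the training window L, the number of grouped blocks nSel and the noise standard deviation sigma are assumed parameters, and border pixels are left unprocessed.

function Id = lpg_pca_denoise(I, sigma, K, L, nSel)
    % I: noisy grey-scale image, sigma: noise std, K: block size (odd),
    % L: training window size (odd), nSel: number of grouped blocks kept.
    I  = double(I);
    [rows, cols] = size(I);
    Id = I;                                      % border pixels stay unprocessed
    hK = floor(K/2);
    hL = floor(L/2);
    for i = 1+hL : rows-hL
        for j = 1+hL : cols-hL
            cb = I(i-hK:i+hK, j-hK:j+hK);        % central K x K block
            cv = cb(:);
            % ----- Local Pixel Grouping (LPG) -----
            [ci, cj] = ndgrid(i-hL+hK:i+hL-hK, j-hL+hK:j+hL-hK);
            ci = ci(:);  cj = cj(:);
            X = zeros(K*K, numel(ci));
            d = zeros(numel(ci), 1);
            for t = 1:numel(ci)
                b = I(ci(t)-hK:ci(t)+hK, cj(t)-hK:cj(t)+hK);
                X(:,t) = b(:);
                d(t)   = mean((b(:) - cv).^2);   % similarity to the central block
            end
            [~, idx] = sort(d);                  % most similar blocks first (central block has distance 0)
            X = X(:, idx(1:min(nSel, numel(idx))));
            % ----- PCA transform and shrinkage of the noisy coefficients -----
            m  = mean(X, 2);
            Xc = bsxfun(@minus, X, m);           % centralised data matrix
            C  = (Xc * Xc') / size(Xc, 2);       % covariance of the noisy data
            [P, D] = eig(C);                     % orthogonal PCA basis
            Y  = P' * Xc;                        % PCA-transformed coefficients
            vx = max(diag(D) - sigma^2, 0);      % estimated signal variances
            w  = vx ./ (vx + sigma^2);           % LMMSE shrinkage weights
            Xh = P * bsxfun(@times, Y, w);       % shrink and transform back
            Xh = bsxfun(@plus, Xh, m);
            Id(i,j) = Xh(ceil(K*K/2), 1);        % denoised central pixel
        end
    end
end

In the two-stage scheme, the same routine would be run a second time on the first-stage output with the residual noise level re-estimated, as described in [37]; for example, Id = lpg_pca_denoise(I, 10, 3, 15, 40) followed by a second call with the updated sigma.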

      2. Image Enhancement based on Morphological Filtering

Top-hat and bottom-hat filtering are two morphological filtering procedures that are applied to an image together with an argument called a structuring element. In this work, the structuring element is a disc-shaped object. Top-hat and bottom-hat filtering can be used together [38] to improve the contrast and visibility of the image:

Contrast Enhanced Image = (Original Image + Top-Hat Filtered Image) − Bottom-Hat Filtered Image
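As an illustration, this enhancement can be reproduced in MATLAB (Image Processing Toolbox) roughly as follows; the input file name and the disc radius are assumed values chosen for the sketch, not parameters reported in this work.

I = imread('iris_sample.png');              % assumed input file name
if size(I, 3) == 3, I = rgb2gray(I); end    % work on the grey-scale image
se = strel('disk', 12);                     % disc-shaped structuring element (radius assumed)
Itop = imtophat(I, se);                     % bright details smaller than the disc
Ibot = imbothat(I, se);                     % dark details smaller than the disc
Ienh = imsubtract(imadd(I, Itop), Ibot);    % (original + top-hat) - bottom-hat
figure, imshow(Ienh), title('Contrast-enhanced image');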

  5. Segmentation of Iris Target Location

    1. Iris Localization

First, the purpose is to localize the portion of the acquired image that corresponds to the iris. In particular, it is necessary to localize the portion of the image lying inside the limbus [2] (the border between the sclera and the iris) and outside the pupil. This helps to eliminate unnecessary parts such as eyelashes and eyelids.

      Desired characteristics of iris localization [2]:

      • Sensitive to a wide range of edge contrast

      • Robust to irregular borders

      • Capable of dealing with variable occlusions

    2. Bits Plane Slicing Technique

In this work, a technique based on decomposing an image into its bit planes, presented and proposed in [39], is used to segment the target iris location. In 2008, Hernandez et al. [39] proposed this Bits Plane Technique, which can be used as a basic tool for image segmentation operations. In this technique, an intensity image can be separated into eight bit planes, as shown in Figure 5. The intensity of each pixel in a gray scale image takes a value between 0 and 255, which is converted into a byte of 8 bits. Depending on the intensity value of each pixel in the original image, there will be a one in one, some, or all of the bit planes. By checking the whole image, eight images appear that correspond to the eight bit planes, from the least significant B0 to the most significant B7.

To separate the complete image into bit planes (as shown in Figure 5), a simple algorithm can be implemented; it is described [39] in the following steps:

      Figure 5: Bits Plane Slicing Scheme

      1. Start with the first pixel on the image: row = [i], column= [j];

      2. The intensity level of the pixel is stored: Value;

      3. If Value=0, the least significant plane has a zero: b (0) = 0;

4. If Value ≠ 0, the planes that have ones must be determined:

a. While Value ≥ 1:

b. The half of Value is calculated: P = Value/2;

c. P is rounded towards zero and stored as Rounded;

d. The difference between P and Rounded is calculated: Gap = P − Rounded;

e. If Gap = 0, the plane gets a zero: b(q) = 0;

f. If Gap ≠ 0, the plane gets a one: b(q) = 1;

g. Value is adjusted: Value = Rounded;

h. Move to the next most significant plane: q = q + 1;

      5. Steps 4.a) to 4.h) are repeated until Value <1;

6. Zeros and ones are placed in the different image planes:

If b(0) = 0, B0(i, j) = 0;

If b(0) = 1, B0(i, j) = 1;

If b(1) = 0, B1(i, j) = 0, and so on, up to: if b(7) = 1, B7(i, j) = 1.

      7. Auxiliary variables are reset and the Value of the next pixel is taken as input;

      8. Steps 3 to 7 are repeated until the whole image is checked.
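For instance, for a pixel with intensity Value = 150 the loop gives b(0) = 0, b(1) = 1, b(2) = 1, b(3) = 0, b(4) = 1, b(5) = 0, b(6) = 0 and b(7) = 1 (the binary code 10010110 of 150), so this pixel contributes a one to planes B1, B2, B4 and B7 and a zero to all the other planes.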

The images corresponding to the different bits, from the least significant to the most significant, will be represented by the matrices B0(i,j), B1(i,j), …, B7(i,j), respectively.

Here in this work, a similar method is used, concentrating on the most significant plane, i.e. B7. It is seen from a number of experimental checks that the most significant 8th plane, i.e. the B7 plane shown in Figure 5.1, is the most important plane and contains the highest amount of localized intensity information. This plane is used for automatic iris detection, as illustrated below.
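As a compact illustration, the same planes can be obtained in MATLAB with bitget, which is a shortcut for the bit-extraction loop listed above; the snippet assumes the enhanced image is available as an 8-bit matrix (here called Ienh).

I8 = uint8(Ienh);                      % 8-bit grey-scale image, values 0-255
B  = false([size(I8), 8]);             % B(:,:,k+1) holds bit plane Bk
for k = 0:7
    B(:,:,k+1) = bitget(I8, k+1) == 1; % k-th bit of every pixel (k = 0 is the LSB)
end
B7 = B(:,:,8);                         % most significant plane, used for iris detection
figure, imshow(B7), title('Bit plane B7');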

    3. Flowchart of the Segmentation Module

Figure 6: Iris Segmentation Module

7. Experimental Results

In this work, I have implemented the approach in MATLAB on a Windows 7 system with a 2 GHz Core 2 Duo processor and 2 GB RAM. MATLAB 7.6.0 (R2008a) is used for execution. The approach has been tested with 1000 images from the Bath [40], 1800 images from the UBIRIS [41] and 450 images from the MMU [42] iris image databases.

    Sample Image 1

    Figure 7.a) Sample 1: Noisy Data Input

Figure 7.b) Sample 1: Denoised by LPG-PCA

    Figure 7.c) After Morphological Pre-processing

    Figure 7.d) Sample 1: Iris Localization done on 7.c)

    Figure 7.e) Sample 1: Iris Segmented Output Image

    Sample Image 2

    Figure 8.a) Sample 2: Noisy Data Input

    Figure 8.b) Sample 2: Denoised by LPG-PCA

    Figure 8.c) After Morphological Pre-processing

    Figure 8.d) Sample 2: Iris Localization done on 8.c)

    Figure 8.e) Sample 2: Iris Segmented Output Image

8. Calculation of Iris Area

As mentioned earlier, the developed processing algorithm is able to extract the effective iris area from the segmented iris images. Since the segmented image is a binary image containing only the segmented iris portion, the iris area can easily be calculated from the output image as shown below; only the black-coloured area is taken into account. The iris area is expressed in terms of the number of pixels. The main condition is that the segmented images must be resized to the same fixed frame; based on that particular frame, the pixel area of the iris is calculated and then compared.

Therefore, Effective Iris Area = Number of Black Pixels in the segmented image. For example, the sample segmented images 7.e) and 8.e) shown above are of different sizes. To compare their pixel areas, both segmented images are resized to the same frame of 376 × 300 pixels.
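A minimal MATLAB sketch of this step is shown below. It assumes the binary segmented output is stored in a variable seg with the iris region in black (logical 0), and that the 376 × 300 frame corresponds to [rows, columns] = [300, 376]; both are assumptions made for the sketch.

frame = [300 376];                        % assumed [rows columns] of the fixed frame
segR  = imresize(seg, frame, 'nearest');  % nearest-neighbour keeps the image binary
irisArea = nnz(~segR);                    % effective iris area = number of black pixels
fprintf('Effective iris area: %d pixels\n', irisArea);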

Now, compared in this common image frame, the iris area for the sample Image 1 (Figure 7.e)) = 86171 pixels, and the iris area for the sample Image 2 (Figure 8.e)) = 102561 pixels.

9. Conclusion

In this work, the developed algorithm has successfully pre-processed noisy raw iris images and segmented the effective iris area. The algorithm makes novel use of simple techniques, which reduces the time complexity. Compared with the other research works mentioned earlier, this algorithm deals with simple yet effective tools. The LPG-PCA scheme is a recently introduced denoising scheme which successfully removes the noise present in the raw image. The innovative use of morphological filtering helps to enhance the contrast of the image in such a way that it becomes easy for the segmentation module to detect the iris area. In the segmentation module, iris localization is performed in order to eliminate the errors caused by the presence of eyelids and eyelashes. The bits plane slicing technique finds special application in the segmentation of the iris portion.

The 8th significant plane, which contains most of the localized intensity information, easily extracts the iris portion. In this work, the effective iris area has been found from human iris images as the number of black pixels present in the segmented image. Although some errors always influence the calculated area, in comparison with the large iris area the error is negligible. This produces a new feature for biometric matching: along with iris pattern template matching, a new iris area matching feature can be added to biometric modules for identity recognition.

10. References

[1] http://vision.about.com/od/eyeanatomy/g/Iris.htm

[2] A. A. Farag and S. Y. Elhabian, Iris Recognition, Computer Vision & Image Processing Laboratory, University of Louisville, www.cvip.uofl.edu

[3] L. Masek, Recognition of Human Iris Patterns for Biometric Identification, B.E. Thesis, The University of Western Australia, 2003.

[4] E. Wolff, Anatomy of the Eye and Orbit, 7th edition, H. K. Lewis & Co. Ltd, 1976.

[5] R. Wildes, Iris recognition: an emerging biometric technology, Proceedings of the IEEE, 85(9), 1997.

[6] L. L. Ling and D. F. de Brito, Fast and Efficient Iris Segmentation, Journal of Medical and Biological Engineering, 30(6): 381-392.

[7] A. K. Jain, A. Ross and S. Prabhakar, An introduction to biometric recognition, IEEE Trans. Circuits and Systems for Video Technology, 14: 4-20, 2004.

[8] J. Daugman, Statistical richness of visual phase information: update on recognizing persons by iris patterns, Int. J. Comput. Vis., 45: 25-38, 2001.

[9] J. Daugman, How iris recognition works, IEEE Trans. Circuits and Systems for Video Technology, 14: 21-30, 2004.

[10] K. W. Bowyer, K. Hollingsworth and P. J. Flynn, Image understanding for iris biometrics: a survey, Comput. Vis. Image Underst., 110: 281-307, 2008.

[11] L. Yu, D. Zhang and K. Wang, The relative distance of key point based iris recognition, Pattern Recognit., 40: 423-430, 2007.

[12] C. Tisse, L. Martin, L. Torres and M. Robert, Person identification technique using human iris recognition, Proc. 15th Int. Conf. Vision Interface, 294-299, 2002.

[13] J. Daugman, New methods in iris recognition, IEEE Trans. Syst. Man Cybern. Part B-Cybern., 37: 1167-1175, 2007.

[14] R. P. Wildes, Iris recognition: an emerging biometric technology, Proc. IEEE, 85: 1348-1363, 1997.

[15] X. M. Liu, K. W. Bowyer and P. J. Flynn, Experiments with an improved iris segmentation algorithm, Proc. 4th IEEE Workshop on Automatic Identification Advanced Technologies, 118-123, 2005.

[16] X. Feng, C. Fang and Y. Wu, Iris localization with dual coarse-to-fine strategy, Proc. 18th Int. Conf. Pattern Recognit., 4: 553-556, 2006.

[17] D. H. Cho, K. R. Park and D. W. Rhee, Real-time iris localization for iris recognition in cellular phone, Proc. 6th Int. Conf. Software Eng., Artif. Intell., Netw. Parallel Distrib. Comput. and 1st ACIS Int. Workshop Self-Assembling Wireless Networks (SNPD/SAWN), 254-259, 2005.

[18] E. Trucco and M. Razeto, Robust iris location in close-up images of the eye, Pattern Anal. Appl., 8: 247-255, 2005.

[19] W. Kong and D. Zhang, Accurate iris segmentation based on novel reflection and eyelash detection model, Proc. Int. Symp. Intelligent Multimedia, Video and Speech Processing, 263-266, 2001.

[20] J. Huang, Y. Wang and J. Cui, A new iris segmentation method for recognition, Proc. 17th Int. Conf. Pattern Recognit., 554-557, 2004.

[21] H. Proença and L. A. Alexandre, Iris segmentation methodology for non-cooperative recognition, IEE Proc. Vision, Image and Signal Processing, 153: 199-205, 2006.

[22] E. M. Arvacheh and H. R. Tizhoosh, Iris segmentation: detecting pupil, limbus and eyelids, Proc. Int. Conf. Image Proc., 2453-2456, 2006.

[23] A. Abhyankar and S. Schuckers, Active shape models for effective iris segmentation, Proc. SPIE-Biometric Technology for Human Identification, 6202, 2006.

[24] V. Dorairaj, N. A. Schmid and G. Fahmy, Performance evaluation of non-ideal iris based recognition system implementing global ICA encoding, Proc. Int. Conf. Image Proc., 285-288, 2005.

[25] S. Rakshit and D. M. Monro, Pupil shape description using Fourier series, Proc. IEEE Workshop Signal Processing Applications for Public Security and Forensics, 1-4, 2007.

[26] X. Li, Modelling intra-class variation for non-ideal iris recognition, in: D. Zhang and A. K. Jain (Eds.), Lecture Notes in Computer Science (LNCS 3832): Int. Conf. on Biometrics, Berlin Heidelberg: Springer-Verlag, 419-427, 2005.

[27] S. Dey and D. Samanta, Improved Feature Processing for Iris Biometric Authentication System, International Journal of Electrical and Electronics Engineering, 4(2): 127-134, 2010.

[28] J. G. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(11): 1148-1161, November 1993.

[29] J. Daugman, Iris recognition, American Scientist, 89: 326-333, July-August 2001.

[30] J. Daugman, How iris recognition works, IEEE Transactions on Circuits and Systems for Video Technology, 14(1): 21-30, 2004.

[31] W. W. Boles and B. Boashash, A human identification technique using images of the iris and wavelet transform, IEEE Transactions on Signal Processing, 46(4): 1185-1188, 1998.

[32] M. Vatsa, R. Singh and A. Noore, Reducing the false rejection rate of iris recognition using textural and topological features, International Journal of Signal Processing, 2(2): 66-72, 2005.

[33] J. M. H. Ali and A. E. Hassanien, An iris recognition system to enhance e-security environment based on wavelet theory, AMO – Advanced Modelling and Optimization Journal, 5(2): 93-104, 2003.

[34] S. Lim, K. Lee, O. Byeon and T. Kim, Efficient iris recognition through improvement of feature vector and classifier, ETRI Journal, 23(2): 61-70, June 2001.

[35] R. P. Wildes, Iris recognition: an emerging biometric technology, Proceedings of the IEEE, 85(9): 1348-1363, September 1997.

[36] R. Bhattacharjee and M. Chakraborty, LPG-PCA algorithm and selective thresholding based automated method: ALL & AML blast cells detection and counting, Proc. CODIS 2012 (International Conference on Communications, Devices & Intelligent Systems), Jadavpur University, 28-29 December 2012, IEEE, ISBN 978-1-4673-4698-6, IEEE Catalog Number CFP1207U-CDR.

[37] L. Zhang, W. Dong, D. Zhang and G. Shi, Two-stage image denoising by principal component analysis with local pixel grouping, Pattern Recognition, 43(4): 1531-1549, April 2010.

[38] R. Bhattacharjee and M. Chakraborty, Image enhancement algorithm developed for detecting retinal vessel occlusion diseases: an alternative approach over angiograms, Proc. International Conference on Speech, Image, Biomedical & Information Processing, 1-2 November 2012.

[39] N. Ramirez Hernandez and J. L. Ramos Quirarte, Bits planes technique for digital image processing, Proc. 5th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE 2008), IEEE, 186-191, 2008.

[40] University of Bath iris image database, 2007. http://www.bath.ac.uk/elec-eng/research/sipg/irisw.

[41] H. Proença and L. A. Alexandre, UBIRIS: a noisy iris image database, in: F. Roli and S. Vitulano (Eds.), ICIAP 2005, Lecture Notes in Computer Science, vol. 3617, Springer, 970-977, 2005.

[42] Multimedia University iris image database. http://pesona.mmu.edu.my/ccteo/.
