A Review on Feature Extraction Techniques of Iris

DOI : 10.17577/IJERTV2IS120972


R. B. Patil, R. R. Deshmukh

Department of Computer Science & IT, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad

Abstract

Biometrics is the study of the physical traits or behavioral characteristics of humans, including fingerprints, face, hand geometry, gait, keystrokes, voice and iris. Among these, the iris is a highly accurate and reliable characteristic. A general iris recognition system includes image acquisition, segmentation, normalization, feature extraction, and matching/classification. The performance of an iris-based biometric system depends on the selection of iris features. In this work the performance of various feature extraction methods for iris recognition is analyzed. The methods include the circular symmetric filter, Haar wavelets, and the lifting wavelet transform.

Keywords: Iris feature extraction, Haar wavelet, wavelet transform, symmetric filter.

Introduction

A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by that individual. Iris recognition is regarded as the most reliable and accurate biometric identification system available. In this paper we review feature extraction techniques for the iris.

The human iris has recently attracted the attention of the biometrics-based identification and verification research and development community. The iris is so unique that no two irises are alike in the entire human population, even among identical twins.

Basic steps of iris recognition

1. Image Acquisition
2. Segmentation
3. Normalization
4. Feature Extraction
5. Matching

Feature extraction

Feature extraction is a key process in which the two-dimensional iris image is converted into a set of mathematical parameters. The iris contains important unique features, such as stripes, freckles, coronas, etc. These features are collectively referred to as the texture of the iris and are extracted using various algorithms.

The iris has a particularly interesting structure and provides abundant texture information, so it is desirable to explore representation methods which can capture the local underlying information in an iris. From the viewpoint of texture analysis, the local spatial patterns in an iris mainly involve frequency information and orientation information. In experiments, however, we find that orientation is not a crucial factor when analyzing the characteristics of a small iris region such as a 10×10 region. That is, in a small iris region, frequency information accounts for the major differences between the irises of different people. We thus propose an effective scheme to capture this discriminating frequency information. Because the majority of the useful information in the iris lies in a specific frequency band, a bank of circular symmetric filters is constructed to capture it. For a preprocessed iris image, the texture of the iris becomes coarser from top to bottom, so we use filters at different frequencies for different regions of the image. A feature value is obtained from each small region in the filtered image, and the feature vector is the ordered collection of the features from all local regions. A detailed description of this method is presented below.

  1. Circular Symmetric Filter

    In the spatial frequency domain, we can extract the information of an image at a certain scale and at a certain orientation by using specific filters, such as multichannel Gabor filters. In recent years, Gabor filter based methods have been widely used in computer vision, especially for texture analysis. Gabor elementary functions are Gaussians modulated by oriented complex sinusoidal functions. Here, we utilize a circular symmetric filter (CSF), which is developed on the basis of the Gabor filter. The difference between the Gabor filter and the circular symmetric filter lies in the modulating sinusoidal function [2]: the former is modulated by an oriented sinusoidal function, whereas the latter is modulated by a circular symmetric sinusoidal function. A CSF is defined as follows:

    G(x, y, f) = (1 / (2π δx δy)) · exp[ -(x²/δx² + y²/δy²) / 2 ] · M(x, y, f)

    M(x, y, f) = cos( 2πf · √(x² + y²) )    (1)

    where M(x, y, f) is the modulating function, f is the frequency of the sinusoidal function, and δx and δy are the space constants of the Gaussian envelope along the x and y axes respectively. We can obtain a bandpass filter with a specific center frequency by setting the frequency parameter f. The choice of the parameters in Equation (1) is similar to that of the Gabor filter. The circular symmetric filter can capture the information of an image in a specific frequency band, but it cannot provide orientation information because of its circular symmetry.
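To make the construction concrete, the sketch below builds a circular symmetric filter kernel from Equation (1) and computes a simple per-region feature (the mean absolute filtered response) over non-overlapping blocks of a normalized iris image. This is a minimal illustration, not the original implementation of [2]; the kernel size, the space constants delta_x and delta_y, the block size and the mean-absolute-value statistic are assumptions chosen for clarity.

```python
import numpy as np
from scipy.signal import convolve2d

def csf_kernel(size, f, delta_x, delta_y):
    """Circular symmetric filter of Equation (1):
    Gaussian envelope modulated by cos(2*pi*f*sqrt(x^2 + y^2))."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    gauss = np.exp(-0.5 * (x**2 / delta_x**2 + y**2 / delta_y**2))
    gauss /= 2 * np.pi * delta_x * delta_y
    modulation = np.cos(2 * np.pi * f * np.sqrt(x**2 + y**2))
    return gauss * modulation

def region_features(norm_iris, f, delta, block=10):
    """Filter a normalized iris image and take one feature value
    (mean absolute response) from each non-overlapping block."""
    kernel = csf_kernel(size=15, f=f, delta_x=delta, delta_y=delta)
    filtered = convolve2d(norm_iris, kernel, mode='same', boundary='symm')
    h, w = filtered.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            feats.append(np.mean(np.abs(filtered[r:r + block, c:c + block])))
    return np.array(feats)

# Example: coarser texture lower in the image, so a lower filter frequency
# is used there; the per-region features are concatenated into one vector.
if __name__ == "__main__":
    norm_iris = np.random.rand(64, 512)     # stand-in for a real normalized iris
    upper = region_features(norm_iris[:32], f=0.20, delta=3.0)
    lower = region_features(norm_iris[32:], f=0.10, delta=5.0)
    feature_vector = np.concatenate([upper, lower])
    print(feature_vector.shape)
```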

  2. Lifting Wavelet Transform

The lifting scheme is an algorithm to calculate the wavelet transform in an efficient way. It is also a generic method to create so-called second generation wavelets [3].

Predict and Update: The lifting scheme is an efficient implementation of the filtering operations. At the j-th level, the input data set is transformed into two other sets: the low-resolution part Ej and the high-resolution part Fj. This is obtained first by simply splitting the data set into two separate subsets (usually called the lazy wavelet transform). The next step is to recombine these two sets in several subsequent lifting steps which decorrelate the two signals.

  • A dual lifting step can be seen as a prediction: the data Fj are "predicted" from the data Ej. When the signals are still highly correlated, such a prediction will usually be very good, and we need to store only the part of Fj that differs from its prediction. Thus Fj is replaced by Fj – P(Ej), where P represents the prediction operator.

  • However, the new representation has lost certain basic properties, for example the mean value of the signal. To restore this property, one needs a primal lifting step, whereby the set Ej is updated with data computed from the (new) subset Fj. Thus Ej is replaced by Ej + U(Fj), with U some updating operator.

Figure 2: Block diagram of the predict and update lifting steps.

Thus, the lifting scheme contains three steps to decompose a signal, namely Split, Predict and Update, as shown in Figure 2. The original signal is s[n]. It is transformed into the low-frequency approximation signal c[n] and the detail signal d[n].

  1. Split: In this step, the original signal s[n] is split into two non-overlapping subsets: se[n] (even sequence) and so[n] (odd sequence), that is

    se[n] = s[2n]

    so[n] = s[2n+1]    (2)

  2. Predict: If the original signal is locally coherent, the subsets se[n] and so[n] are also coherent, so one subset can be predicted from the other. Commonly the even sequence is used to predict the odd sequence,

    d[n] = so[n] – P(se)[n]    (3)

    where P is the prediction operator, which reflects the degree of correlation of the data. P(se)[n] is the prediction of so[n] computed from the values of se[n], so d[n] stores only the prediction error.

  3. Update: c[n] in Figure 2 is the approximation signal produced by the decomposition. One of its important properties is that its average value should equal the average value of the original signal s[n]. So we use the detail subset d[n] to update se[n], giving c[n]:

    c[n] = se[n] + U(d)[n]    (4)

    where U is the update operator. The wavelet decomposition can then be written in polyphase form as

    [ E(z) ]            [ se(z)      ]
    [ F(z) ]  =  M(z) · [ z⁻¹ so(z)  ]    (5)

    where M(z) is the polyphase matrix built from the predict and update steps.

If there are 2^n data elements, the first step of the forward transform will produce 2^(n-1) averages and 2^(n-1) differences (between the prediction and the actual odd element values). These differences are referred to as wavelet coefficients.
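As an illustration of the split/predict/update recipe, the sketch below implements one forward step of the Haar wavelet via lifting: each odd sample is predicted by its even neighbour (predict), and the even samples are then updated with half the detail so that they carry the pairwise averages (update). This is a minimal sketch for a 1-D signal of even length, not the implementation used in [3].

```python
import numpy as np

def haar_lifting_step(s):
    """One forward lifting step of the Haar wavelet.
    Returns (c, d): averages (approximation) and differences (detail)."""
    s = np.asarray(s, dtype=float)
    se, so = s[0::2], s[1::2]        # Split: even and odd samples
    d = so - se                      # Predict: odd predicted by its even neighbour
    c = se + d / 2                   # Update: evens become pairwise averages
    return c, d

def haar_lifting(s, levels):
    """Repeat the step on the approximation for a multi-level transform.
    Details are returned ordered from coarse to fine (increasing frequency)."""
    details = []
    c = np.asarray(s, dtype=float)
    for _ in range(levels):
        c, d = haar_lifting_step(c)
        details.append(d)
    return c, details[::-1]

if __name__ == "__main__":
    signal = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]   # 2^3 samples
    c, details = haar_lifting(signal, levels=3)
    print(c)        # [9.0]: the overall data average
    print(details)  # 1 + 2 + 4 differences, coarse to fine
```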

The split phase that starts each forward transform step moves the odd elements to the second half of the array, leaving the even elements in the lower half. At the end of the transform step the odd elements are replaced by the differences and the even elements by the averages. The even elements become the input for the next step, which again starts with the split phase. When the transform is complete, the first element of the array contains the overall data average, and the differences (coefficients) are ordered by increasing frequency. In our approach the original masked image is resized to 256×256, as shown in Figure 3, and the coefficients of the 6th level are obtained. At the k-th level the coarse approximation component is reduced to (N/2)^k × (M/2)^k; after a few levels the image becomes too small to be useful.

Figure 3: Normalized image and its resized image

Haar Wavelets

Most previous implementations have made use of Gabor wavelets to extract the iris patterns. Since we are keen on keeping the total computation time as low as possible, we decided that building a neural network especially for this task would be too time consuming and that selecting another wavelet would be more appropriate. We obtain the 5-level wavelet tree showing all detail and approximation coefficients of one mapped image obtained from the mapping stage. When comparing the results using the Haar transform with the wavelet trees obtained using other wavelets, we found that the Haar wavelet gave slightly better results [1], [4]. Our mapped image is of size 100×402 pixels and can be decomposed using the Haar wavelet into a maximum of five levels. These levels are cD1h to cD5h (horizontal coefficients), cD1v to cD5v (vertical coefficients) and cD1d to cD5d (diagonal coefficients).

We must now pick out the coefficients that represent the core of the iris pattern and eliminate those that carry redundant information. Looking closely at the wavelet tree, it is clear that the patterns in cD1, cD2, cD3 and cD4 are almost the same, so only one of them needs to be kept to reduce redundancy. Since cD4h repeats the same patterns as the previous horizontal detail levels and is the smallest in size, we take it as a representative of all the information the four levels carry. The fifth level does not contain the same textures and is therefore selected as a whole. In a similar fashion, only the fourth and fifth vertical and diagonal coefficients are taken to express the characteristic patterns in the iris-mapped image. Thus we can represent each image applied to the Haar wavelet as the combination of six matrices:

  • cD4h and cD5h

  • cD4v and cD5v

  • cD4d and cD5d

All these matrices are combined to build one single vector characterizing the iris patterns, called the feature vector. Since all the mapped images have a fixed size of 100×402, every image yields a feature vector of the same length; in our case, 702 elements. This means that we have reduced the feature vector compared with Daugman, who uses a vector of 1024 elements. The difference can be explained by the fact that he always maps the whole iris even when part of it is occluded by the eyelashes, while we map only the lower part of the iris, obtaining a feature vector of almost half his size.
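The coefficient selection described above can be sketched with a standard wavelet library. The snippet below, a minimal illustration rather than the authors' code, decomposes a 100×402 mapped image with a 5-level Haar transform (assuming the PyWavelets package) and concatenates the level-4 and level-5 horizontal, vertical and diagonal details into one feature vector; with the library's default boundary handling this yields the 702 elements mentioned above, although the exact count depends on how borders are padded.

```python
import numpy as np
import pywt

def haar_feature_vector(mapped_iris):
    """5-level Haar decomposition; keep only the 4th- and 5th-level
    horizontal, vertical and diagonal details as the iris feature vector."""
    coeffs = pywt.wavedec2(mapped_iris, wavelet='haar', level=5)
    # wavedec2 returns [cA5, (cH5, cV5, cD5), (cH4, cV4, cD4), ..., (cH1, cV1, cD1)]
    level5 = coeffs[1]   # (horizontal, vertical, diagonal) details at level 5
    level4 = coeffs[2]   # (horizontal, vertical, diagonal) details at level 4
    parts = [band.ravel() for band in level4] + [band.ravel() for band in level5]
    return np.concatenate(parts)

if __name__ == "__main__":
    mapped = np.random.rand(100, 402)   # stand-in for a mapped (normalized) iris
    fv = haar_feature_vector(mapped)
    print(fv.shape)                     # (702,) with the default 'symmetric' padding
```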

Comparison of Results

Feature extraction method   | Overall accuracy
----------------------------|-----------------
Circular symmetric filter   | 99.85%
Lifting wavelet transform   | 98.78%
Haar wavelets               | 95%

Acknowledgments:

We are grateful to Dr. K. V. Kale (Head of the Department) and the Department of Computer Science and Information Technology for providing the infrastructure.

Conclusion

Every individual has unique physiological characteristics, and iris patterns can be used for reliable visual recognition. The circular symmetric filter, Haar wavelet and lifting wavelet transform feature extraction methods for the iris are studied in this paper, and the results reported for these methods are analyzed. This review provides a platform for the development of novel techniques in this area as future work.

References

  1. Dolly Chaudhari, Shamik Tiwari, Ajay Kumar Singh, "A Survey: Feature Extraction Methods for Iris Recognition", International Journal of Electronics Communication and Computer Technology (IJECCT), Volume 2, Issue 6, November 2012.

  2. Li Ma, Yunhong Wang, Tieniu Tan, "Iris Recognition Using Circular Symmetric Filters", IEEE, 2002.

  3. C. M. Patil, Sudarshan Patilkulkarni, "Iris Feature Extraction for Person Identification Using Lifting Wavelet Transform", International Journal of Computer Applications, 2010.

  4. Naveen Singh, Dilip Gandhi, Krishna Pal Singh, "Iris Recognition System Using a Canny Edge Detection and a Circular Hough Transform", International Journal of Advances in Engineering & Technology, May 2011.

  5. R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey, S. McBride, "A System for Automated Iris Recognition", Proceedings of the IEEE Workshop on Applications of Computer Vision, Sarasota, FL, pp. 121-128, 1994.

  6. W. Kong, D. Zhang, "Accurate Iris Segmentation Based on Novel Reflection and Eyelash Detection Model", Proceedings of the 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, 2001.

  7. C. Tisse, L. Martin, L. Torres, M. Robert, "Person Identification Technique Using Human Iris Recognition", International Conference on Vision Interface, Canada, 2002.
