AI Based Biometric Image Fusion

DOI : 10.17577/IJERTV2IS4101


Nilima A. Deshmukh, Gaurao Chaudhari, Department of Electronics & Telecommunication, Maharashtra Institute of Technology, Pune, India

ABSTRACT

The project explores the field of image fusion, which comes under the banner of data fusion. Image fusion is the integration of the information contained in different images, each carrying some inherent information of its own, into a more detailed representation known as the fused image, which conveys the total information of the original images. Fusion can be carried out at three levels: pixel level, feature level, and decision level. In the work presented here, pixel level fusion has been done by decomposing each image into its wavelet coefficients, combining the corresponding coefficients using fuzzy logic, and reconstructing the final image using the inverse discrete wavelet transform. Feature level fusion has been carried out mainly for biometric systems, using palm prints and fingerprints as the source images, based on fingerprint features such as minutiae, the Euclidean distances and angles between them, and their standard deviation. Fusion at this level has been achieved by first enhancing the images using Gabor filters and then combining the aforementioned attributes, followed by the computation of the standard deviation, which is fed as a feature vector to a neural network classifier for identification in the biometric system.

Keywords: Pixel Level Fusion, Feature Level Fusion, Palm Print, Fingerprint, Gabor Filter.

  1. INTRODUCTION

Image fusion comes under the domain of data fusion. The concept of data fusion was introduced for military applications in the 1970s [1]: to obtain the best operational results, new weapon systems usually employ many sensors. The idea of data fusion is now widely used in image processing, and there are many fusion approaches for multi-source images. Image fusion methods are divided into three levels: pixel, feature, and decision level. Fusion provides a mechanism to combine multiple images into a single representation to aid human visual perception and image processing tasks. Such algorithms aim to create a fused image containing the important information from each source image without introducing inconsistencies.

The proposed framework offers a plug-and-play environment for the construction of n-dimensional multi-scale image fusion methods. It also deals with building a Generalized Image Fusion Toolkit (GIFT) in LabVIEW, which will cater to the needs of various fields such as defense systems, remote sensing, robotics, medical imaging, and biometrics.

  2. TERMINOLOGIES ASSOCIATED WITH THIS MODEL

    1. Artificial Neural Network:

A neural network is a nonlinear mapping system whose structure is loosely based on principles of the real brain. The unit is a simplified model of a real neuron. Its input is a vector x, whose information is manipulated through a weight vector w. A node with a constant input of 1 carries the so-called bias of the neuron, which gives the neuron the freedom to shift its activation function f(·). There are many different types of neural networks; in this work only so-called feed-forward neural networks are used. The neurons are structured in layers, and connections are drawn only from one layer to the next. A typical structure of this type of neural network can be seen in the figure below.

Fig.1 Feed-Forward Neural Network
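To make the unit concrete, the following is a minimal sketch, in Python with NumPy, of the forward pass just described, where each neuron computes f(w·x + bias); the sigmoid activation and the 4-3-2 layer sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    """Activation f(.) applied to the weighted sum of the inputs."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Propagate input x through a feed-forward network.

    `layers` is a list of (W, b) pairs; b is the bias that lets each
    neuron shift its activation function f(.).
    """
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)  # weighted sum w.x plus bias, then f(.)
    return a

# Illustrative 4-3-2 network with random weights (sizes are assumptions).
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((3, 4)), rng.standard_normal(3)),
          (rng.standard_normal((2, 3)), rng.standard_normal(2))]
print(forward(np.array([0.5, -1.0, 0.2, 0.8]), layers))
```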

    2. Significance of Neural Network and Back Propagation:

Extracting minutiae features from the skeleton of the fingerprint and palm print requires a method that is able to distinguish and categorize the different shapes and types of minutiae. This is a classification problem and can be solved by constructing and training a neural network which works as a classifier. Training of the neural network is conducted with the back-propagation algorithm, one of many learning algorithms that can be applied to neural network training, and the one used in this work. It belongs to the category of so-called learning with a teacher. For every input vector x presented to the neural network there is a predefined desired response in a vector t (the teacher). The desired output of the neural network is compared with the real output by computing an error e between vector t and the network output vector y. The correction of the weights in the neural network is done by propagating the error e backward from the output layer towards the input layer, hence the name of the algorithm.
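A minimal sketch of one such teacher-driven training step follows, assuming a single hidden layer, sigmoid activations, and a squared-error measure; all of these are illustrative choices that the paper does not fix.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, t, W1, b1, W2, b2, lr=0.5):
    """One back-propagation step: compare output y with teacher t,
    propagate the error e backward, and correct the weights."""
    # Forward pass.
    h = sigmoid(W1 @ x + b1)        # hidden activations
    y = sigmoid(W2 @ h + b2)        # network output
    # Error between teacher t and real output y.
    e = t - y
    delta2 = e * y * (1 - y)        # output-layer delta (sigmoid derivative)
    # Propagate the error backward to the hidden layer.
    delta1 = (W2.T @ delta2) * h * (1 - h)
    # Correct the weights, moving from the output layer toward the input.
    W2 += lr * np.outer(delta2, h); b2 += lr * delta2
    W1 += lr * np.outer(delta1, x); b1 += lr * delta1
    return float(np.sum(e ** 2))

# Illustrative usage: fit a single input/teacher pair.
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((3, 2)), np.zeros(3)
W2, b2 = rng.standard_normal((1, 3)), np.zeros(1)
for _ in range(200):
    err = train_step(np.array([0.3, 0.7]), np.array([1.0]), W1, b1, W2, b2)
print(err)  # squared error shrinks toward 0
```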


A 2D-DWT is applied to decompose the images to a lower resolution before feature extraction. Image decomposition using the 2D-DWT is able to conserve the signal energy and redistribute it into a more compact form. Subsequently, we adopt a Gabor filter as the feature extractor for both biometrics, as they share common characteristics such as ridges. Finally, the proposed feature level fusion method is utilized to combine the extracted fingerprint and palm print images. The model uses Haar wavelets because of their simplicity.
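The decomposition step might be sketched with PyWavelets as follows; the library choice and the random stand-in image are assumptions, while the Haar wavelet is the paper's own choice.

```python
import numpy as np
import pywt

# Stand-in input; in the paper this would be a palmprint/fingerprint ROI.
image = np.random.rand(150, 150)

# One-level 2D-DWT with the Haar wavelet: the LL band is a compact,
# lower-resolution representation that keeps most of the signal energy.
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
print(cA.shape)  # roughly half the size in each dimension: (75, 75)
```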

    3. Pixel Level Image Fusion:

In the pixel-level image fusion method, the acquired original image data are fused directly, pixel by pixel. Pixel-level fusion can provide detailed information that cannot be provided by the other levels of fusion [3], and the results are still images, which are not only more intuitive for human beings but also more suitable for further processing. Vision enhancement is the core task of pixel level image fusion technology. Multi-source image fusion at this level does not consider the physical characteristics of the original images; the purpose is to fuse multi-source images of the same scene into one image. The commonly used methods include the weighted average method, selection, and a combined selection and weighted average method, as sketched below.
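A minimal sketch of the weighted average method, assuming two registered grayscale images of equal size and an illustrative weight value:

```python
import numpy as np

def weighted_average_fusion(img1, img2, w=0.5):
    """Pixel-level fusion: each fused pixel is a weighted average of the
    corresponding pixels in the two registered source images."""
    return w * img1.astype(float) + (1.0 - w) * img2.astype(float)
```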

    4. Feature Level Image Fusion:

Feature level fusion is used to integrate the attributes or features of different images, making use of the additional information to gain a more detailed representation of the query images. Images of the same object captured by different sensors can vary depending on their resolution, calibration, accuracy, and so on. Each will in turn contain information about objects in varying amounts, some more and some less. By extracting the features of each of these images and using them together (fusion), one can not only reduce the amount of storage space required but also obtain the essential and important characteristics of the images.

This is particularly important as far as biometric systems for identification and authorization are concerned. As an added application of image fusion, we use fusion at the pixel and feature levels to develop a working multimodal biometric system.

F. Block Diagram

Fig 2. Block Diagram (blocks: Image Acquisition, Pre-processing, Pixel Level Image Fusion, Feature Level Image Fusion, ANN, Network Testing, Decision Level Fusion)

E. Wavelets:

In mathematics, the wavelet transform refers to the representation of a signal in terms of scaled and translated copies (known as daughter wavelets) of a finite-length or fast-decaying oscillating waveform (known as the mother wavelet). Wavelet transforms are broadly classified into the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT). The principal difference between the two is that the discrete wavelet transform uses a specific subset of all scale and translation values, whereas the continuous transform operates over every possible scale and translation.

  3. PIXEL LEVEL IMAGE FUSION

Different sensors have different frequency sensitivities depending upon the kind of photosensitive material used in them. The wavelet transform provides a multi-resolution representation, with good frequency localization, for the registered images from each sensor. Given two input images to be fused, in this method the absolute values of the detail coefficients in the sub-band decompositions of the two input images are compared and the higher value is picked. As edges represent very significant information in images, the fused image resulting from this modulus-maxima selection can give good fusion results. The approximation coefficients used during reconstruction can be selected from one of the sub-bands, which represents the image better at low frequency [3]. The wavelet transform of the images is calculated at the first level, different wavelets are used for decomposition and reconstruction, and the performances of these wavelets on the same image are compared.

Algorithm:

1. In this fusion technique, the processed images are fused using the Discrete Wavelet Transform (DWT).

2. We decompose the images into several sub-bands (approximation, horizontal, vertical, and diagonal coefficients) using the DWT.

3. For all components we estimate the contribution of every coefficient to the fused image. For a coefficient x in a given sub-band A of the first image, we define a fuzzy membership function µ0(x) of the first relation, as given in [1].

4. A fuzzy membership function µ1(x) of the second relation is likewise defined in [1], where p(x) is the probability of the coefficient x in sub-band A.

5. We then calculate the importance of the coefficient with fuzzy reasoning as

µ01(x) = min(µ0(x), µ1(x))

6. For the coefficient y at the same spatial position as x, in sub-band A of the second image, we obtain the fuzzy membership functions of the two relations, µ0(y) and µ1(y), in the same way.

7. We fuse the three detail sub-band coefficients using each coefficient's µ01(x) or µ01(y); the fused coefficient z is calculated from these membership values (the defining equation is given in [1]).

8. After every coefficient has been processed, the inverse wavelet transform is applied to reconstruct the fused image [1]. A sketch follows the figure below.

Fig 3. Fuzzy Logic using DWT
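Because the fuzzy membership equations of [1] are not reproduced above, the following PyWavelets sketch substitutes the modulus-maxima selection rule described at the start of this section for the detail bands and averages the approximation band (an assumption); it illustrates DWT-domain fusion rather than the exact method of [1].

```python
import numpy as np
import pywt

def dwt_max_fusion(img1, img2, wavelet='db1'):
    """Fuse two registered images: pick the detail coefficient with the
    larger absolute value (edges carry the significant information) and
    average the approximation coefficients."""
    cA1, details1 = pywt.dwt2(img1, wavelet)
    cA2, details2 = pywt.dwt2(img2, wavelet)
    cA = 0.5 * (cA1 + cA2)                          # low-frequency content
    fused_details = tuple(
        np.where(np.abs(d1) >= np.abs(d2), d1, d2)  # modulus-maxima pick
        for d1, d2 in zip(details1, details2))
    return pywt.idwt2((cA, fused_details), wavelet)  # reconstruct fused image

# Usage sketch with random stand-ins for two registered source images.
a, b = np.random.rand(128, 128), np.random.rand(128, 128)
fused = dwt_max_fusion(a, b)
```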

  4. FEATURE LEVEL IMAGE FUSION

    Frequency Domain:

    1. Discrete Cosine Transform:

The palm print database used in this work is acquired from the IIT-Delhi Palmprint Database. A total of 25 different palms were selected to test the algorithm.

Algorithm:

1. The original size of the images is 150 by 150 pixels. Each image is resized to 64 x 64 to reduce the number of computations, and is henceforth called the resized ROI [9].

2. The algorithm first divides the image into four non-overlapping parts around the center point.

Fig 4. ROI in four parts

3. The 2-D transform is applied on each sub-image separately. The DCT coefficients are then grouped into nine frequency bands (blocks).

Fig 5. Arrangement of sub-image DCT coefficients

4. For each numbered block the standard deviation is calculated. Such features are calculated from the four sub-images and hence form a feature vector of length 36 (4 x 9 = 36), which is used in the enrollment as well as the matching phase; see the sketch after this list.

5. The feature vector of several persons is given to the back-propagation network for training.
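A sketch of steps 1 to 5 under stated assumptions: SciPy's orthonormal DCT is used, and a uniform 3x3 grid stands in for the nine frequency bands, since the exact band boundaries of [8] are not reproduced here.

```python
import numpy as np
from scipy.fft import dct

def dct2(block):
    """2-D DCT built from two 1-D orthonormal DCTs."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def dct_feature_vector(roi64):
    """Split a 64x64 resized ROI into four 32x32 sub-images around the
    center, DCT each one, group coefficients into nine bands (here a 3x3
    grid, an assumed layout), and take each band's standard deviation,
    giving 4 * 9 = 36 features."""
    feats = []
    for i in (0, 32):
        for j in (0, 32):
            coeffs = dct2(roi64[i:i + 32, j:j + 32])
            for bi in range(3):
                for bj in range(3):
                    band = coeffs[bi * 11:(bi + 1) * 11, bj * 11:(bj + 1) * 11]
                    feats.append(band.std())
    return np.array(feats)  # length-36 vector for enrollment and matching

roi = np.random.rand(64, 64)            # stand-in for the resized ROI
print(dct_feature_vector(roi).shape)    # (36,)
```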

2. Discrete Wavelet Transform and Gabor Filter:

      Algorithm:

1. A Gaussian low-pass filter is used to smooth the palmprint images. The ROI of each image is 150 x 150 pixels.

2. The WT is used to decompose the enhanced palmprint images into a lower-resolution representation, shown in Figure 6; with the lower resolution of each component, computational complexity is reduced.

        Fig 6. 1-level decomposition of palm-print image using DWT

3. An image is decomposed into four frequency sub-bands at each resolution level n by applying the 2D DWT. The resulting four sub-bands are an approximation sub-band (LLn) and three detail sub-bands (HLn, LHn, and HHn).

4. A bank of 2D Gabor filters is used to filter the palmprint in different directions to highlight these characteristics and remove noise. A 2D Gabor filter has the following form in the image domain (x, y) [10] (sketched after this list):

G(x, y) = exp( -(x'² + y'²) / (2σ²) ) · cos(2πf x')

where x' = x cos θ + y sin θ and y' = -x sin θ + y cos θ, f is the frequency of the sinusoidal plane wave along the direction θ from the x-axis, and σ² is the standard deviation of the Gaussian envelope. f = 10, σ² = 16, and θ = π/8 were found to be the best combination [10].

5. The filtered images are normalized to the same domain using the following method:

I'(x, y) = ( I(x, y) - µ ) / σ

where I(x, y) denotes the pixel intensity at coordinate (x, y), µ denotes the intensity mean, and σ denotes the intensity standard deviation.

6. The normalized LL sub-band images are combined and divided into non-overlapping blocks of size m×n pixels each. The normalized images are fused in the interleaved arrangement shown in Fig. 7.

Fig.7. Arrangement of normalized images. Fi: normalized LL sub-band of fingerprint image at index i; Pi: normalized LL sub-band of palmprint image at index i.

7. The resulting magnitude is then converted to a scalar number by calculating its standard deviation value. The size of each block is carefully chosen so that no repeated feature is extracted [10].

8. A sub-Gabor feature vector is extracted from each image by calculating the standard deviation of each dotted-line block, as shown in Figure 8 (see the sketch after this list).

Fig.8. Sub-Gabor Features

9. The feature vector is given to the back-propagation classifier network to perform matching.
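Steps 4 to 8 can be sketched end to end as follows, under stated assumptions: the even-symmetric Gabor kernel is built from the equation in step 4 with the parameters reported in [10], while the kernel size, the block size m×n, and the 2x2 interleaved F/P arrangement of Fig. 7 are illustrative choices where the extracted text does not pin them down.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(f=10.0, sigma2=16.0, theta=np.pi / 8, size=15):
    """Even-symmetric 2D Gabor kernel (step 4): a Gaussian envelope of
    variance sigma2 times a cosine plane wave of frequency f along theta
    (f in the units used in [10])."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)    # x' = x cos(t) + y sin(t)
    yp = -x * np.sin(theta) + y * np.cos(theta)   # y' = -x sin(t) + y cos(t)
    return np.exp(-(xp ** 2 + yp ** 2) / (2.0 * sigma2)) * np.cos(2.0 * np.pi * f * xp)

def normalize(img):
    """Step 5: map a filtered image to zero mean and unit standard deviation."""
    return (img - img.mean()) / img.std()

def sub_gabor_features(F, P, block=(8, 8)):
    """Steps 6-8: interleave the normalized LL sub-bands of the fingerprint
    (F) and palmprint (P) as in Fig. 7 (2x2 F/P layout assumed), then take
    the standard deviation of each non-overlapping m x n block as one entry
    of the sub-Gabor feature vector."""
    fused = np.vstack([np.hstack([F, P]),
                       np.hstack([P, F])])
    m, n = block
    return np.array([fused[i:i + m, j:j + n].std()
                     for i in range(0, fused.shape[0], m)
                     for j in range(0, fused.shape[1], n)])

# Usage sketch on random stand-ins for the normalized LL sub-bands.
finger_ll = normalize(convolve2d(np.random.rand(75, 75), gabor_kernel(), mode='same'))
palm_ll = normalize(convolve2d(np.random.rand(75, 75), gabor_kernel(), mode='same'))
features = sub_gabor_features(finger_ll, palm_ll)  # fed to the BP classifier (step 9)
```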

5. RESULTS

    1. PIXEL LEVEL FUSION

TABLE I. PIXEL LEVEL FUSION RESULTS

                   Entropy   Mean     Contrast   Computation Time
Image 1            7.03      79.57    0.9956     -
Image 2            7.10      80.34    0.9955     -
Fused (sym2)       7.03      84.01    0.9955     -
Fused (bior1.3)    7.36      108.90   0.9955     -
Fused (db1)        6.94      87.23    0.9955     -

Result of Image Fusion:

Fig 9. Visible Image. Fig 10. Infrared Image. Fig 11. Fused Image.

Conclusion:

• Using the db1 wavelet, the least computation time is required, but the fused image has lower entropy.

• Using the bior1.3 wavelet for decomposition and bior1.5 for reconstruction, the computation time is higher, but the mean and entropy of the fused image are much better than those of the original images.

• Using the sym2 wavelet, the highest computation time is required, and the mean and entropy of the fused images are not much better than those of the original images.

    2. FEATURE LEVEL FUSION:

Frequency Domain:

  1. Discrete Cosine Transform:

Initially the network was tested on the samples on which it was trained. Later, salt-and-pepper noise was added to the test images, as shown in Fig 12. Classification of a sample failed once the noise level reached 0.08. The same testing process was applied to the palm print images. The classification rate obtained by this approach is 88%.

Fig 12. Input image with salt-and-pepper noise

  2. Discrete Wavelet Transform and Gabor Filter:

The images at the output of the Gabor filter are further normalized, since the intensities of the fingerprint and palm print images would not be the same; the preprocessing and fusion of the two modalities are shown in Fig 13.

Fig .13 Preprocessing and Fusion of Palm Print and Finger Print

REFERENCES

  1. Zhu Mengyu, Yang Yuliang, A New Image Fusion Algorithm Based on Fuzzy Logic, International Conference on Intelligent Computation Technology and Automation.

  2. Youshen Xia, Mohamed S. Kamel, Novel Cooperative Neural Fusion Algorithms for Image Restoration and Image Fusion, IEEE Transactions on Image Processing, Vol. 16, No. 2, February 2007.

3. Chaveli Ramesh, T. Ranjith, Fusion Performance Measures and a Lifting Wavelet Transform Based Algorithm for Image Fusion, ISIF 2002.

4. Josef Ström Bartunek, Minutiae Extraction from Fingerprint with Neural Network and Minutiae based Fingerprint Verification, Master's Thesis, Blekinge Tekniska Högskola.

5. Jayant V Kulkarni, Jayadevan R, Suresh N Mali, Hemant K Abhyankar, and Raghunath S Holambe, A New Approach for Fingerprint Classification based on Minutiae Distribution, International Journal of Computer Science, 1(4), 2006.

6. Anil Jain, Salil Prabhakar and Sharath Pankanti, Matching and Classification: A Case Study in Fingerprint Domain, PINSA, 67, A, No. 2, March 2001, pp. 223-241.

7. A. K. Jain, Fundamentals of Digital Image Processing, Englewood Cliffs, NJ: Prentice-Hall, 1989.

8. Manisha P. Dale, MES's College of Engineering, Pune, India, Texture Based Palmprint Identification Using DCT Features, 2009 Seventh International Conference on Advances in Pattern Recognition.

9. Ross, A., Jain, A.K. (2003). Information Fusion in Biometrics, Pattern Recognition Letters, 24(13), pp. 2115-2125.

10. Yong Jian Chin, Thian Song Ong, Integrating Palmprint and Fingerprint for Identity Verification, 2009 Third International Conference on Network and System Security.

  11. A.K. Qin, P.N. Suganthan, Personal Identification System based on Multiple Palmprint Features, ICARCV 2006.

12. Moussadek Laadjel, Ahmed Bouridane, Fatih Kurugollu, Palmprint Recognition using Fisher-Gabor Feature Extraction, ICASSP 2008.
