Image Fusion Based on DTCWT and PCA

DOI : 10.17577/IJERTV3IS071067


T. Jayasindhuri1

PG Scholar, Department of E.C.E., SKIT College

Srikalahasti, India

Sri P. Srinivasulu2

Assistant Professor, Department of E.C.E., SKIT College

Srikalahasti, India

Abstract— The main objective of this paper is to improve image quality by using image fusion techniques. Image fusion is the process of combining multiple images into a single composite image: a single image with a better description of the scene is generated from the collection of input images. The output image should therefore be more useful for human visual perception or for machine perception. The main problem of image fusion is determining an efficient procedure for combining the multiple images. Fusion techniques are very useful in the medical field, for example in diagnosing and treating cancer. This paper fuses input images using the Dual Tree Complex Wavelet Transform (DT-CWT) and applies Principal Component Analysis (PCA) to the fused image so that better image quality is obtained, which is then estimated using various image quality metrics.

Keywords— Image Fusion, Dual Tree Complex Wavelet Transform (DT-CWT), Principal Component Analysis (PCA), Image Quality Metrics

  1. INTRODUCTION

    Image fusion is the integration of multiple images, captured by sensors of different modalities, into a single image that is more suitable for computer processing or visual perception. Image quality metrics are used to analyze the quality of the fused image. Image fusion [2] finds application in various fields such as the military, medicine, and surveillance. Satellite and remote sensing applications require images with both high spatial and high spectral resolution, which cannot be obtained directly from a single sensor; image fusion offers a solution to this problem. Image fusion methods can be broadly categorized into spatial domain and spectral domain methods. Spatial domain fusion methods produce spatial distortions in the fused image, whereas these distortion problems are well handled by spectral domain methods.

    The Dual Tree Complex Wavelet Transform (DT-CWT) [1] is a wavelet-transform-based image fusion method and belongs to the spectral domain. In this approach, fusion [2] is performed using masks to extract information from the decomposed structure. The complex transform of a signal is computed using two separate DWT decompositions [1], i.e., tree a and tree b, so that both real and imaginary coefficients are available. The fused pyramid is formed from the DT-CWT [8, 9, 10, 11, 12] coefficients generated from the decomposed pyramids of the source images. The reconstruction process applies the inverse DT-CWT to obtain the fused image [5].
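    As an illustration of the decomposition and coefficient-selection fusion described above, the following minimal Python sketch uses the open-source dtcwt package (an implementation choice assumed here; the paper does not name a toolbox). The lowpass bands are averaged and, for each complex highpass subband, the coefficient with the larger magnitude is retained before the inverse transform reconstructs the fused image.

```python
import numpy as np
import dtcwt  # open-source DT-CWT implementation (assumed; not specified by the paper)


def dtcwt_fuse(img_a, img_b, nlevels=4):
    """Fuse two registered, same-sized grayscale images with the DT-CWT.

    Lowpass (LL) bands are averaged; for each highpass subband the complex
    coefficient with the larger magnitude is kept (maximum selection rule).
    """
    transform = dtcwt.Transform2d()
    pyr_a = transform.forward(img_a.astype(float), nlevels=nlevels)
    pyr_b = transform.forward(img_b.astype(float), nlevels=nlevels)

    # Average the coarse lowpass approximations.
    fused_lowpass = 0.5 * (pyr_a.lowpass + pyr_b.lowpass)

    # Per-subband, per-location maximum-magnitude selection on the complex coefficients.
    fused_highpasses = tuple(
        np.where(np.abs(ha) >= np.abs(hb), ha, hb)
        for ha, hb in zip(pyr_a.highpasses, pyr_b.highpasses)
    )

    fused_pyramid = dtcwt.Pyramid(fused_lowpass, fused_highpasses)
    return transform.inverse(fused_pyramid)
```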

    Fig. 1. DT-CWT Structure

    Fig. 2. DT-CWT based fusion

    The fused image is analyzed using various image quality metrics such as PSNR, MSE, AD, SC, NCC, NAE, and LMSE. Principal Component Analysis (PCA) [4] is an image subspace technique used to reduce dimensionality. It is an eigenvector-based multivariate analysis in which the number of variables in a data set is reduced without loss of information. PCA operates by transforming a set of correlated variables into a set of uncorrelated variables called the principal components. PCA is a spatial method because it deals directly with pixels, and here it is used together with the spectral domain method DT-CWT. Column vectors are extracted from the respective input matrices and their covariance matrix is calculated; the diagonal elements of the covariance matrix contain the variance of each column vector. The eigenvectors and eigenvalues of the covariance matrix are computed, and the eigenvector corresponding to the larger eigenvalue is normalized by dividing each of its elements by the mean of that eigenvector. The normalized eigenvector values are the weight values, which are multiplied with each pixel of the DT-CWT fused input image. The final image obtained after applying PCA is thus further filtered, i.e., a better quality image is generated, and its quality is analyzed using the image quality metrics.
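    A minimal NumPy sketch of this PCA weighting step follows, under the assumption that the two inputs are registered images of the same size (for instance two source images, or the DT-CWT fused image paired with a source image). The paper normalizes the principal eigenvector by its mean; dividing by its sum instead, as below, differs only by a constant factor and makes the two weights add up to one.

```python
import numpy as np


def pca_weights(img_a, img_b):
    """PCA fusion weights from two registered images of equal size."""
    # Flatten each image into a column vector and form a 2 x N data matrix.
    data = np.stack([img_a.ravel(), img_b.ravel()]).astype(float)

    # 2 x 2 covariance matrix; its diagonal holds the variance of each column vector.
    cov = np.cov(data)

    # Eigenvector belonging to the larger eigenvalue is the principal component.
    eig_vals, eig_vecs = np.linalg.eigh(cov)
    principal = eig_vecs[:, np.argmax(eig_vals)]

    # Normalize (the paper divides by the mean of the eigenvector; the sum is
    # used here so the weights sum to one, differing only by a constant factor).
    return principal / principal.sum()


def pca_fuse(img_a, img_b):
    """Weighted pixel-wise combination of the two images using the PCA weights."""
    w = pca_weights(img_a, img_b)
    return w[0] * img_a.astype(float) + w[1] * img_b.astype(float)
```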

  2. QUANTITATIVE IMAGE QUALITY METRICS

    Quality is a characteristic that measures perceived image degradation, i.e., degradation in comparison with an ideal or perfect image. Evaluation forms an essential part of the development of image fusion techniques. It involves full-reference methods, in which quality is measured by comparison with an ideal image, and no-reference methods, which use no reference image. Here we employ full-reference methods; the metrics used are shown in Table 1.

    In the following equations, A is the perfect (reference) image, B is the resultant (fused) image, i and j are the pixel row and column indices, and the images are of size M × N. A NumPy sketch of these metrics is given after the list.

    1. Mean Square Error (MSE)

       $MSE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(A_{ij}-B_{ij}\right)^{2}$ (1)

    2. Peak Signal to Noise Ratio (PSNR)

       $PSNR = 10\log_{10}\!\left(\frac{L^{2}}{MSE}\right)$, where L is the maximum possible pixel value (2)

    3. Average Difference (AD)

       $AD = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(A_{ij}-B_{ij}\right)$ (3)

    4. Structural Content (SC)

       $SC = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N}A_{ij}^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{N}B_{ij}^{2}}$ (4)

    5. Normalized Cross Correlation (NCC)

       $NCC = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N}A_{ij}B_{ij}}{\sum_{i=1}^{M}\sum_{j=1}^{N}A_{ij}^{2}}$ (5)

    6. Maximum Difference (MD)

       $MD = \max_{i,j}\left(A_{ij}-B_{ij}\right), \quad i=1,2,\dots,M,\; j=1,2,\dots,N$ (6)

    7. Normalized Absolute Error (NAE)

       $NAE = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left|A_{ij}-B_{ij}\right|}{\sum_{i=1}^{M}\sum_{j=1}^{N}\left|A_{ij}\right|}$ (7)

    8. Laplacian Mean Squared Error (LMSE)

       $LMSE = \frac{\sum\sum\left[L(A_{ij})-L(B_{ij})\right]^{2}}{\sum\sum\left[L(A_{ij})\right]^{2}}$, where $L(A_{ij}) = A_{i+1,j}+A_{i-1,j}+A_{i,j+1}+A_{i,j-1}-4A_{ij}$ (8)
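    The following NumPy sketch implements the metrics above for two same-sized grayscale arrays, assuming 8-bit data (peak value 255) for the PSNR; if the images are normalised to [0, 1], the peak value should be changed accordingly.

```python
import numpy as np


def quality_metrics(A, B, peak=255.0):
    """Full-reference metrics (1)-(8): A is the reference image, B the fused image."""
    A = A.astype(float)
    B = B.astype(float)
    diff = A - B

    mse = np.mean(diff ** 2)                                      # (1)
    psnr = 10 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf  # (2)
    ad = np.mean(diff)                                            # (3)
    sc = np.sum(A ** 2) / np.sum(B ** 2)                          # (4)
    ncc = np.sum(A * B) / np.sum(A ** 2)                          # (5)
    md = np.max(diff)                                             # (6)
    nae = np.sum(np.abs(diff)) / np.sum(np.abs(A))                # (7)

    # 4-neighbour Laplacian for the LMSE (8); image borders are excluded.
    def laplacian(X):
        return (X[:-2, 1:-1] + X[2:, 1:-1] + X[1:-1, :-2] + X[1:-1, 2:]
                - 4.0 * X[1:-1, 1:-1])

    la, lb = laplacian(A), laplacian(B)
    lmse = np.sum((la - lb) ** 2) / np.sum(la ** 2)               # (8)

    return {"MSE": mse, "PSNR": psnr, "AD": ad, "SC": sc,
            "NCC": ncc, "MD": md, "NAE": nae, "LMSE": lmse}
```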

  3. PROCEDURE

      1. Decompose each input image using the DT-CWT to obtain its lowpass (LL) and highpass subbands; repeat this for all input images.

      2. A sequence of resolution pyramids is created.

      3. Apply the masks to the corresponding bands and, from the filtered bands, select at each spatial location the coefficient with the larger value (maximum selection rule).

      4. Apply PCA to the fused image for further filtering and better image quality.

      5. Analyze the fused and final images using the different image quality metrics.

  4. EXPERIMENTAL RESULTS

      1. EXAMPLE A:

         Fig. 3. (a & b) Satellite map images of the same scene

         Fig. 4. Fused image

      2. EXAMPLE B:

         Fig. 5. (a & b) CT scan and MRI scan

         Fig. 6. Fused image

      3. EXAMPLE C:

         Fig. 7. (a & b) Images from a visible camera and an infrared camera

         Fig. 8. Fused image

         TABLE 1. IMAGE QUALITY METRICS FOR EXAMPLE A

         Image Quality Metric               | Output of DTCWT | Output of DTCWT and PCA
         Mean Square Error (MSE)            | 0.0000002       | 0.0000002
         Peak Signal to Noise Ratio (PSNR)  | 61.209269       | 62.7144192
         Average Difference (AD)            | 0.0100292       | 0.1506952
         Structural Content (SC)            | 1.0459692       | 2.0919382
         Normalized Cross Correlation (NCC) | 0.9755082       | 0.6897892
         Maximum Difference (MD)            | 0.0247072       | 0.0174712
         Laplacian Mean Square Error (LMSE) | 0.0702012       | 0.1171752
         Normalized Absolute Error (NAE)    | 0.0622362       | 0.3082562

         TABLE 2. PERFORMANCE EVALUATION: bar chart comparing the PSNR (scale 0 to 70) of DTCWT and DTCWT with PCA for Example 1 through Example 4.

  5. CONCLUSION

The fusion of images taken from sensors of different modalities has been performed, and the following conclusions are drawn:

      1. The quality of the fused image is improved after applying Principal Component Analysis.

      2. Better quality and information can be achieved.

REFERENCES

  1. N. G. Kingsbury, "The dual-tree complex wavelet transform: a new technique for shift invariance and directional filters," Proc. 8th IEEE DSP Workshop, Bryce Canyon, UT, USA, 1998, paper no. 86.

  2. Fuse Tool – An Image Fusion Toolbox for Matlab 5.x, http://www.metapix.de/toolbox.htm

  3. The Online Resource for Research in Image Fusion, www.imagefusion.org

  4. Lindsay I. Smith, "A Tutorial on Principal Component Analysis," http://www.cs.otago.ac.nz/cosc453/studnent_tutorials/principal_components.pdf

  5. Zhang Zhong, "Investigations on Image Fusion," PhD Thesis, Lehigh University, USA, May 1999.

  6. Shivsubramani Krishnamoorthy and K. P. Soman, "Implementation and Comparative Study of Image Fusion Algorithms," International Journal of Computer Applications, Vol. 19, No. 2, Nov. 2010.

  7. Mohd. Shahid and Sumana Gupta, "Novel Masks for Multimodality Image Fusion using DT-CWT," 9th International Conference on Information Fusion, 2006.

  8. C. Sidney Burrus, Ramesh A. Gopinath, and Haitao Guo, Introduction to Wavelets and Wavelet Transforms: A Primer, Prentice Hall, 1998.

  9. M. H. Mitchell, Image Fusion: Theories and Applications.

  10. Deepali A. Godse and Dattatraya S. Bormane, "Wavelet based image fusion using pixel based maximum selection rule," International Journal of Engineering Science and Technology (IJEST), Vol. 3, No. 7, July 2011, ISSN: 0975-5462.

  11. Susmitha Vekkot and Pancham Shukla, "A Novel Architecture for Wavelet based Image Fusion," World Academy of Science, Engineering and Technology, 57, 2009.

  12. Shih-Gu Huang, "Wavelet for Image Fusion."
