Pixel-Level Image Fusion Using Wavelet Transform

DOI : 10.17577/IJERTV1IS5285


Mrs. S. V. More¹, Prof. Dr. Mrs. S. D. Apte²

Rajarshri Shahu College of Engineering, E&TC Department, Tathawade, Pune, India

Abstract

Image fusion is the process of combining information from multiple images of the same scene. The result is a new image that is more suitable for human and machine perception, or for further image processing tasks such as segmentation, feature extraction and object recognition. In this paper we present a wavelet-based image fusion algorithm. The images to be fused are first decomposed into high frequency and low frequency bands. The low frequency components are then combined by a maximum-energy rule and the high frequency components by a variance rule. Finally, the fused image is constructed by the inverse wavelet transform. We run simulations on four groups of images and compare the results with the pixel averaging method and with the most common wavelet method based on the mean-max fusion rule.

  1. Introduction

Due to the limited depth of focus, not all objects in a sensor image are in focus, so multiple images of the same scene, each focused on different objects, are required. Each of these images carries some important information about the scene, but none of them is sufficient in terms of its information content. Viewing such a series of images to acquire the complete information is not an easy task for humans or for machine perception, so all these images should be fused into a single image in such a way that the fused image has better focus on all objects and carries the complete information [1]. Image fusion can be divided into three levels: pixel-level fusion, feature-level fusion and decision-level fusion. Almost all image fusion algorithms, from the simplest weighted averaging to more advanced multiscale methods, belong to pixel-level fusion [2].

In pixel-level image fusion, some general requirements are imposed on the fused results: 1) the fusion process should preserve, as far as possible, all salient information in the source images; 2) the fusion process should not introduce any artefacts; 3) the fusion process should be shift-invariant [3]. Pixel-level fusion has become the primary method in the field of image fusion since it preserves the original information of the source images as much as possible and its algorithms are computationally efficient and easy to implement; consequently, most image fusion applications employ pixel-level methods [4]. There are three commonly used classes of pixel-level image fusion methods: simple image fusion (such as linear weighted averaging, HPF (high-pass filtering), IHS (intensity-hue-saturation), PCA (principal component analysis), etc.), pyramid-based decomposition image fusion (such as Laplacian pyramid decomposition, ratio pyramid, etc.) and wavelet transform image fusion [5]. Recently, the wavelet transform has become an important aspect of image fusion research owing to its multi-scale and multi-resolution merits.

This paper is organized as follows: in section 2, the proposed wavelet-based image fusion technique is introduced. In section 3, experiments on multifocus images with the proposed method are performed and compared. Finally, the conclusion is given.

  2. Proposed wavelet based image fusion technique

2.1. General Procedure of Wavelet Based Image Fusion

The information flow diagram of the wavelet-based image fusion algorithm is shown in Figure 1. In the wavelet image fusion scheme, the source images I1(x, y) and I2(x, y) are decomposed into approximation and detail coefficients at the required level using the DWT. The approximation and detail coefficients of both images are combined using a fusion rule φ. The fused image If(x, y) is then obtained by taking the inverse wavelet transform (IDWT) as:

$I_f(x, y) = \mathrm{IDWT}\left[\,\phi\{\mathrm{DWT}(I_1(x, y)),\ \mathrm{DWT}(I_2(x, y))\}\,\right]$   (1)

The fusion rule φ varies from the simple rule that averages the approximation coefficients and picks, in each sub-band, the detail coefficient with the largest magnitude, to the more advanced rules described below [6].
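A minimal single-level sketch of equation (1) in Python, assuming the PyWavelets library and grey-scale source arrays of equal size; the hook names fuse_approx and fuse_detail are illustrative placeholders for the rules of sections 2.2 and 2.3, and the defaults implement the simple mean-max rule just described:

import numpy as np
import pywt

def dwt_fuse(i1, i2, wavelet="db1",
             fuse_approx=lambda a, b: (a + b) / 2,  # simple rule: average approximations
             fuse_detail=lambda a, b: np.where(np.abs(a) > np.abs(b), a, b)):  # max magnitude
    """Single-level wavelet fusion following equation (1):
    decompose both sources, combine coefficients with a rule, invert."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(i1, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(i2, wavelet)
    coeffs = (fuse_approx(cA1, cA2),
              (fuse_detail(cH1, cH2),
               fuse_detail(cV1, cV2),
               fuse_detail(cD1, cD2)))
    return pywt.idwt2(coeffs, wavelet)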


Figure 1. Information flow diagram of the image fusion scheme using wavelet transforms.

2.2. Selection Scheme for Low Frequency Bands

The low frequency band is the original image at a coarser resolution level, and can be considered a smoothed and subsampled version of the original image. Most of the information of the source images is therefore kept in the low frequency band. When the image has more obvious texture features in a certain frequency band or direction, the corresponding wavelet channel output has larger energy, and larger energy at the corresponding pixels indicates clearer texture features. Therefore, an energy-based scheme is adopted for the low frequency coefficients. The energy of an image is defined as follows:

$E = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} f(i, j)^2$   (2)

where f(i, j) represents the pixel grey value at point (i, j) and M×N is the size of the image.

The fusion scheme for the low frequency bands can then be illustrated as follows:

$f_L(i, j) = \begin{cases} A_L(i, j) & \text{if } E_A > E_B \\ B_L(i, j) & \text{else} \end{cases}$   (3)

where fL(i, j), AL(i, j) and BL(i, j) respectively represent the low frequency coefficient values of the fused image, image A and image B at point (i, j), and EA and EB respectively represent the energy of the low frequency coefficients of image A and image B at point (i, j).
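A sketch of the low-frequency rule of equations (2)-(3), applied to non-overlapping 4×4 sub-images as in step 1 of section 2.4; the block-wise interpretation and the name fuse_low are assumptions of this sketch:

import numpy as np

def fuse_low(ll_a, ll_b, block=4):
    """Per 4x4 block of the LL band, keep the block whose energy (eq. 2)
    is larger (eq. 3)."""
    out = ll_b.copy()
    for i in range(0, ll_a.shape[0], block):
        for j in range(0, ll_a.shape[1], block):
            a = ll_a[i:i + block, j:j + block]
            b = ll_b[i:i + block, j:j + block]
            # Equation (2): mean of squared coefficients over the block
            if np.mean(a ** 2) > np.mean(b ** 2):
                out[i:i + block, j:j + block] = a
    return out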

2.3. Selection Scheme for High Frequency Bands

The high frequency bands contain the detail coefficients of an image, which usually have large absolute values, correspond to sharp intensity changes and preserve the salient information in the image. Moreover, according to the characteristics of the human visual system (HVS), in high resolution regions visual interest concentrates on detecting changes in contrast between regions, i.e. on the edges that separate them. A good fusion method for the high frequency bands should therefore produce large coefficients on those edges. Based on this analysis, we propose a scheme that selects the high frequency coefficients by computing the variance in a neighbourhood. The variance and mean of an image I in a window around p = (i, j) are defined as follows:

$\sigma_I(p) = \frac{1}{ST}\sum_{s=-S/2}^{S/2}\sum_{t=-T/2}^{T/2}\left[f(i+s, j+t) - \mathrm{mean}_I(p)\right]^2$   (4)

$\mathrm{mean}_I(p) = \frac{1}{ST}\sum_{s=-S/2}^{S/2}\sum_{t=-T/2}^{T/2} f(i+s, j+t)$   (5)

where S×T is the neighbourhood size (4×4 in this paper), and meanI(p) and σI(p) respectively denote the mean and the variance of the coefficients centered at (i, j) in the S×T window. The fusion rule for the high frequency bands can then be illustrated as follows:

$f_H(i, j) = \begin{cases} A_H(i, j) & \text{if } \sigma_A > \sigma_B \\ B_H(i, j) & \text{else} \end{cases}$   (6)

where fH(i, j), AH(i, j) and BH(i, j) respectively represent the high frequency (HL, LH, HH) coefficient values of the fused image, image A and image B at point (i, j), and σA and σB respectively represent the variance of the high frequency coefficients of image A and image B at point (i, j).
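Analogously, a sketch of the high-frequency rule of equations (4)-(6) on 4×4 sub-images; np.var computes exactly the mean squared deviation from the window mean of equations (4)-(5), and the name fuse_high is an assumption:

import numpy as np

def fuse_high(h_a, h_b, block=4):
    """Per 4x4 block of a detail band (LH, HL or HH), keep the block whose
    variance (eqs. 4-5) is larger (eq. 6)."""
    out = h_b.copy()
    for i in range(0, h_a.shape[0], block):
        for j in range(0, h_a.shape[1], block):
            a = h_a[i:i + block, j:j + block]
            b = h_b[i:i + block, j:j + block]
            if np.var(a) > np.var(b):  # equations (4)-(5) via np.var
                out[i:i + block, j:j + block] = a
    return out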

2.4. Procedure of Proposed DWT Based Image Fusion

1. Apply the two-dimensional discrete wavelet transform (DWT) to each source image at decomposition level N, obtaining 3N+1 sub-images. Decompose the low frequency parts LLA(i, j) and LLB(i, j) of source images A and B into 4×4 sub-images, calculate their energy using equation (2), and obtain the low frequency part LLF(i, j) of the fused image using equation (3).

2. Decompose the high frequency parts LHA^K(i, j), HLA^K(i, j), HHA^K(i, j) and LHB^K(i, j), HLB^K(i, j), HHB^K(i, j) of source images A and B into 4×4 sub-images, calculate the variance of all 4×4 sub-images using equations (4) and (5), and obtain the high frequency parts LHF^K(i, j), HLF^K(i, j), HHF^K(i, j) of the fused image using equation (6). K denotes the decomposition level (K = 1, 2, 3, …).

3. Finally, using LLF(i, j), LHF^K(i, j), HLF^K(i, j) and HHF^K(i, j), obtain the fused image by the inverse discrete wavelet transform (IDWT).
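Putting the steps together, a sketch of the whole procedure using PyWavelets' multilevel transforms and the fuse_low/fuse_high sketches above (N = 5 levels, as in the experiment of Fig. 2(f)):

import pywt

def fuse_images(img_a, img_b, wavelet="db1", levels=5):
    """Steps 1-3: N-level DWT, energy rule for LL, variance rule for each
    detail band at every level K, then IDWT."""
    ca = pywt.wavedec2(img_a, wavelet, level=levels)  # [LL, details_N, ..., details_1]
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [fuse_low(ca[0], cb[0])]                  # step 1: low frequency part LL_F
    for det_a, det_b in zip(ca[1:], cb[1:]):          # step 2: every level K
        fused.append(tuple(fuse_high(a, b) for a, b in zip(det_a, det_b)))
    return pywt.waverec2(fused, wavelet)              # step 3: IDWT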

  3. RESULTS

The proposed method has been tested on several pairs of multifocus images. Four examples are given here to illustrate the performance of the fusion process. The proposed method is evaluated by comparison with the widely used pixel averaging method and with the wavelet-based method in which a weighted averaging rule is used for the low frequency coefficients and a maximum-selection rule for the high frequency coefficients. In all cases the grey values of the pixels are scaled between 0 and 255. The source images are assumed to be registered and no pre-processing is performed.

3.1. Objective Evaluation of an Image

In addition to visual analysis, we conducted quantitative analysis of the fused images. Three objective criteria are used to compare the fusion results: entropy, standard deviation and root mean square error.

3.1.1. Entropy: Image entropy is an important indicator of the richness of image information. Its value represents the average amount of information contained in the image, so the computed entropy can objectively evaluate changes in the amount of information. Following Shannon's information theory, the entropy of an image is defined as in Ref. [7]:

$H = -\sum_{i=1}^{M}\sum_{j=1}^{N} p_{ij} \log_2 p_{ij}$   (7)

$p_{ij} = \frac{f(i, j)}{\sum_{i=1}^{M}\sum_{j=1}^{N} f(i, j)}$   (8)

where H is the image entropy and pij is the normalized grey value of pixel (i, j).

3.1.2. Standard deviation (σ): The standard deviation reflects the spread of the image grey intensities about their average, and represents the contrast of an image. If the standard deviation is large, the grey-scale distribution is scattered, the image contrast is high and more information can be seen. It is defined as in Ref. [7]:

$\sigma = \left[\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(F(i, j) - \bar{f}\right)^2\right]^{1/2}$   (9)

where F(i, j) is the grey value of the fused image at point (i, j), f̄ is its mean grey value, and M×N is the size of the image.

3.1.3. Root mean square error (RMSE): The root mean square error indicates how much error the fused image conveys with respect to the reference image; hence, the lower the RMSE, the better the fused result. The RMSE is defined as in Ref. [7]:

$\mathrm{RMSE} = \left[\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(R(i, j) - F(i, j)\right)^2\right]^{1/2}$   (10)

where R(i, j) is the ideal reference image, F(i, j) is the fused image and M×N is the size of the image.
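Straightforward implementations of the three criteria; note that the entropy sketch uses the conventional grey-level histogram probabilities, one common reading of equations (7)-(8):

import numpy as np

def entropy(img, bins=256):
    """Equation (7): Shannon entropy; p taken here as the grey-level
    histogram probability (a common reading of eq. 8)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # avoid log2(0)
    return float(-np.sum(p * np.log2(p)))

def std_dev(img):
    """Equation (9): standard deviation about the mean grey value."""
    return float(np.std(img))

def rmse(ref, fused):
    """Equation (10): root mean square error against the reference image."""
    d = ref.astype(float) - fused.astype(float)
    return float(np.sqrt(np.mean(d ** 2)))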

3.2. Visual Analysis of an Image

The experiment is performed on the popular, widely used standard image Lena of size 256×256, shown in Fig. 2(a), which serves as the ideal reference image here. The two source images are then obtained with a Gaussian blurring method as in reference [9]: Fig. 2(b) is the image blurred on the lower horizontal half, and Fig. 2(c) is the image blurred on the upper half.
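A sketch of how such a blurred pair can be generated, assuming SciPy's gaussian_filter; the blur strength sigma is an arbitrary choice here, since the exact kernel of [9] is not specified in this text:

import numpy as np
from scipy.ndimage import gaussian_filter

def make_multifocus_pair(ref, sigma=2.0):
    """Blur the lower half of one copy and the upper half of the other,
    simulating two differently focused sources (cf. Fig. 2(b), 2(c))."""
    ref = ref.astype(float)
    blurred = gaussian_filter(ref, sigma=sigma)
    half = ref.shape[0] // 2
    src_lower = ref.copy()
    src_lower[half:, :] = blurred[half:, :]  # blurred on the lower horizontal half
    src_upper = ref.copy()
    src_upper[:half, :] = blurred[:half, :]  # blurred on the upper half
    return src_lower, src_upper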


TABLE 1. Entropy, standard deviation and RMSE for the Fig. 2 (Lena) images

Image / Method                   Wavelet   Entropy   Std. deviation   RMSE
Reference image (Lena)             —       7.5784      52.3689          —
Horizontally blurred Lena 1        —       7.5035      48.2143        5.4776
Horizontally blurred Lena 2        —       7.5246      49.4620        4.9028
Pixel averaging                    —       5.6372      35.4814        8.3216
Wavelet-based mean-max rule       db1      7.4509      48.0642        5.3936
                                  sym1     7.4509      48.0462        5.3936
                                  coif1    7.4469      47.9500        5.3418
                                  bior3.3  7.4466      47.8966        5.3601
                                  dmey     7.4466      47.9093        5.3688
                                  haar     7.4509      48.0642        5.3936
Proposed method                   db1      7.6071      52.4130        3.4991
                                  sym1     7.6071      52.4130        3.4991
                                  coif1    7.5979      52.1534        3.3547
                                  bior3.3  7.6228      52.8679        3.8543
                                  dmey     7.5900      52.0523        3.1029
                                  haar     7.6071      52.4130        3.4991


Figure 2. Simulation results for the Lena images: (a) reference Lena image; (b) Lena image blurred on the lower horizontal half; (c) Lena image blurred on the upper half; (d) fused image by the pixel averaging method; (e) fused image by the wavelet-based mean-max algorithm; (f) fused image by the proposed method using 5-level wavelet decomposition.

  4. CONCLUSION

In this paper we presented a wavelet-based image fusion method and reported its simulation results and objective evaluation. In the fusion process we used fusion rules based on energy and variance, which effectively conserve the energy of the source images and avoid the loss of useful information. Compared with the most common algorithms, the proposed fusion method gives an improved visual appearance of the fused image as well as improved objective parameters such as entropy, standard deviation and RMSE.

  5. References

1. M. Heizmann, Image fusion tutorial, in IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Heidelberg, 2006.

2. M. I. Smith and J. P. Heather, Review of image fusion technology in 2005, Proc. SPIE, 2005, 5783: 29-45.

3. Yang Bo, Jing Zhong-liang, Zhao Hai-tao, Review of pixel-level image fusion, J. Shanghai Jiaotong Univ. (Sci.), 2010, 15(1): 6-12, DOI: 10.1007/s12204-010-7186-y.

4. R. Redondo, F. Sroubek, S. Fischer, G. Cristobal, Multifocus image fusion using the log-Gabor transform and a multisize windows technique, Information Fusion, vol. 10, no. 2, pp. 163-171, 2009.

5. Yanfen Guo, Mingyuan Xie, Ling Yang, An adaptive image fusion method based on local statistical feature of wavelet coefficients, 2009 IEEE, 978-1-4244-5273-6.

6. V. P. S. Naidu and J. R. Raol, Pixel-level image fusion using wavelets and principal component analysis, Defence Science Journal, vol. 58, no. 3, May 2008, pp. 338-352.

7. Jionghua Teng, Xue Wang, Jingzhou Zhang, Suhuan Wang, in International Congress on Image and Signal Processing (CISP 2010).

8. Tian Hui, Wang Binbin, Discussion and analysis of image fusion technology, in 2009 Second International Conference on Machine Vision.

9. Z. Zhang and R. S. Blum, A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application, Proceedings of the IEEE, vol. 87, no. 8, pp. 1315-1326, 1999.

10. Jianchao Zeng, Aya Sayedelahl, Tom Gilmore, Mohamed Chouikha, Review of image fusion algorithms for unconstrained outdoor scenes, in ICSP 2006 Proceedings.

11. Resources for research in image fusion [online], http://www.imagefusion.org//

12. A. Akerman, Pyramid techniques for multisensor fusion, Proc. SPIE, vol. 1828, 1992, pp. 124-131.


