Biomedical Image Fusion Using Wavelet And Curvelet Transform

DOI: 10.17577/IJERTV2IS50537




Kokare Swati1, Premanand Kadbe2

1ME II, 2Assistant Professor, VP COE Baramati

Abstract

Image fusion is an important research area in medical imaging. The use of Computed Tomography (CT) and Magnetic Resonance (MR) techniques in biomedical imaging has revolutionized the process of medical diagnosis in recent years. The fusion of magnetic resonance (MR) and computed tomography (CT) images is a very useful technique in various applications of medical imaging. The goal of image fusion (IF) is to integrate complementary multi-sensor and multi-view information into one new image whose information content cannot be achieved otherwise. In this paper, a Fast Discrete Curvelet Transform (FDCT) image fusion technique based on the wrapping algorithm has been implemented, analyzed and compared with a wavelet-based fusion technique. The wavelet-based image fusion method provides high quality of the spectral content in the fused image.

Curvelets exhibit very high directional sensitivity and are highly anisotropic. Therefore, the curvelet transform represents edges better than wavelets and is well suited for multi-scale edge enhancement. Since edges play a fundamental role in image understanding, one good way to enhance spatial resolution is to enhance the edges. The curvelet-based image fusion method provides richer information in the spatial and spectral domains simultaneously.

This paper presents a curvelet-based approach for the fusion of magnetic resonance (MR) and computed tomography (CT) images. The objective of the fusion of an MR image and a CT image of the same organ is to obtain a single image containing as much information as possible about that organ for diagnosis. Several attempts have been made at the fusion of MR and CT images using the wavelet transform. Since medical images contain several objects and curved shapes, it is expected that the curvelet transform will perform better in their fusion.

Keywords: Computed Tomography, Magnetic Resonance Imaging, Fast Discrete Curvelet Transform, Wavelet Transform, Wrapping

1. Introduction

Medical imaging is one of the most important sources of anatomical and functional information, which is indispensable for today's clinical research, diagnosis and treatment, and is an integral part of modern health care. Multimodality medical image fusion plays an important role in clinical applications, as it can provide more accurate information for physicians to diagnose diseases.

Image fusion is the process of merging two images of the same scene to form a single image with as much information as possible. Image fusion is important in many different image processing fields such as satellite imaging, remote sensing and medical imaging. The study of image fusion evolved to serve advances in satellite imaging and was then extended to the field of medical imaging. There are various methods available to implement image fusion, and they can basically be grouped into two categories. The first category is the spatial domain-based methods, which fuse the intensity values of the source images directly [8]. The other category is the transform domain-based methods, which fuse images in a certain frequency or time-frequency domain. Algorithms such as the intensity, hue and saturation (IHS) algorithm and the wavelet fusion algorithm [1, 2] have proved to be successful in satellite image fusion. The IHS algorithm belongs to the family of color image fusion algorithms [2]. The wavelet fusion algorithm has also succeeded in both satellite and medical image fusion applications. The basic limitation of the wavelet fusion algorithm lies in the fusion of curved shapes; thus, there is a need for another algorithm that can handle curved shapes efficiently, and the application of the curvelet transform to images with curved objects would result in better fusion efficiency. A few attempts at curvelet fusion have been made for satellite images, but few for medical images [3].

The main objective of medical imaging is to obtain a high resolution image with as many details as possible for the sake of diagnosis. There are several medical imaging techniques, such as the MR and the CT techniques, and each provides its own sophisticated characterization of the imaged organ. It is therefore expected that the fusion of the MR and CT images of the same organ will result in an integrated image with much more detail. Researchers have made a few attempts at the fusion of MR and CT images [1, 3], most of them directed towards the application of the wavelet transform for this purpose. Due to the limited ability of the wavelet transform to deal with images having curved shapes, the application of the curvelet transform to MR and CT image fusion is presented in this paper.

2. Existing Method

2.1 The Wavelet Fusion

The most common form of transform-domain image fusion algorithm is the wavelet fusion algorithm, due to its simplicity and its ability to preserve the time and frequency details of the images to be fused [4].

The 2-D DWT is an operation through which a 2-D signal is successively decomposed in a spatial multi-resolution domain by low-pass and high-pass FIR filters along each of the two dimensions. The four FIR filters, denoted as the highpass-highpass (HH), highpass-lowpass (HL), lowpass-highpass (LH) and lowpass-lowpass (LL) filters, produce, respectively, the HH, HL, LH and LL subband data of the decomposed signal at a given resolution level. The samples of the four subbands of the decomposed signal at each level are decimated by a factor of two in each of the two dimensions. For the operation at the first level of decomposition, the given 2-D signal is used as input, whereas for the operations of the succeeding levels of decomposition, the decimated LL subband signal from the previous resolution level is used as input. A schematic diagram of the wavelet fusion algorithm of two registered images I1(x1, x2) and I2(x1, x2) is depicted in Fig. 1. The two registered images undergo wavelet decomposition, so that at the output each image is decomposed into sub-bands. The coefficients of the images are fused according to a specific fusion scheme, and finally the fused image is reconstructed using the inverse transform. This can be represented by the following equation:

I(x1, x2) = W^(-1)( φ( W(I1(x1, x2)), W(I2(x1, x2)) ) )    (1)

where W, W^(-1) and φ are the wavelet transform operator, the inverse wavelet transform operator and the fusion rule, respectively. There are several wavelet fusion rules which can be used for the selection of the wavelet coefficients from the wavelet transforms of the images to be fused. The most frequently used rule is the maximum frequency rule, which selects the coefficients that have the maximum absolute values [3]. The wavelet transform concentrates on representing the image at multiple scales and is appropriate for representing linear edges. For curved edges, however, the accuracy of edge localization in the wavelet transform is low. So, there is a need for an alternative approach with a high accuracy of curve localization, such as the curvelet transform.
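To make equation (1) and the maximum-absolute-value rule concrete, a minimal sketch in Python is given below, assuming the NumPy and PyWavelets packages and two registered, equally sized grayscale CT and MR arrays; the wavelet name, number of levels and the averaging of the approximation sub-band are illustrative choices, not necessarily those used in this paper.

```python
import numpy as np
import pywt

def wavelet_fusion(img1, img2, wavelet="db4", levels=3):
    """Fuse two registered, equally sized grayscale images:
    I = W^-1( phi( W(I1), W(I2) ) ), with phi = average on the
    approximation sub-band and max-absolute rule on the details."""
    c1 = pywt.wavedec2(img1.astype(float), wavelet, level=levels)
    c2 = pywt.wavedec2(img2.astype(float), wavelet, level=levels)

    # Average the coarse approximation (LL) sub-band.
    fused = [(c1[0] + c2[0]) / 2.0]

    # For every detail sub-band (LH, HL, HH), keep the coefficient
    # with the larger absolute value (maximum frequency rule).
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d1, d2)))

    # For odd-sized inputs the reconstruction can be one pixel larger
    # than the input and may need cropping.
    return pywt.waverec2(fused, wavelet)
```

The approximation (LL) sub-band is averaged here; other selection rules, for example based on local energy, can be substituted without changing the overall scheme.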

3. Proposed Method

3.1 The Curvelet Transform

3.1.1 The Continuous-Time Curvelet Transform

We work in two dimensions, i.e., R², with spatial variable x, with ω a frequency-domain variable, and with r and θ polar coordinates in the frequency domain. The pair of windows W(r) and V(t) are called the radial window and the angular window, respectively. The frequency window Uj is defined in the Fourier domain by [7]

Uj(r, θ) = 2^(-3j/4) · W(2^(-j) r) · V(2^⌊j/2⌋ θ / 2π)    (2)

Figure 2 illustrates the induced tiling of the frequency plane and the spatial Cartesian grid associated with a given scale and orientation; the shaded area represents the polar wedge supported by Uj.

The curvelet, as a function of x = (x1, x2), at scale 2^(-j), orientation θl and position xk^(j,l) is defined as

φ(j,l,k)(x) = φj( Rθl (x − xk^(j,l)) )    (3)

where Rθ is the rotation by θ radians,

       [  cos θ   sin θ ]
Rθ  =  [ −sin θ   cos θ ]    (4)

The curvelet coefficient is simply the inner product between an element f of L²(R²) and a curvelet φ(j,l,k):

c(j, l, k) = ⟨f, φ(j,l,k)⟩ = ∫ over R² of f(x) · conj(φ(j,l,k)(x)) dx    (5)
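To make the frequency window of equation (2) concrete, the sketch below samples a curvelet-style wedge window on a discrete frequency grid. The raised-cosine windows stand in for the Meyer-type radial and angular windows W(r) and V(t) of [7], and the number of wedges is passed in directly rather than derived from the parabolic scaling rule, so this only approximates the construction; the helper names are illustrative.

```python
import numpy as np

def raised_cosine(t):
    """Smooth bump supported on [-1, 1]; a simple stand-in for the
    compactly supported Meyer-type windows W and V of [7]."""
    return np.where(np.abs(t) < 1.0, np.cos(np.pi * t / 2.0) ** 2, 0.0)

def frequency_window(shape, j, l, n_angles):
    """Sample an approximate curvelet frequency window U_j,l on an
    FFT-centred grid: a dyadic annulus around radius 2^j multiplied by
    an angular wedge of width 2*pi/n_angles around wedge index l."""
    ny, nx = shape
    wy = np.fft.fftshift(np.fft.fftfreq(ny)) * ny   # centred frequency grid
    wx = np.fft.fftshift(np.fft.fftfreq(nx)) * nx
    WX, WY = np.meshgrid(wx, wy)
    r = np.hypot(WX, WY)
    theta = np.arctan2(WY, WX)

    # Radial part: annulus around |omega| ~ 2^j, i.e. 2^-j * r ~ 1.
    radial = raised_cosine(np.log2(np.maximum(r, 1e-9)) - j)

    # Angular part: wedge centred on angle 2*pi*l/n_angles.
    theta_l = 2.0 * np.pi * l / n_angles
    dtheta = (theta - theta_l + np.pi) % (2.0 * np.pi) - np.pi
    angular = raised_cosine(dtheta / (np.pi / n_angles))

    return 2.0 ** (-3.0 * j / 4.0) * radial * angular

# Example: one wedge at scale j = 5, orientation l = 3, on a 256 x 256 grid.
U = frequency_window((256, 256), j=5, l=3, n_angles=16)
```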

3.1.2 Fast Discrete Curvelet Transform

Two implementations of the FDCT are proposed in [7]:

1. Unequally Spaced Fast Fourier Transforms (USFFT),

2. Wrapping of specially selected Fourier samples.

The implementation of the curvelet transform based on the wrapping of Fourier samples takes a 2-D image as input in the form of a Cartesian array f[m, n], where 0 ≤ m < M and 0 ≤ n < N, with M and N the dimensions of the array. The discrete curvelet coefficients are given by

cD(j, l, k) = Σ over 0 ≤ m < M, 0 ≤ n < N of f[m, n] · conj(φD(j,l,k)[m, n])    (6)

Each φD(j,l,k) is a digital curvelet waveform, where the superscript D stands for digital. This implementation applies an effective parabolic scaling law to the sub-bands in the frequency domain, so that curved edges within an image are captured in a more effective way. As mentioned earlier, the wrapping-based curvelet transform is a multiscale pyramid which consists of several sub-bands at different scales, with different orientations and positions in the frequency domain. At high frequency levels the curvelets are so fine that they look like needle-shaped elements, whereas at low frequency levels they are non-directional coarse elements.

Figure 3 illustrates the whole image represented in the spectral domain in the form of a rectangular frequency tiling, obtained by combining the frequency responses of the curvelets at all scales and orientations; the curvelets appear as needle-like elements at the higher scales.

4. Methodology

4.1 Curvelet Transform as a Multiscale Model

As with the wavelet transform, the curvelet transform is a multi-resolution transform with frame elements indexed by scale and location parameters. Unlike the wavelet transform, however, the curvelet transform has directional parameters, and the curvelet pyramid contains elements with a high degree of directional specificity. In addition, the curvelet transform is based on a special anisotropic scaling principle that is quite different from the isotropic scaling of wavelets. The elements obey a special scaling law, in which the length and the width of the support of a frame element are linked by the relation width ≈ length² [7]. The curvelet transform therefore represents edges better than wavelets.
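As a small numerical illustration of this parabolic scaling law (added here as an example, not taken from the paper), the snippet below lists the nominal support length 2^(-j/2) and width 2^(-j) of a curvelet at a few scales, showing that width = length² at every scale.

```python
# Parabolic scaling law: at scale 2^-j a curvelet's support is roughly
# 2^(-j/2) long and 2^(-j) wide, so width = length^2 at every scale.
for j in range(2, 8):
    length = 2.0 ** (-j / 2.0)
    width = 2.0 ** (-j)
    print(f"j={j}: length={length:.4f}  width={width:.4f}  length^2={length**2:.4f}")
```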

It can be seen from Figure 3 that the curvelet becomes finer and smaller in the spatial domain and shows more sensitivity to curved edges as the resolution level is increased, thus allowing the curves in an image to be captured effectively, so that curved singularities can be well approximated with few coefficients. In order to achieve a higher level of efficiency, the curvelet transform is usually implemented in the frequency domain. This means that a 2D FFT is applied to the image. For each scale and orientation, a product with the Uj,l wedge is obtained; the result is then wrapped around the origin, and a 2D IFFT is applied, resulting in the discrete curvelet coefficients [7]:

Curvelet coefficients = IFFT{ FFT(Curvelet) × FFT(Image) }    (7)

The difficulty here is that the trapezoidal wedge does not fit into a rectangle of size 2^j × 2^(j/2) aligned with the axes of the frequency plane, within which the 2D IFFT could be applied to collect the curvelet coefficients. The wedge wrapping procedure proposed in [7] uses a parallelogram with sides 2^j and 2^(j/2) to support the wedge data. The wrapping is done by periodic tiling of the spectrum inside the wedge and then collecting the rectangular coefficient area at the centre. The centre rectangle of size 2^j × 2^(j/2) successfully collects all the information in that parallelogram. Figure 4 illustrates the process of wrapping a wedge where the angle is in the range (π/4, 3π/4); the rectangle has the same width and length as the parallelogram and is centred at the origin.

The steps of the wrapping-based FDCT algorithm [7] are as follows.

Step 1. Apply the 2D FFT to the image to obtain the Fourier samples

f̂[m, n],  −M/2 ≤ m < M/2,  −N/2 ≤ n < N/2    (8)

Step 2. For each scale j and angle l, form the product

Ũj,l[m, n] · f̂[m, n]    (9)

Step 3. Wrap this product around the origin to obtain

f̃j,l[m, n] = W(Ũj,l f̂)[m, n]    (10)

where W here denotes the wrapping operation, and the range for m, n and θ is now 0 ≤ m < 2^j, 0 ≤ n < 2^(j/2), and −π/4 ≤ θ < π/4.

Step 4. Apply the inverse 2D FFT to each f̃j,l, hence collecting the discrete coefficients cD(j, l, k). A simplified sketch of these steps for a single wedge is given below.
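The sketch below shows Steps 1-4 for one (scale, angle) wedge. It assumes a precomputed frequency-domain window U (for instance from the earlier frequency_window sketch), uses a simple modular fold for the wrapping step, and the function names are illustrative; a production implementation such as CurveLab [7] handles the exact windows, all wedges and the trapezoidal geometry.

```python
import numpy as np

def wrap_to_rectangle(windowed_fft, rect_shape):
    """Step 3: wrap (periodize) the windowed Fourier data into a rectangle
    by folding indices modulo the rectangle size. With an ideal wedge-
    supported window the folded copies do not overlap; with an approximate
    window they may overlap slightly."""
    p1, p2 = rect_shape
    wrapped = np.zeros(rect_shape, dtype=windowed_fft.dtype)
    n1, n2 = windowed_fft.shape
    for i in range(n1):
        for k in range(n2):
            wrapped[i % p1, k % p2] += windowed_fft[i, k]
    return wrapped

def single_wedge_coefficients(image, U, rect_shape):
    """Steps 1-4 for one (scale, angle) wedge.
    U is a precomputed frequency-domain window on the centred FFT grid;
    rect_shape is roughly (2^j, 2^(j/2)) as described in the text."""
    # Step 1: 2D FFT of the image, shifted so the zero frequency is centred.
    F = np.fft.fftshift(np.fft.fft2(image))
    # Step 2: multiply by the frequency window U_j,l.
    windowed = U * F
    # Step 3: wrap the windowed data around the origin.
    wrapped = wrap_to_rectangle(windowed, rect_shape)
    # Step 4: inverse 2D FFT gives the discrete coefficients for this wedge.
    return np.fft.ifft2(np.fft.ifftshift(wrapped))
```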

4.2 Image Fusion Algorithm Based on Wavelet and Curvelet Transforms

Images can be fused at three levels, namely pixel-level fusion, feature-level fusion and decision-level fusion. In pixel-based image fusion, the operations are performed on the pixels directly, and the fused image is then obtained. A multiscale decomposition method is used in this paper, so that as much information as possible is kept from the source images. Because the wavelet transform uses a block basis to approximate a C² singularity, it is isotropic and the geometry of the singularity is ignored. The curvelet transform uses a wedge basis to approximate a C² singularity; compared with the wavelet, it has angular directivity, so anisotropy is expressed. When the direction of the approximating basis matches the geometry of the singularity, the curvelet coefficients become larger.
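Written in the curvelet domain, the fusion scheme takes the same form as equation (1). The sketch below assumes hypothetical fdct2 and ifdct2 functions (stand-ins for a real forward and inverse FDCT, not defined here) that return and accept a nested list of coefficient arrays indexed by scale and wedge, and it keeps, at every position, the coefficient with the larger absolute value, in line with the observation that curvelet coefficients are larger where the wedge direction matches the local geometry.

```python
import numpy as np

def fuse_coefficients(coeffs1, coeffs2):
    """Merge two nested lists of coefficient arrays (coeffs[scale][wedge])
    by keeping, at every position, the coefficient whose absolute value
    is larger (maximum-absolute-value rule)."""
    fused = []
    for scale1, scale2 in zip(coeffs1, coeffs2):
        fused.append([np.where(np.abs(a) >= np.abs(b), a, b)
                      for a, b in zip(scale1, scale2)])
    return fused

# Hypothetical usage, assuming fdct2/ifdct2 wrap a real curvelet library:
#   c_ct  = fdct2(ct_image)                       # forward FDCT
#   c_mr  = fdct2(mr_image)
#   fused = ifdct2(fuse_coefficients(c_ct, c_mr)) # inverse FDCT
```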

5. Results and Discussion

5.1 Visual Analysis

For the visual evaluation, the following criteria are considered: natural appearance, brilliance and contrast, presence of complementary features, enhancement of common features, etc.

5.2 Quantitative Analysis

Apart from the visual appearance, a quantitative analysis is carried out on the fused images. The quantitative criteria include entropy, standard deviation, PSNR, mean gradient and the correlation coefficient; each has its importance in evaluating image quality.

5.2.1 Entropy

The entropy of an image is a measure of its information content. For a better fused image, the entropy should have a larger value:

H = − Σ (g = 0 to 255) p(g) log2 p(g)    (11)

5.2.2 Standard deviation

The standard deviation (SD), which is the square root of the variance, reflects the spread in the data. Thus, a high contrast image will have a large variance, and a low contrast image will have a low variance.

The remaining analysis measures are the correlation and the mean gradient. The correlation measures the similarity between an input source image and the fused image, while the mean gradient (MG) indicates the directional intensity change in the image.
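A sketch of how these measures can be computed with NumPy is given below; the paper does not state the exact definitions used for PSNR and mean gradient, so common forms (8-bit peak value, mean gradient magnitude) are assumed here.

```python
import numpy as np

def entropy(img, bins=256):
    """H = -sum p(g) log2 p(g) over the grey-level histogram."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def std_deviation(img):
    return float(np.std(img))

def psnr(reference, fused, peak=255.0):
    """Assumes an 8-bit peak value; PSNR = 10 log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(float) - fused.astype(float)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

def mean_gradient(img):
    """Average gradient magnitude, one common form of the MG measure."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def correlation(a, b):
    """Correlation coefficient between a source image and the fused image."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
```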

Table 1. Quantitative comparison of wavelet- and curvelet-based fusion

Transform Type      |  MG  | Entropy | PSNR  | Standard Deviation | Correlation
--------------------|------|---------|-------|--------------------|------------
Wavelet Transform   | 4.55 |  5.31   | 21.30 |       29.52        |   0.8199
Curvelet Transform  | 6.98 |  6.29   | 25.72 |       53.15        |   0.8566

6. Conclusion

The fusion scheme explained above is based on the DWT and the FDCT. The quantitative analysis of the fused image is carried out in terms of entropy, standard deviation, PSNR, mean gradient and correlation coefficient. As the level of decomposition is increased, the information content increases at the cost of increased computation. The fused image provides complementary features which make diagnosis and the detection of disease easier. Using the curvelet transform, an improvement in the above results is obtained.

7. References


1. Smt. G. Mamatha and L. Gayatri, "An image fusion using wavelet and curvelet transforms," Global Journal of Advanced Engineering Technologies, Vol. 1, Issue 2, 2012.

2. Myungjin Choi, Rae Young Kim, Myeong-Ryong Nam and Hong Oh Kim, "Fusion of multispectral and panchromatic satellite images using the curvelet transform," IEEE Geoscience and Remote Sensing Letters, Vol. 2, No. 2, April 2005.

3. F. E. Ali, I. M. El-Dokany, A. A. Saad and F. E. Abd El-Samie, "Curvelet fusion of MR and CT images," Progress In Electromagnetics Research C, Vol. 3, pp. 215-224, 2008.

4. Yong Yang, Dong Sun Park, Shuying Huang, Zhijun Fang and Zhengyou Wang, "Wavelet based approach for fusing computed tomography and magnetic resonance images."

5. S. Bharath and E. S. Karthik Kumar, "Implementation of image fusion algorithm using 2G curvelet transforms," International Conference on Computing and Control Engineering (ICCCE 2012), 12-13 April 2012.

6. Arash Golibagh Mahyari and Mehran Yazdi, "A novel image fusion method using curvelet transform based on linear dependency test," International Conference on Digital Image Processing, July 2005, revised March 2006.

7. Emmanuel Candès, Laurent Demanet, David Donoho and Lexing Ying, "Fast Discrete Curvelet Transforms."

8. S. T. Li, J. T. Kwok and Y. N. Wang, "Combination of images with diverse focuses using the spatial frequency," Information Fusion, Vol. 2, No. 3, pp. 169-176, Sep. 2001.
