Frequency Domain Image Fusion using Discrete Wavelet Transform

DOI : 10.17577/IJERTCONV5IS06032


Shruthi G. K.
2nd Sem M.Tech, Department of CS&E
Adichunchanagiri Institute of Technology, Karnataka, India

Dr. Pushpa Ravikumar
Professor and Head, Department of CS&E
Adichunchanagiri Institute of Technology, Karnataka, India

Abstract— The fusion of images is the process of joining two or more images into a single image while keeping the important features of each input. Many fusion methods can produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images, ranging from simple pixel averaging to more complex methods such as the wavelet transform. A systematic scheme for merging 2D gray-level images of different resolutions based on the wavelet transform is implemented. The basic idea is to divide the images into sub-images using the DWT, guided by intensity changes and the criterion of capturing important patterns; these sub-images are finally reconstructed into a single image with richer information.

Keywords— merging of images, wavelet transform, combined images, wavelet-based fusion, DWT (discrete wavelet transform)

  1. INTRODUCTION

    Image processing is a set of techniques that improve the quality of raw images received from different devices, and many such techniques have been developed. The term digital image usually refers to an array of real numbers represented by a finite number of bits, and a key advantage of digital image processing is that the original data are preserved. An image is usually represented as a function of two variables and stored as a matrix with M rows and N columns; each row-column intersection holds one pixel value. Image merging is the process by which two or more images are joined into a single image that retains the major features of each original image, so the result is more informative than any of the inputs. The first step of image merging is image registration: the image to which the others are aligned is called the reference image, and the image that has to be matched to it is called the sensed image.

    Image registration is an essential preliminary operation for image merging. Fusion techniques range from very simple to complex methods, and one way to distinguish approaches is whether the images are merged in the spatial domain or first transformed into another domain. Common domains are the spatial, frequency, time and temporal domains. The spatial domain is used mainly for image enhancement, and spatial-domain techniques are based on direct manipulation of pixel values.

    Frequency-domain methods transform the image to its frequency representation, process it there, and compute the inverse transform back to the spatial domain. The frequency content is divided into high and low frequencies: high frequencies correspond to fine detail such as edges and texture, while low frequencies correspond to smooth, large-scale features. The Fourier transform is one such technique. In image processing, the wavelet transform is used to separate the image into different frequency components and is often better suited than the Fourier transform, because wavelets are localized oscillations with varying frequencies over a limited extent. Wavelet performance can be studied through time-frequency analysis and through multiresolution analysis. The preliminarily processed images are divided into sub-images using the forward wavelet transform, which converts a signal into a series of coefficients based on different wavelets; for images, the time domain is replaced by the space domain, which makes the transform well suited to multi-resolution signals. The sub-images are brought back to the original image using the inverse wavelet transform, and the reconstructed image carries more useful information than the initial inputs.
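    To make the frequency-domain idea concrete, the following minimal sketch (not taken from the paper; the function name, the circular cut-off radius and the use of NumPy are assumptions) separates a gray-scale image into a low-frequency part carrying the large-scale features and a high-frequency part carrying edges and fine detail:

        import numpy as np

        def split_frequencies(img, radius=20):
            # Shift the 2-D FFT so that low frequencies sit at the centre.
            F = np.fft.fftshift(np.fft.fft2(img))
            rows, cols = img.shape
            y, x = np.ogrid[:rows, :cols]
            dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
            low_mask = dist <= radius                  # keep large-scale features
            low = np.fft.ifft2(np.fft.ifftshift(F * low_mask)).real
            high = img - low                           # edges and fine texture remain
            return low, high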

    Wavelets have good localization properties. There are different types of imaging: multi-focus, multi-modal and multi-sensor. In multi-focus imaging, one or more objects are sharp in one image while other objects are sharper in another image. Multi-modal imaging means obtaining images from different devices, such as CT (computed tomography) and MRI (magnetic resonance imaging). Multi-sensor imaging means obtaining images from different sensors, for example satellites.

  2. REVIEW OF LITERATURE

    Maitreyi Abhyankar et al. 2016 [1] discuss image fusion as the process of combining two or more images to obtain a single, more informative image. They fuse multi-sensor images using the Sobel operator, with the aim of reducing time complexity; the method is evaluated with several parameters, and a genetic algorithm is used to compute the optimum weights required to fuse two images.

    Hui Li et al. 1995 [2] describe an approach in which the two input images are decomposed into wavelet coefficients, the coefficients are combined with a suitable rule, and the new image is obtained by taking the inverse wavelet transform of the fused wavelet coefficients.

    Burt and Kolczynski 1993 [3] discuss merging images captured by different sensors into a composite image that is far more informative. They propose a general method and argue that fusion is a powerful tool with a wide variety of uses in image processing and computer vision.

    Zhu Shu-long 2002 [4] discusses the important role that merging multi-source images, nowadays called image fusion, plays in remote-sensing image processing. The input image is divided into sub-images at different frequencies, and the sub-images are combined to obtain a result carrying far more information.

    Paul Hill et al. 2002 [5] discuss fusion as an important technique used in fields such as remote sensing, robotics and medical applications; the technique used is the Shift-Invariant and Directionally Selective Dual-Tree Complex Wavelet Transform.

    L. G. Brown 1992 [6] surveys registration, the initial step in image processing, which is mainly done to align the two images chosen for fusion: their format and size are checked and, if they differ, steps are taken to bring them to the same format.

    S. Banerjee et al. 1995 [7] discuss registration carried out prior to fusion using a point-landmark-based registration method, where registration is based on a canonical frame of reference.

    Andre Collignon et al. 1994 [8] discuss the registration of 3D multimodality medical image data; the result is very accurate and is obtained automatically without any pre-segmentation.

  3. METHODOLOGY

    A. REGISTRATION

    To perform merging, the basic requirement is to bring the common features into one coordinate system. In general, different devices acquire scene features in different ways, and multi-focus, multi-sensor and multi-resolution images can be combined only after this preliminary processing. Registration determines the correspondence between points of the same scene; some methods are area based and some are point based, and point-based methods are simple to use. To maximize the similarity between corresponding points, an affine transformation or geometric transformations such as translation, rotation and scaling are applied, and the resulting transformation function is used to map the corresponding points. The registration procedure loads the images, whose sizes must be known, and converts them from RGB to gray scale, since gray-scale images are used. The FFT of both images is then taken, because the images need to be transformed from the spatial domain to the frequency domain. The transformation can be done using an affine transformation; the image is obtained back after a rotation by theta, the FFT of these images is taken again, and the original and registered images are joined.
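    The paper describes this step only in words; as an illustrative stand-in (the function name, the use of NumPy and the restriction to pure translation are assumptions, and the rotation/affine estimation mentioned above is not shown), FFT-based phase correlation can estimate the offset between the reference and sensed gray-scale images:

        import numpy as np

        def phase_correlation_shift(reference, sensed):
            # The normalised cross-power spectrum peaks at the translation offset.
            F_ref = np.fft.fft2(reference)
            F_sen = np.fft.fft2(sensed)
            cross = F_ref * np.conj(F_sen)
            cross /= np.abs(cross) + 1e-12             # keep only the phase
            corr = np.fft.ifft2(cross).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # Map shifts larger than half the image size to negative offsets.
            if dy > reference.shape[0] // 2:
                dy -= reference.shape[0]
            if dx > reference.shape[1] // 2:
                dx -= reference.shape[1]
            return dy, dx

    The sensed image can then be shifted by the estimated (dy, dx), for example with np.roll, before the two images are joined for fusion.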

    B. DECOMPOSITION

    The DWT decomposes the digital image by successive low-pass and high-pass filtering; this process is called tree decomposition. The idea behind wavelet-based image merging is to fuse the wavelet decompositions of the two original images by applying merging rules to the approximation coefficients and the detail coefficients. The Discrete Wavelet Transform (DWT) is widely used in applications involving image and video compression, pattern recognition, biomedical imaging and more, and the inverse DWT (IDWT) plays an important role in the merging method. The registered images produced by the previous step are taken for the fusion process and aligned on a pixel-by-pixel basis. The wavelet transform processes a registered image with two digital filters, H0 and H1, which produce the low-pass and high-pass wavelet coefficients respectively. Step 1 is scaling, the process of altering the size of the digital image, which trades off smoothness against sharpness; for bitmap graphics, reducing the image size makes it appear softer. The wavelet filter is built from a prototype function and suppresses the low-frequency part of the signal. The two-dimensional convolution of matrices A and B is computed, and then the DWT is applied.
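    A minimal sketch of this single-level decomposition, using the PyWavelets library as a stand-in for the filter pair H0/H1 (the function name and the choice of the Haar wavelet are assumptions, not the authors' code), is:

        import pywt

        def decompose(img, wavelet="haar"):
            # One level of 2-D DWT: low-pass and high-pass filtering along rows
            # and columns with downsampling, giving four sub-images.
            LL, (LH, HL, HH) = pywt.dwt2(img, wavelet)
            return LL, LH, HL, HH                      # approximation + detail sub-bands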

    C. RECONSTRUCTION

    The IDWT is the exact reverse of the DWT. In the inverse DWT, the high- and low-frequency components are upsampled and then filtered: the HH, HL, LH and LL sub-bands are first upsampled, the filtering operation is performed, and the sub-images are summed to give the reconstructed image. The DWT method of image merging produces a natural result even when the images to be joined come from different devices. The images are loaded, property values are set for each graphic object, and vector values are assigned to each graphic object. Scaling resizes the image; the wavelet filter filters the signal and suppresses its low-frequency part, after which the scaling coefficients are obtained. The signal is then made periodic with regular intervals, the wavelet coefficients are obtained, and upsampling is applied to increase the sampling rate. Finally, the matrix operations are applied using convolution, the results are joined with a combine operation, and the IDWT is applied.
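    A hedged sketch of the fuse-and-reconstruct step, again using PyWavelets and NumPy (the averaging and maximum-magnitude fusion rules are common choices assumed here, not necessarily the exact rules used by the authors), is:

        import numpy as np
        import pywt

        def fuse_and_reconstruct(img_a, img_b, wavelet="haar"):
            # Decompose both registered images into approximation and detail sub-bands.
            cA1, details1 = pywt.dwt2(img_a, wavelet)
            cA2, details2 = pywt.dwt2(img_b, wavelet)
            # Assumed fusion rule: average the approximations, keep the
            # larger-magnitude detail coefficient at each position.
            cA = 0.5 * (cA1 + cA2)
            pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
            details = tuple(pick(d1, d2) for d1, d2 in zip(details1, details2))
            # The inverse DWT upsamples and filters the sub-bands back to image size.
            return pywt.idwt2((cA, details), wavelet)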

    Figure 1: Architecture of image fusion (inputs: MRI of the brain and CT scan of the brain).

  4. RESULTS AND ANALYSIS

    Figure 2: Snapshot for registration module

    The snapshot above shows the registration module, which brings the images into the same coordinate system; the main aim here is to detect a brain tumour. The brain can be viewed in the sagittal, coronal and axial planes. The first image is registered for features such as white matter, the second for features such as grey matter, and the third is the merged image containing both white-matter and grey-matter features.

    Figure 3: Snapshot for decomposition

    The snapshot above shows that the first image, which contains white matter, is divided into a number of sub-images using the db and Haar wavelets of the discrete wavelet transform; the second image, which contains grey matter, is likewise decomposed into sub-images.

    Figure 4: Level of decomposition

    The snapshot above shows the level up to which decomposition is carried out, which can be read from the wavelet decomposition tree. Decomposition can be performed up to N levels; here up to 8 levels are possible, and the wavelet coefficients of the corresponding tree nodes are taken.
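    As a sketch of what such a decomposition tree holds (PyWavelets is assumed; the level, the wavelet and the 256x256 random array standing in for a registered brain slice are all illustrative choices):

        import numpy as np
        import pywt

        img = np.random.rand(256, 256)                 # stand-in for a registered image
        coeffs = pywt.wavedec2(img, "db1", level=3)    # N-level tree, here N = 3
        cA3 = coeffs[0]                                # coarsest approximation (tree root)
        cH3, cV3, cD3 = coeffs[1]                      # detail sub-bands at level 3
        # coeffs[2] and coeffs[3] hold the level-2 and level-1 detail triples;
        # pywt.waverec2(coeffs, "db1") rebuilds the image from the tree.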

    Figure 5: Reconstruction process

    The snapshot above represents the reconstruction process, in which the two decomposed images are merged into a single image using the IDWT function. The reconstructed image is more informative than the original images.

  5. CONCLUSION AND FUTURE WORK

In this paper we have presented a method for combining two images into a single image that carries far more information. Fusion is performed in the frequency domain so that all edge information is retained and the work is carried out on intensity values rather than raw pixel values. Using the DWT, the images are divided into their wavelet sub-bands and the fusion process is performed; the final result is obtained by applying the inverse DWT.

REFERENCES

      1. Maitreyi Abhyankar, Arti Khaparde, Vaidehi Deshmukh, "Spatial Domain Image Fusion Using Superimposition", Okayama, Japan, 2016.

      2. H. Li, B. S. Manjunath, S. K. Mitra, "Multisensor image fusion using the wavelet transform", GMIP: Graphical Models and Image Processing, 1995.

      3. P. J. Burt and R. J. Kolczynski, "Enhanced image capture through fusion", Proceedings of the 4th International Conference on Computer Vision, 1993.

      4. Zhu Shu-long, "Image Fusion Using Wavelet Transform", Symposium on Geospatial Theory, Processing and Applications, Ottawa, 2002.

      5. Paul Hill, Nishan Canagarajah and Dave Bull, "Image Fusion using Complex Wavelets", BMVC 2002.

      6. L. G. Brown, "A survey of image registration techniques", ACM Computing Surveys 24, 1992.

      7. S. Banerjee, D. P. Mukherjee, D. Dutta Majumdar, "Point landmarks for the registration of CT and MR images", Pattern Recognition Letters, 1995.

      8. A. Collignon, D. Vandermeulen, P. Suetens, G. Marchal, "Registration of 3D multimodality medical imaging using surfaces and point landmarks", Pattern Recognition, 1994.
