- Open Access
- Authors : Thara T D, Joyal Ulahanan
- Paper ID : IJERTV3IS090445
- Volume & Issue : Volume 03, Issue 09 (September 2014)
- Published (First Online): 19-09-2014
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Time Efficient Method for Image Fusion using Box Filter
Thara T D
M.Tech Scholar, Dept. of Computer Science, ICET Muvattupuzha

Joyal Ulahanan
Assistant Professor, Dept. of IT, ICET Muvattupuzha
Abstract: The proposed system is a method to obtain a high quality image from a number of low quality images using image fusion. In this paper we concentrate in particular on inputs from medical databases and from satellite sensors, although the method can be used to merge images of any type. The system decomposes each image into two layers, a base layer and a texture layer, and operates on them to obtain the fused image. It combines several techniques, namely saliency map generation and weight map construction, and finally applies a box filter to obtain a fused image that contains the better details of the input images. The system improves on the existing guided filter based method in terms of speed and accuracy.

Keywords: Image Fusion; Two Scale Decomposition; Guided Filter
INTRODUCTION
An image, for any purpose, should be such that the necessary information can be extracted from it without discrepancy. A number of image processing methods exist for obtaining an image that is free from noise and suitable both for visual perception and for computer processing. Here we rely on the observation that a single image of a scene from a single sensor does not convey all of the available information about that scene. We therefore combine different images of the same scene, each carrying different information. This is the concept of image fusion: a method that combines a number of input images so that the resulting image contains details that were absent from the individual inputs. Image fusion merges the details of the individual input images through a series of mathematical operations so that the output is a good quality image; it is a branch of image processing intended to produce good quality images from a set of low quality images. The objective of an image fusion algorithm is to retain the complementary information of the source images while avoiding the introduction of artifacts.
Our method aims to obtain a high quality image from a number of low quality images using image fusion. A number of existing methods address image fusion; the proposed method improves on image fusion with guided filtering in terms of speed and accuracy.
Image fusion has been used in defense applications for situation awareness, surveillance, target tracking, intelligence gathering, and person authentication. It has also been used extensively in remote sensing for the interpretation and classification of aerial and satellite images.
In this paper, we aim to use our method to aid medical imaging by fusing images resulting from magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and single photon emission computed tomography (SPECT). Here the input images are fused in order to obtain informative images for analysis.
RELATED WORK
All imaging applications that require analysis of two or more images of a scene can benefit from image fusion. Reducing redundancy and emphasizing relevant information not only improves machine processing of images, it also facilitates visual examination and interpretation. In the literature, image fusion algorithms are usually classified as pixel level, feature level, or symbolic level image fusion, each of which works on a different domain of the image's properties.
Work by Diego A. Socolinsky and Lawrence B. Wolff [2] provides a method for understanding multispectral and multi sensor imagery based on first-order contrast information. The method offers a way to convert a multiband image to grey scale while preserving the contrast information. In order to obtain a high quality image, Rui Shen, Irene Cheng, Jianbo Shi, and Anup Basu [3] put forward a method to fuse multi exposure images. This method preserves an optimal balance between two quality measures, i.e., local contrast and color consistency, and produces high quality images at low computational cost. Fusion using Empirical Mode Decomposition (EMD) [4] maintains the uniqueness of the scales after multi scale decomposition and is used to obtain an all-in-focus image from two or more multi focus images. From the perspective of fusion, the features of the observed images to be fused can be broadly categorized into three classes [4]: low level fusion, intermediate level fusion, and high level fusion. Pixel level fusion is an example of low level fusion, in which the fused pixel is derived from a set of pixels in the various inputs. The main advantage of pixel level fusion is that the original measured quantities are directly involved in the fusion process [6].
Some fusion methods incorporate a shift invariant extension of the discrete wavelet transform. It has been found that wavelet based fusion techniques outperform the standard fusion techniques in spatial and spectral quality, especially in minimizing color distortion. Other methods combine wavelet based techniques with standard PCA based or intensity-hue-saturation (IHS) transform based fusion; these show better performance, but complexity and cost remain a problem (Shih-Gu Huang, "Wavelet for Image Fusion").
Another fusion method is based on the quaternion curvelet transform. This method was aimed at removing image blur, which is a problem in many image fusion algorithms.
METHOD
The method consists of a number of steps that process the input images to build the fused image. A particular mathematical operation is applied at each step.
Decomposition of the source image into base layer and texture layer
For efficient fusion, each image is first decomposed into two independent layers; the corresponding operations are performed on these layers, and the layers are then fused. In every multi scale decomposition technique, the base layer contains the large scale variations while the texture (or detail) layer contains the small scale variations and hence presents the detailed attributes of the image. In this paper we use a two scale decomposition method based on a Gaussian filter [11]. Each of the input images is decomposed to form a base layer and a texture layer. The Gaussian filter effectively extracts the base layer, and by subtracting the base layer from the original image we obtain the detail layer. Let I be the input image, let b denote the base layer and t the texture layer, and let (i, j) be the position of any pixel in the image. Then an image can be represented as
I(i, j) = b(i, j) + t(i, j) (1)
Algorithm for two scale decomposition of an image using a Gaussian filter with smoothing parameter σ:

DecomposingGau(I, σ)
    b(i, j) = Gaussian_filter(I, σ)
    t(i, j) = I(i, j) - b(i, j)
    return b, t
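As an illustration, the following Python sketch performs the two scale decomposition described above using a Gaussian filter from SciPy. The parameter value sigma=5.0 is an assumption chosen for illustration, not a value prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose_gaussian(image, sigma=5.0):
    """Split an image into a base layer (low frequencies) and a
    texture layer (what remains after subtracting the base layer)."""
    image = image.astype(np.float64)
    base = gaussian_filter(image, sigma=sigma)   # large scale structure
    texture = image - base                       # small scale detail
    return base, texture
```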
Saliency map generation
The next step after decomposing the input images into base and texture layers is saliency map generation. Its purpose is to identify the important areas of an image for further processing. In image fusion we have to merge images of low quality into an image of higher quality, so we must consider the important details of each image while avoiding repeated features. Saliency map generation therefore builds a map of prioritized areas of an image, which eases the subsequent processing. For saliency map generation we use a combination of a Laplacian filter and a Gaussian filter.
Hn = In * L (2)
where In is the n-th input image and L is a Laplacian filter constructed with a suitable window size, so that Hn is the high frequency image. After the Laplacian filtering, the next step is to apply a Gaussian filter to the result:
Sn = |Hn| * g(r, σ) (3)
where the parameters r and σ of the Gaussian g can take values of approximately 5. The saliency map thus obtained contains the prioritized areas of the image.
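A minimal Python sketch of this saliency measure is given below. The 3x3 Laplacian kernel and the value sigma=5.0 are assumptions based on the description above, not parameters fixed by the paper.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

LAPLACIAN_3x3 = np.array([[0,  1, 0],
                          [1, -4, 1],
                          [0,  1, 0]], dtype=np.float64)

def saliency_map(image, sigma=5.0):
    """High-pass filter the image with a Laplacian kernel,
    then smooth the absolute response with a Gaussian filter."""
    high_freq = convolve(image.astype(np.float64), LAPLACIAN_3x3)
    return gaussian_filter(np.abs(high_freq), sigma=sigma)
```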
Weight map generation
A weight map is generated from the saliency maps. It is constructed by comparing the values of corresponding pixels across the input images and selecting the better pixel among them, i.e., the pixel with the higher saliency value.
The weight of a particular pixel (i, j) is obtained by finding which input image attains the maximum saliency value at that pixel.
Combining the weight map and the input image.
Pn(i, j) = 1 if Sn(i, j) = max(S1(i, j), S2(i, j), ..., SN(i, j)), and 0 otherwise (4)
where S1, S2, ..., SN are the saliency values of the pixel (i, j) in the N input images. In this way, weight maps for both the base layer and the texture layer are calculated.
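The sketch below constructs such binary weight maps from a stack of saliency maps. Resolving maxima with an argmax (so that ties go to the first image) is an implementation assumption.

```python
import numpy as np

def weight_maps(saliency_stack):
    """saliency_stack: array of shape (N, H, W), one saliency map per input.
    Returns binary weight maps of the same shape: a pixel's weight is 1 in
    the image whose saliency is maximal at that position, 0 elsewhere."""
    winners = np.argmax(saliency_stack, axis=0)          # (H, W) index map
    n = saliency_stack.shape[0]
    return (np.arange(n)[:, None, None] == winners).astype(np.float64)
```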
Fusion
We perform guided filtering based fusion of the base layer and the texture layer, implemented using a box filter in order to attain lower time complexity.
Box filters are used to speed up computationally intensive image processing applications by reducing the time needed to obtain the output. A box filter replaces each pixel with the average of the pixels inside a box shaped window around it.
The equation for box filtering an image I with a window of radius r is

Ibox(i, j) = (1 / (2r + 1)^2) Σ_{k=-r..r} Σ_{l=-r..r} I(i + k, j + l) (5)
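To illustrate why box filtering is time efficient, here is a sketch of a box filter implemented with an integral image (summed-area table), so that the cost per pixel is constant regardless of the window radius. The border handling (clamping the window at the image edges) is an implementation assumption.

```python
import numpy as np

def box_filter(image, r):
    """Mean of each (2r+1) x (2r+1) window, computed in O(1) per pixel
    via a summed-area table. Windows are clamped at the borders."""
    img = image.astype(np.float64)
    h, w = img.shape
    # Summed-area table with a leading row/column of zeros.
    sat = np.zeros((h + 1, w + 1))
    sat[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)

    ys, xs = np.arange(h), np.arange(w)
    y0 = np.clip(ys - r, 0, h - 1)[:, None]
    y1 = np.clip(ys + r, 0, h - 1)[:, None] + 1
    x0 = np.clip(xs - r, 0, w - 1)[None, :]
    x1 = np.clip(xs + r, 0, w - 1)[None, :] + 1

    # Window sum from four summed-area table lookups.
    window_sum = sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
    window_area = (y1 - y0) * (x1 - x0)
    return window_sum / window_area
```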
Guided image filtering is performed on each weight map Pn, with the corresponding source image In serving as the guidance image:
Wn^B = G(r1, ε1)(Pn, In) (6)

Wn^T = G(r2, ε2)(Pn, In) (7)

where G(r, ε) denotes guided filtering with window radius r and regularization parameter ε, and Wn^B and Wn^T are the refined weight maps for the base and texture layers respectively.
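Below is a sketch of a grayscale guided filter built entirely from mean (box) filters, following the standard formulation of He et al. [8]; it reuses the box_filter function from the previous sketch. The commented usage at the end shows how it would refine the weight maps as in (6) and (7); the parameter names r1, eps1, r2, eps2 are assumed values for illustration.

```python
import numpy as np

def guided_filter(guide, src, r, eps):
    """Guided filtering of src with guide as the guidance image,
    computed only with box (mean) filters, per He et al. [8].
    Assumes box_filter() from the previous sketch is in scope."""
    mean_I  = box_filter(guide, r)
    mean_p  = box_filter(src, r)
    corr_Ip = box_filter(guide * src, r)
    corr_II = box_filter(guide * guide, r)

    cov_Ip = corr_Ip - mean_I * mean_p       # covariance of guide and src
    var_I  = corr_II - mean_I * mean_I       # variance of guide

    a = cov_Ip / (var_I + eps)               # local linear coefficients
    b = mean_p - a * mean_I
    return box_filter(a, r) * guide + box_filter(b, r)

# Refined weight maps as in (6) and (7); r1, eps1, r2, eps2 are assumed values:
# W_base    = guided_filter(I_n, P_n, r1, eps1)
# W_texture = guided_filter(I_n, P_n, r2, eps2)
```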
Two scale image reconstruction
Two-scale image reconstruction consists of two steps. First, the base layers of the different source images are fused together by weighted averaging with their weight maps, and likewise for the texture layers.
Finally, the fused base layer and the fused texture layer are combined to obtain the output image, which contains the best features of both:
F = B + T

where B is the fused base layer, T is the fused texture layer, and F is the final image.
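A sketch of the full reconstruction step is shown below, assuming stacks of per-image base layers, texture layers, and refined weight maps as produced by the earlier sketches. Normalizing the weights so that they sum to one at each pixel is an assumption made for illustration.

```python
import numpy as np

def reconstruct(bases, textures, w_base, w_texture):
    """bases, textures, w_base, w_texture: arrays of shape (N, H, W).
    Weighted-average each layer across the N inputs, then recombine."""
    w_base    = w_base    / (w_base.sum(axis=0)    + 1e-12)
    w_texture = w_texture / (w_texture.sum(axis=0) + 1e-12)
    B = (w_base * bases).sum(axis=0)        # fused base layer
    T = (w_texture * textures).sum(axis=0)  # fused texture layer
    return B + T                            # F = B + T
```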
EXPERIMENTS
A number of grey scale and color images from different domains, such as medical images, images from satellite sensors, and natural scenes, were fused with our box filtering based system. The important results are as follows.
Fig. 1(a). MRI image of a brain with a tumor, underexposed.
Fig. 1(b). MRI image of the same brain, overexposed.
Fig. 1(c). Fused MRI image, which contains details of both (a) and (b).
The experiments demonstrate that, with our method, grey scale images take considerably less time to process than color images: grey scale images take around 4-5 seconds, while color images take about 40 seconds. The results are also clearer with color images than with grey scale images.
Fig. 2(a). A scene having less clarity.
Fig. 2(b). Image of the same scene which is underexposed.
Fig. 2(c). Fused image of Fig. 2(a) and Fig. 2(b), showing more details such as the shades on the lamp and the writing on the paper.
CONCLUSION
This paper proposes a fast and effective method of image fusion based on guided filtering implemented with a box filter, which improves on existing guided filter based implementations in terms of speed.
ACKNOWLEDGMENT
The authors would like to thank the reviewers for their insightful comments and suggestions, which have greatly improved this paper.
REFERENCES
[1] S. Li, X. Kang, and J. Hu, "Image fusion with guided filtering," IEEE Trans. Image Process., vol. 22, no. 7, Jul. 2013.
[2] D. Socolinsky and L. Wolff, "Multispectral image visualization through first-order fusion," IEEE Trans. Image Process., vol. 11, no. 8, pp. 923-931, Aug. 2002.
[3] R. Shen, I. Cheng, J. Shi, and A. Basu, "Generalized random walks for fusion of multi-exposure images," IEEE Trans. Image Process., vol. 20, no. 12, pp. 3634-3646, Dec. 2011.
[4] D. Looney and D. Mandic, "Multiscale image fusion using complex extensions of EMD," IEEE Trans. Signal Process., vol. 57, no. 4, pp. 1626-1630, Apr. 2009.
[5] B. R. Pires, K. Singh, and J. M. F. Moura, "Approximating image filters with box filters."
[6] M. Kumar and S. Dass, "A total variation-based algorithm for pixel-level image fusion," IEEE Trans. Image Process., vol. 18, no. 9, pp. 2137-2143, Sep. 2009.
[7] D. Looney and D. Mandic, "Multiscale image fusion using complex extensions of EMD," IEEE Trans. Signal Process., vol. 57, no. 4, pp. 1626-1630, Apr. 2009.
[8] K. He, J. Sun, and X. Tang, "Guided image filtering," in Proc. Eur. Conf. Comput. Vis., Heraklion, Greece, Sep. 2010, pp. 1-14.
[9] J. Clerk Maxwell, A Treatise on Electricity and Magnetism, 3rd ed., vol. 2. Oxford: Clarendon, 1892, pp. 68-73.
[10] S. Li, X. Kang, J. Hu, and B. Yang, "Image matting for fusion of multi-focus images in dynamic scenes," Inf. Fusion, vol. 14, no. 2.
[11] S. Cho, H. Lee, and S. Lee, "Image decomposition using deconvolution," POSTECH.
[12] B. R. Pires, K. Singh, and J. M. F. Moura, "Approximating image filters with box filters."