Multi-focus Image Fusion



  • Open Access
  • Authors : Sheetal U. Pawar, Pooja J. Kharade, Sarika B. Pachupate, Shivani D. Dalvi
  • Paper ID : IJERTV4IS040610
  • Volume & Issue : Volume 04, Issue 04 (April 2015)
  • DOI : http://dx.doi.org/10.17577/IJERTV4IS040610
  • Published (First Online): 20-04-2015
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License: This work is licensed under a Creative Commons Attribution 4.0 International License


Miss. Sheetal U. Pawar,

Assistant Professor,

Department of Electronics and Telecommunication Engineering, Dr. Daulatrao Aher College of Engineering, Karad, Dist. Satara, Maharashtra, India.

Miss. Pooja J. Kharade, Miss. Sarika B. Pachupate, and Miss. Shivani D. Dalvi,

B.E. Students,

Department of Electronics and Telecommunication Engineering, Dr. Daulatrao Aher College of Engineering, Karad, Dist. Satara, Maharashtra, India.

Abstract- Image fusion is the process of combining two or more multi-focus images into a single image that contains more information than any of the individual source images. This paper presents an algorithm for multi-focus image fusion in the spatial domain using iterative segmentation and the edge information of the source images. We use the Fixed Block Size and Adaptive Threshold (FBS-AT) and Adaptive Block Size and Adaptive Threshold (ABS-AT) algorithms for image fusion; both operate over a number of iterations. Image fusion improves image quality. The technique has been tested on several pairs of multi-focus images.

Keywords: Image fusion, multi-focus image fusion, spatial domain.

I. INTRODUCTION

Image fusion is the combining of two or more images into a single image that is more informative than any of the individual source images. Image fusion is widely used in satellite imaging, surveillance, RADAR and biometrics. In satellite imaging and surveillance systems, fusion is performed on images obtained from infrared and visible-light cameras. In RADAR, image fusion is used to improve blurred images and obtain a clearly focused image that contains maximum information.

An edge is a boundary in an image at which a significant change occurs in some physical aspect. Edge detection is used to find the boundaries of objects within images and filters out unwanted information; hence it is an important tool in image analysis, with applications in image processing, computer vision and machine vision. There are various methods of edge detection, such as the Prewitt, Roberts, Sobel, Laplacian of Gaussian and Canny detectors. In this paper we use the Canny edge detector. The Canny detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images and applies two thresholds to the gradient. The basic idea is to detect edges at the zero crossings of the second directional derivative of the smoothed image in the direction of the gradient, at locations where the magnitude of the gradient of the smoothed image exceeds a threshold that depends on image properties.
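As an illustration of this step, the edge maps used throughout this paper can be produced with an off-the-shelf Canny implementation. The sketch below uses OpenCV; the file names and the two hysteresis thresholds (100 and 200) are illustrative assumptions, not the adaptive, image-dependent thresholds discussed above.

```python
import cv2

# Load the two multi-focus source images as greyscale arrays
# (file names are placeholders for illustration).
img_a = cv2.imread("foreground_focused.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("background_focused.png", cv2.IMREAD_GRAYSCALE)

# cv2.Canny performs Gaussian smoothing, gradient computation,
# non-maximum suppression and hysteresis thresholding internally.
edges_a = cv2.Canny(img_a, 100, 200)  # illustrative low/high thresholds
edges_b = cv2.Canny(img_b, 100, 200)
```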

II. LITERATURE REVIEW

Image fusion can be divided into two types: fusion in the frequency domain and fusion in the spatial domain. In the frequency-domain method the image is first transformed into the frequency domain using the Fourier transform, fusion is performed there, and the inverse Fourier transform is taken to obtain the result. Because this is a time-consuming method, we use spatial-domain fusion, which deals directly with image pixels [1][2].

Mean and maximum methods are used in both spatial-domain and frequency-domain fusion. In the mean method the average of the two input images is calculated, and the result of this averaging is the fused image. In the maximum method the two input images are compared and the pixels with the maximum value are selected for the fused image. Various multi-scale transforms, such as the wavelet transform and the curvelet transform, are also used for image fusion; in wavelet-based fusion a weighted average of the pixels is calculated [3][4].
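For reference, the mean and maximum rules described above reduce to a few lines of array arithmetic. The following is a minimal NumPy sketch that assumes the two source images are already registered and have the same size; it is not tied to any particular implementation used by the authors.

```python
import numpy as np

def fuse_mean(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Mean rule: average the two source images pixel by pixel."""
    return (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0

def fuse_max(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Maximum rule: keep the larger of the two pixel values at each position."""
    return np.maximum(img_a, img_b)
```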

When we capture a three-dimensional scene, it is desirable to have all the objects of the scene in focus. However, it is not possible to capture an all-in-focus image with ordinary image capturing devices because of the limited depth of field of camera sensors. A single camera therefore produces partially blurred images, which may also have unequal luminance and spatial distortion. Multi-focus image fusion is used to improve such blurred images and obtain a clearly focused image that contains maximum information with equal luminance.

III. PROPOSED ALGORITHM

In this paper an iterative approach is used for multi-focus image fusion. First the source images are divided into smaller blocks, the edge information of each block is computed, and the block with the greater edge information is selected; an adaptive threshold is used for this purpose. The block size and the threshold are made adaptive in each iteration to improve the quality of the fused image. In each iteration the blocks with higher edge information become part of the fused image and the remaining blocks are passed on to the next iteration. The fused image obtained at the end has better visual quality.
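The quantity that drives this selection is the amount of edge information in each block. A minimal sketch of that measurement is shown below: a binary Canny edge map is cut into a square grid and the edge pixels in each cell are counted. The grid size and the use of a plain edge-pixel count are assumptions made for illustration.

```python
import numpy as np

def block_edge_information(edge_map: np.ndarray, blocks_per_side: int) -> np.ndarray:
    """Split a binary edge map into a square grid and count edge pixels per block.

    Remainder rows/columns that do not fit the grid are ignored in this sketch.
    """
    h, w = edge_map.shape
    bh, bw = h // blocks_per_side, w // blocks_per_side
    info = np.zeros((blocks_per_side, blocks_per_side))
    for r in range(blocks_per_side):
        for c in range(blocks_per_side):
            block = edge_map[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            info[r, c] = np.count_nonzero(block)
    return info
```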

Figure 1. Source images: (a) foreground in focus, (b) background in focus.

Figure 2. Source images with edge information: (a) foreground in focus, (b) background in focus.

Figure 1 shows two source images with complementary regions in focus: (a) shows the foreground in focus and (b) shows the background in focus. Figure 2 shows the edges of the input images computed using the Canny edge detector.

A. Fixed Block Size and Adaptive Threshold (FBS-AT)

After the edge information is calculated, the input images are divided into a fixed number of blocks; we use 16 blocks. The edge information of the two input images is then compared block by block. The blocks with higher edge information are incorporated into the final fused image, and the blocks that remain unselected are passed on to the next iteration.

In Fixed Block Size and Adaptive Threshold (FBS-AT), selection is made in three iterations, explained as follows:

1. The input images are divided into a fixed number of blocks. The difference in edge information between the two input images is calculated for each block, the mean of all the differences is computed, and this mean is set as the adaptive threshold. The difference for each block is then compared with the threshold; the blocks for which the difference exceeds the threshold become part of the fused image, and the remaining blocks are passed on to the next iteration.

2. To set a new threshold, the mean of the differences of the blocks passed on from the first iteration is calculated and set as the new threshold. The blocks for which the difference is greater than this threshold are selected as part of the fused image, and the remaining blocks are passed on to the last iteration.

3. The blocks passed on from the second iteration, for which no decision has yet been made, are selected by direct comparison: the blocks with higher edge information are incorporated into the final fused image.

Figure 3 shows the resultant images after each iteration.
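A compact sketch of the FBS-AT selection loop is given below. It assumes that the per-block edge information of both source images has already been computed (for example with a helper like the one shown earlier) and that the images are greyscale arrays of the same size divided into a 4 x 4 grid of 16 blocks, as in the paper. The variable names and block-copy details are illustrative, not a line-by-line reproduction of the authors' implementation.

```python
import numpy as np

def fuse_fbs_at(img_a, img_b, info_a, info_b, n=4):
    """Fixed Block Size and Adaptive Threshold: three-pass block selection."""
    h, w = img_a.shape
    bh, bw = h // n, w // n
    fused = np.zeros_like(img_a)
    diff = np.abs(info_a - info_b)            # per-block edge-information difference
    undecided = np.ones((n, n), dtype=bool)   # blocks not yet assigned

    # Iterations 1 and 2: adaptive threshold = mean difference of the remaining blocks.
    for _ in range(2):
        if not undecided.any():
            break
        threshold = diff[undecided].mean()
        for r in range(n):
            for c in range(n):
                if undecided[r, c] and diff[r, c] > threshold:
                    src = img_a if info_a[r, c] > info_b[r, c] else img_b
                    fused[r*bh:(r+1)*bh, c*bw:(c+1)*bw] = src[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
                    undecided[r, c] = False

    # Final pass: undecided blocks simply take the image with more edge information.
    for r in range(n):
        for c in range(n):
            if undecided[r, c]:
                src = img_a if info_a[r, c] >= info_b[r, c] else img_b
                fused[r*bh:(r+1)*bh, c*bw:(c+1)*bw] = src[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
    return fused
```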

B. Adaptive Block Size and Adaptive Threshold (ABS-AT)

ABS-AT is used to further improve the quality of the resultant image. For this enhancement, not only the threshold but also the block size is made adaptive in each iteration. The algorithm may run for any number of iterations and can be described as follows:

1. The input images are divided into a certain number of blocks. The difference in edge information between the two input images is calculated for each block, the mean of all the differences is computed, and this mean is set as the adaptive threshold. The difference for each block is then compared with the threshold; the blocks for which the difference exceeds the threshold are incorporated into the fused image, and the remaining blocks are passed to the second iteration.

2. In the second iteration the images are re-divided so that each block is subdivided using twice the number of divisions used in the first iteration. The mean of all the differences is again calculated and set as the new threshold, and the difference for each block is compared with this threshold. Blocks whose difference exceeds the threshold are selected as part of the fused image; the remaining blocks, for which no selection is made, are passed on to the last iteration. The second iteration may itself be performed as a number of sub-iterations.

3. In the last iteration, the blocks passed on from the second iteration are selected simply by comparing the edges of the input images: the blocks with higher edge information are selected as part of the final fused image. Figure 4 shows the resultant images of ABS-AT.
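The sketch below outlines how the block size can be made adaptive as well: the block grid is refined (the number of divisions doubled) at every iteration, and a block takes part in an iteration only while it still contains undecided pixels. The division schedule (4, 8, 16), the use of a Canny edge-pixel count as the edge-information measure, the illustrative Canny thresholds, and the assumption of same-size greyscale uint8 inputs are all assumptions; this is a simplified rendering of the idea rather than the authors' exact implementation.

```python
import cv2
import numpy as np

def edge_info(region: np.ndarray) -> int:
    """Edge information of a region: number of Canny edge pixels (illustrative thresholds)."""
    return int(np.count_nonzero(cv2.Canny(region, 100, 200)))

def fuse_abs_at(img_a: np.ndarray, img_b: np.ndarray, divisions=(4, 8, 16)) -> np.ndarray:
    """ABS-AT sketch: adaptive threshold with a block grid that is refined each iteration."""
    h, w = img_a.shape
    fused = np.zeros_like(img_a)
    undecided = np.ones((h, w), dtype=bool)          # pixels not yet assigned

    for it, n in enumerate(divisions):
        last = (it == len(divisions) - 1)
        bh, bw = h // n, w // n
        blocks = [(r * bh, (r + 1) * bh, c * bw, (c + 1) * bw)
                  for r in range(n) for c in range(n)]
        # Only blocks that still contain undecided pixels take part in this iteration.
        active = [b for b in blocks if undecided[b[0]:b[1], b[2]:b[3]].any()]
        if not active:
            break
        diffs = {b: abs(edge_info(img_a[b[0]:b[1], b[2]:b[3]]) -
                        edge_info(img_b[b[0]:b[1], b[2]:b[3]])) for b in active}
        threshold = float(np.mean(list(diffs.values())))   # adaptive threshold
        for y0, y1, x0, x1 in active:
            if last or diffs[(y0, y1, x0, x1)] > threshold:
                a_stronger = edge_info(img_a[y0:y1, x0:x1]) >= edge_info(img_b[y0:y1, x0:x1])
                src = img_a if a_stronger else img_b
                fused[y0:y1, x0:x1] = src[y0:y1, x0:x1]
                undecided[y0:y1, x0:x1] = False
    return fused
```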

IV. STATISTICAL PARAMETERS

Fusion is performed to improve image quality. The quality of image fusion is characterized by parameters such as entropy, mean, gradient, variance, energy and cluster shade.

1] Entropy:

Entropy is a measure of the information content of an image. It is given as

E = -\sum_{i=0}^{255} P_i \log_2 P_i    (1)

where P_i is the probability of grey level i.

2] Mean:

It is given as

\mu = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} f(i,j)}{m \times n}    (2)

where f(i,j) is the pixel intensity at position (i,j).

3] Average gradient:

The average gradient reflects the contrast detail of the image. It is given as

G = \frac{\sum_{i} \sum_{j} \sqrt{(f(i,j)-f(i+1,j))^2 + (f(i,j)-f(i,j+1))^2}}{m \times n}    (3)

4] Variance:

Variance is used to measure the focus of a block of an image. It is given as

\sigma^2 = \frac{\sum_{i} \sum_{j} (f(i,j)-\mu)^2}{m \times n}    (4)

5] Standard deviation:

It is used to weight the information of an image. It is given as

\sigma = \sqrt{\sigma^2}    (5)
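To make these definitions concrete, the sketch below evaluates entropy, mean, average gradient, variance and standard deviation for a greyscale image with NumPy, following equations (1)-(5) directly. It is a straightforward reading of the formulas, not the exact evaluation code behind Table 1.

```python
import numpy as np

def fusion_metrics(img: np.ndarray) -> dict:
    """Quality metrics (1)-(5) for a greyscale image given as a uint8 array."""
    f = img.astype(np.float64)
    m, n = f.shape

    # (1) Entropy from the normalised 256-bin grey-level histogram.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

    # (2) Mean intensity.
    mean = f.sum() / (m * n)

    # (3) Average gradient from horizontal and vertical first differences.
    dx = f[:-1, :-1] - f[1:, :-1]
    dy = f[:-1, :-1] - f[:-1, 1:]
    avg_gradient = np.sqrt(dx ** 2 + dy ** 2).sum() / (m * n)

    # (4) Variance and (5) standard deviation.
    variance = ((f - mean) ** 2).sum() / (m * n)
    std_dev = np.sqrt(variance)

    return {"entropy": entropy, "mean": mean, "gradient": avg_gradient,
            "variance": variance, "std": std_dev}
```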

V. EXPERIMENTAL RESULTS

These image fusion algorithms are performed on two pairs of images, namely clock and pepsi. Table 1 shows the comparison of the different quality metrics computed for the pepsi image pair using the Mean, FBS-AT and ABS-AT methods.

Table 1. Comparison of quality metrics for the Mean, FBS-AT and ABS-AT methods.

Parameter              Mean       FBS-AT     ABS-AT
Entropy                5.1927     5.5761     5.5719
Mean                   90.416     152.9635   152.9392
Variance               1210.8     139.685    139.574
Gradient               0.0186     0.009      0.009
Standard deviation     34.796     83.3608    83.3937
Cluster prominence     1.0797     2.4058     2.0888
Cluster shade          -1.9425    -3.6993    -3.2222
Information measure    -3.0268    -3.0309    -3.0309
Local homogeneity      7.6810     9.4465     9.3632
Contrast               4.1717     4.8383     4.6381
Energy                 5.3801     7.8337     7.5574

Figure 3. Fixed Block Size and Adaptive Threshold: (a) first iteration, (b) second iteration, (c) final result.

Figure 4. Adaptive Block Size and Adaptive Threshold: (a) first iteration, (b) second iteration, (c) final result.

VI. CONCLUSION

By using the FBS-AT and ABS-AT algorithms we obtain a 7.3% improvement in entropy compared with the mean method. The contrast is improved by 16% with FBS-AT and by 11% with ABS-AT, and FBS-AT and ABS-AT give 45% and 40% more energy content, respectively, than the mean method. Thus the multi-focus image fusion technique achieves a clearly focused fused image that contains maximum information with equal luminance.

VII. REFERENCES

  1. Parul Shah, Amy Kumar, Shabbir N. Merchant and Uday B. Desai, "Multi-focus Image Fusion Algorithm Using Iterative Segmentation Based on Edge Information and Adaptive Threshold," Department of Electrical Engineering, IIT Bombay, India, and IIT Hyderabad, India.

  2. Deepak Kumar Sahu and M. P. Parsai, "Different Image Fusion Techniques - A Critical Review," International Journal of Modern Engineering Research (IJMER), Vol. 2, Issue 5, Sep.-Oct. 2012, pp. 4298-4301, ISSN 2249-6645.

  3. Parul Shah, Shabbir N. Merchant and Uday B. Desai, "An Efficient Adaptive Fusion Scheme for Multi-focus Images in Wavelet Domain Using Statistical Properties of Neighborhood," 14th International Conference on Information Fusion, Chicago, Illinois, USA, July 5-8, 2011.

  4. Abdul Basit Siddiqui, M. Arfan Jaffar, Ayyaz Hussain and Anwar M. Mirza, "Block-Based Pixel Level Multi-Focus Image Fusion Using Particle Swarm Optimization," International Journal of Innovative Computing, Information and Control, Vol. 7, No. 7(A), July 2011, ISSN 1349-4198.

  5. Eric A. Silva, Karen Panetta and Sos S. Agaian, "Quantifying Image Similarity Using Measure of Enhancement by Entropy," Department of Electrical & Computer Engineering, Tufts University, Medford, MA.

  6. S. S. Han, H. T. Li and H. Y. Gu, "The Study on Image Fusion for High Spatial Resolution Remote Sensing Images," Institute of Photogrammetry and Remote Sensing, Chinese Academy of Surveying and Mapping, Beijing, China.

  7. A. Bovik, Handbook of Image and Video Processing, 2000.
