Satellite Image Registration based on SURF and MI

DOI : 10.17577/IJERTV4IS070303


Sruthi Krishna
Computer Science and Engineering
Adi Shankara Institute of Engineering and Technology, Kalady, India

Ramkumar P B
Computer Science and Engineering
Adi Shankara Institute of Engineering and Technology, Kalady, India

Abstract- Registration of satellite imagery is a key step for remote sensing applications such as global change detection, image fusion, and feature classification. Because manual registration is time consuming and repetitive, an automatic method for image registration is needed. This paper proposes such a method, which is fully automatic and computationally efficient, unlike global registration in which control points are selected manually. It consists of a preregistration process and a fine-tuning process. The first stage includes feature selection and description using SURF and an outlier removal procedure using RANSAC, which provides the optimizer in the fine-tuning process with a near-optimal initial solution. The fine-tuning process is then carried out by maximization of mutual information. The proposed scheme is tested on various remote sensing images acquired under different conditions (multispectral, multisensor, and multitemporal) with the affine transformation model. Experiments demonstrate that the proposed scheme is fully automatic and considerably more efficient than global registration. SURF is widely used because it is among the fastest descriptors. The paper also shows that increasing the number of matching points improves registration accuracy.

Keywords: Image registration, SURF, RANSAC, Outlier removal, MI, Affine transformation model.

  1. INTRODUCTION

The objective of image registration is to spatially transform one image onto another so that a dissimilarity metric between two images taken at different times, from different sensors, or from different viewpoints reaches its minimum.

Two images are involved: a reference image and a sensed image. The registration process determines the sensed image's rotation and translation with respect to the reference image. The misalignment may be due to a change in viewpoint, a change in sensor position, movement or deformation of the object, or illumination differences. [1]

Image registration has recently been applied to many applications and kinds of data. In remote sensing it is essential for the analysis of imagery, with applications such as change detection and image fusion. Image fusion combines two images into one, and the quality of the fused result depends on registration accuracy: although it integrates two images, the result should appear as a single image. If the inputs are mis-registered, the fused image will appear blurry or show edge artifacts. [2]

Image registration techniques fall into two categories: area based approaches and feature based approaches. Area based approaches operate directly on pixel intensities and do not detect salient features. Feature based methods, in contrast, use salient features extracted from the two images and do not work on intensity values directly. Feature based methods are therefore preferred for remote sensing image registration, where intensity differences and complicated geometric deformations are common. They first extract salient features and then match them using similarity measures to establish the geometric correspondence between the two images. Their main advantages are speed and robustness to noise, complex geometric distortions, and significant radiometric differences. Commonly used features include points, edges, contours, and regions.

The major steps in feature based methods are: first, control points are detected in the reference and sensed images and matched; second, the parameters of the transformation function are estimated from these control points; finally, the sensed image is registered to the reference image using the estimated deformation model, such as an affine or a polynomial transformation.

In this paper, image registration has two stages: preregistration and fine-tuning.

The preregistration process employs Speeded-Up Robust Features (SURF) to detect tie points and their correspondences. Because many of these correspondences are incorrect, the RANSAC algorithm is then used to estimate the homography and retain only the inlier correspondences. These correspondences are used to estimate the transformation between the input and the reference image. An affine transformation is used, and mutual information serves as the cost function measuring the similarity between the two images.
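For concreteness, a minimal NumPy sketch of the mutual information cost between two images, estimated from their joint intensity histogram, is given below. The function name and the number of histogram bins are our own choices, not taken from the paper; the fine-tuning optimizer would evaluate such a cost for candidate affine parameters.

    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """Mutual information of two equally sized grayscale images,
        estimated from their joint intensity histogram."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pab = joint / joint.sum()               # joint probability P(a, b)
        pa = pab.sum(axis=1, keepdims=True)     # marginal P(a), shape (bins, 1)
        pb = pab.sum(axis=0, keepdims=True)     # marginal P(b), shape (1, bins)
        nz = pab > 0                            # skip empty histogram cells
        return float(np.sum(pab[nz] * np.log(pab[nz] / (pa * pb)[nz])))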

The main innovation of this study is a new coarse-to-fine transformation parameter solving strategy, which comprises a preregistration process and a fine-tuning process. Its uniqueness lies in the following three aspects.

1. A modified outlier removal procedure is introduced to eliminate most false SURF matches, so that the preregistration result is guaranteed to be close to the true solution (ground truth).

2. An excellent initial solution selection strategy for the maximization of MI is developed by using SURF with outlier removal.

3. Maximization of MI is utilized to refine the preregistration results and achieve the most precise registration.

The feature based method (SURF) is thus combined with an area based method (MI), so the scheme benefits from the advantages of both.

  2. STATE OF THE ART

    A. Feature descriptors

    SIFT algorithm

The SIFT algorithm was proposed as a method to extract and describe feature points that is robust to scale, rotation, and changes in illumination. There are five steps to implement the SIFT algorithm:

    1. Scale-space extrema detection: Difference of Gaussian (DoG) function is used to find interest points over scale space that are invariant to scale and orientation.

    2. Feature point localization: The location and the scale of each candidate point are determined and the feature points are selected based on measures of stability.

    3. Orientation assignment: One or more orientations are assigned to each feature point location based on local image gradient directions.

4. Feature point descriptor: A feature descriptor is created by computing the gradient magnitude and orientation at each image sample point in a region around the feature point location. These samples are then accumulated into orientation histograms summarizing the contents over 4 × 4 regions with 8 orientation bins, so each feature point has a 128-element feature vector.

5. Feature point matching: Correspondences are determined by taking the ratio of the distance from a descriptor vector to its closest neighbor over the distance to the second closest neighbor (a usage sketch follows).
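As an illustration of these steps, the sketch below uses OpenCV's SIFT implementation (available in the main cv2 module from OpenCV 4.4 onward). The image file names and the 0.75 ratio threshold are our own illustrative choices, not values from the paper.

    import cv2

    # Hypothetical input files; any pair of grayscale satellite images would do.
    img_ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)
    img_sen = cv2.imread("sensed.tif", cv2.IMREAD_GRAYSCALE)

    # Detect SIFT keypoints and their 128-element descriptors in both images.
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(img_ref, None)
    kp_sen, des_sen = sift.detectAndCompute(img_sen, None)

    # Step 5: keep a match only when the closest descriptor is clearly better
    # than the second closest one (ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_ref, des_sen, k=2)
            if m.distance < 0.75 * n.distance]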

PCA-SIFT

The PCA-SIFT descriptor (Principal Component Analysis SIFT) is a variant of SIFT with two main differences: (1) the descriptor is calculated for a region of 39 × 39 subregions instead of the 4 × 4 used in SIFT, and (2) instead of 8 orientation bins, PCA-SIFT computes the gradients in the x and y directions. The result is a vector of dimension 3042 (39 × 39 × 2), which is then reduced to 36 with principal component analysis.

B. Similarity Measure

Correlation-like methods

This is an area based method. The cross-correlation between a window W of the reference image and the sensed image at position (i, j) is

CC(i, j) = \frac{\sum_{W} \bigl(W - E(W)\bigr)\bigl(I_{(i,j)} - E(I_{(i,j)})\bigr)}{\sqrt{\sum_{W} \bigl(W - E(W)\bigr)^{2} \sum_{W} \bigl(I_{(i,j)} - E(I_{(i,j)})\bigr)^{2}}}

Window pairs are selected from the sensed and reference images, the similarity measure is computed, and its maximum is searched for; the window pairs for which the maximum is obtained are taken as corresponding. Interpolation of the CC values can be used to obtain subpixel registration accuracy. Although cross-correlation strictly aligns only mutually translated images, it can also be used when slight rotation and scaling are present.
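A direct NumPy transcription of this measure for a single window pair might look as follows; this is only a sketch, and in practice the window is slid over a search area and the location of the maximum is retained.

    import numpy as np

    def ncc(win_ref, win_sen):
        """Normalized cross-correlation of two equally sized image windows."""
        a = win_ref.astype(float) - win_ref.mean()
        b = win_sen.astype(float) - win_sen.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0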

    Sum of squared differences

If images of the same type are to be registered, the image intensities at corresponding points will be similar. The sum of squared intensity differences is one of the simplest similarity measures; during registration the SSD between the images is minimized.

SSD = \frac{1}{N} \sum_{x_i} \bigl| A(x_i) - B_T(x_i) \bigr|^{2}

where A is the fixed image intensity function, B_T represents the transformed image B, i.e., image B under the transformation T currently under consideration, and N is the number of pixels considered. The optimal value of the measure is zero; poorly matched images A and B yield large values.

    Sum of absolute differences

It uses a sequential search approach and a computationally simpler distance measure than cross-correlation. The accumulated sum of absolute differences of the image intensity values is calculated and a threshold criterion is applied: if the accumulated sum exceeds the given threshold, the candidate pair of windows from the reference and sensed images is rejected and the next pair is tested. The method is faster but likely to be less accurate than cross-correlation. The sum of absolute differences is calculated by taking the absolute difference between each pixel in the original image A and the corresponding pixel in the transformed image B:

SAD = \frac{1}{N} \sum_{x_i} \bigl| A(x_i) - B(x_i) \bigr| [3]

Smaller SAD values indicate more similar images, but the two images must be of the same modality for this similarity measure to work.
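A minimal NumPy sketch of the two measures just described is given below; the function names are ours, and the arguments are assumed to be equally sized arrays for the fixed image A and the transformed sensed image B_T.

    import numpy as np

    def ssd(a, b_t):
        """Mean squared intensity difference; zero for a perfect match."""
        d = a.astype(float) - b_t.astype(float)
        return float(np.mean(d ** 2))

    def sad(a, b_t):
        """Mean absolute intensity difference; smaller means more similar."""
        return float(np.mean(np.abs(a.astype(float) - b_t.astype(float))))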

  3. METHODOLOGY

Consider two grey level images between which there are geometric and radiometric differences, and let I_r(x, y) and I_s(x, y) denote the reference and sensed images. The two images can be registered by finding the optimal geometric transformation T(·). The affine transformation is selected here:

\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}

where the transformation origin is taken at the upper left corner of the reference image, (a_{11}, a_{12}, a_{21}, a_{22}) represent the rotation, scale, and shear differences, and (\Delta x, \Delta y) are the shifts between the two images.
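To make the model concrete, the sketch below resamples a sensed image into the reference frame with OpenCV's warpAffine. The parameter values are purely illustrative, and img_ref and img_sen are assumed to be grayscale arrays loaded elsewhere.

    import cv2
    import numpy as np

    # 2x3 affine matrix [a11 a12 dx; a21 a22 dy] -- illustrative values only.
    A = np.float32([[1.02, 0.05, 12.0],
                    [-0.04, 0.98, -7.5]])

    # Resample the sensed image onto the reference image grid.
    h, w = img_ref.shape[:2]
    registered = cv2.warpAffine(img_sen, A, (w, h), flags=cv2.INTER_LINEAR)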

  1. SURF [5]

SURF (Speeded-Up Robust Features) is an interest point detector and descriptor that is invariant to scale and rotation. It outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by:

- relying on integral images for image convolutions;

- building on the strengths of the leading existing detectors and descriptors (a Hessian matrix-based measure for the detector and a distribution-based descriptor);

- simplifying these methods to the essential.

This leads to a combination of novel detection, description, and matching steps.

SURF is based on multi-scale space theory, and its feature detector is based on the Hessian matrix, which offers good performance and accuracy. For a point x = (x, y) in image I, the Hessian matrix H(x, \sigma) at scale \sigma is defined as

H(x, \sigma) = \begin{bmatrix} L_{xx}(x, \sigma) & L_{xy}(x, \sigma) \\ L_{xy}(x, \sigma) & L_{yy}(x, \sigma) \end{bmatrix}

where L_{xx}(x, \sigma) is the convolution of the second order Gaussian derivative \frac{\partial^{2}}{\partial x^{2}} g(\sigma) with the image I at point x, and similarly for L_{xy}(x, \sigma) and L_{yy}(x, \sigma). SURF creates a stack without 2:1 down-sampling for higher levels of the pyramid, resulting in images of the same resolution. Thanks to integral images, SURF filters the stack using a box filter approximation of the second-order Gaussian partial derivatives, since integral images allow rectangular box filters to be computed in near constant time. The Gaussian second order partial derivative box filters D_{yy} and D_{xy} are shown in Fig. 1.

    Fig 1. The Gaussian second order partial derivative box filters in y and xy direction
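The near constant-time box sums behind these filters can be sketched in a few NumPy lines. This is a simplified illustration with our own function names, not SURF's actual implementation.

    import numpy as np

    def integral_image(img):
        """Summed-area table with an extra zero row/column for simple indexing."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
        ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
        return ii

    def box_sum(ii, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] from four look-ups, independent of box size."""
        return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]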

Among descriptors, SIFT performs well compared with others, and the SURF descriptor is based on similar properties. The first step consists of fixing a reproducible orientation based on information from a circular region around the interest point. The second step constructs a square region aligned to the selected orientation and extracts the SURF descriptor from it. To be invariant to rotation, SURF calculates the Haar-wavelet responses in the x and y directions, shown in Fig. 2.

    Fig 2. Haar wavelet response in x and y direction
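A usage sketch of SURF detection and description with OpenCV follows. Note that SURF sits in the non-free xfeatures2d module, so an opencv-contrib build with non-free algorithms enabled is assumed; the Hessian threshold of 400 is a common default, not a value from the paper, and img_ref and img_sen are grayscale images loaded elsewhere.

    import cv2

    # Requires opencv-contrib-python built with non-free algorithms enabled.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    # img_ref, img_sen: grayscale images loaded elsewhere (assumed).
    kp_ref, des_ref = surf.detectAndCompute(img_ref, None)
    kp_sen, des_sen = surf.detectAndCompute(img_sen, None)
    print(len(kp_ref), "keypoints in the reference image,",
          len(kp_sen), "in the sensed image")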

  2. RANSAC Outlier Removal [4]

The random sample consensus (RANSAC) algorithm is a robust method of parameter estimation. Its basic idea is to eliminate deviating points using the internal constraints of the data set. To estimate the parameters, an objective function is first designed and the parameter values are then obtained by iterative estimation. All data are divided into inliers, which fit the estimated model, and outliers, which do not, and the model parameters are obtained from the inliers through continued iteration. RANSAC provides a general technique for model fitting in the presence of outliers and consists of the following steps (a minimal usage sketch is given after Fig. 3): [5]

  1. Choose a model.

  2. The minimal number of points needed to specify the model is determined.

  3. Define a threshold on the inlier count.

  4. Fit the model to a randomly selected minimal subset of points.

  5. Apply the transformation to the complete set of points and count inliers.

  6. If the number of inliers exceeds the threshold, flag the fit as good and stop.

  7. Otherwise repeat steps 4 to 6.

Fig 3. Linear fitting using RANSAC
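The sketch below shows the same idea with OpenCV's RANSAC-based homography estimator rather than the paper's modified procedure. Here pts_ref and pts_sen are assumed to be Nx2 float arrays of putative correspondences built from the SURF matches, and the 3-pixel reprojection threshold is an arbitrary choice.

    import cv2
    import numpy as np

    # pts_sen, pts_ref: Nx2 float arrays of matched point coordinates (assumed).
    H, inlier_mask = cv2.findHomography(pts_sen, pts_ref,
                                        method=cv2.RANSAC,
                                        ransacReprojThreshold=3.0)

    inliers = int(inlier_mask.sum())
    print(f"RANSAC kept {inliers} of {len(pts_sen)} correspondences as inliers")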

3. Thin Plate Spline Transformation Model

The affine transformation is widely used in remote sensing image registration, but it is not suitable for multiview images with differences in acquisition angle and terrain elevation; the thin-plate spline (TPS) is used instead, since it can handle the effects introduced by acquisition angle and terrain elevation differences. Given two images, one image is deformed so that it matches the other. A set of control points is defined, and the thin-plate spline provides a smooth interpolation between them: it interpolates a surface that passes through each control point. A set of three points generates a flat plane, and it is convenient to think of the control points as position constraints on a bending surface, the ideal surface being the one that bends the least. An example of such a surface with seven control points is shown in the figure below; the surface is forced to pass through all seven control points.

Fig 4. A thin plate spline surface that passes through a set of control points

This least bent surface is given by the following equation:

f(x, y) = a_1 + a_2 x + a_3 y + \sum_{i=1}^{n} w_i \, U\bigl(\lvert P_i - (x, y) \rvert\bigr)

The first three terms correspond to a flat plane, the linear part that best matches all control points (this can be seen as a least squares fit). The last term corresponds to the bending forces provided by the n control points, with a coefficient w_i for each control point. Here |P_i - (x, y)| is the distance between control point P_i and the position (x, y); this distance is used in the function U defined by U(r) = r^2 \log r^2. The coefficients a_1, a_2, a_3 and the w_i for every control point are unknown; all the w_i form the vector W. These unknowns are obtained from

L^{-1} Y = (W \mid a_1 \; a_2 \; a_3)^{T}

where L is the system matrix assembled from the values U(|P_i - P_j|) and the control point coordinates, and Y contains the target values at the control points padded with three zeros.
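A compact NumPy sketch of solving this system for one output coordinate is given below; the function names are ours, and in registration one such mapping is fitted for x and another for y.

    import numpy as np

    def tps_kernel(r):
        """U(r) = r^2 log r^2, with U(0) = 0."""
        out = np.zeros_like(r, dtype=float)
        m = r > 0
        out[m] = r[m] ** 2 * np.log(r[m] ** 2)
        return out

    def fit_tps(P, v):
        """Solve L [W | a1 a2 a3]^T = [v | 0 0 0]^T for n control points P (n x 2)."""
        n = len(P)
        K = tps_kernel(np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2))
        Q = np.hstack([np.ones((n, 1)), P])          # rows [1, x_i, y_i]
        L = np.zeros((n + 3, n + 3))
        L[:n, :n], L[:n, n:], L[n:, :n] = K, Q, Q.T
        sol = np.linalg.solve(L, np.concatenate([v, np.zeros(3)]))
        return sol[:n], sol[n:]                      # W and (a1, a2, a3)

    def tps_eval(P, W, a, x, y):
        """f(x, y) = a1 + a2 x + a3 y + sum_i w_i U(|P_i - (x, y)|)."""
        U = tps_kernel(np.linalg.norm(P - np.array([x, y]), axis=1))
        return float(a[0] + a[1] * x + a[2] * y + W @ U)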

REFERENCES

  1. R. M. Ezzeldeen, H. H. Ramadan, T. M. Nazmy, M. Adel Yehia, and M. S. Abdel Wahab, Comparative study for image registration techniques of remote sensing images, Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt.

  2. J. K. Mandal, S. Satapathy, M. Kumar Sanyal, P. P. Sarkar, and A. Mukhopadhyay, Information Systems Design and Intelligent Applications: Proceedings of the Second International Conference INDIA 2015, Volume 2.

  3. J. N. Ulysses and A. Conci, Measuring similarity in medical registration, IWSSIP 2010, 17th International Conference on Systems, Signals and Image Processing.

  4. M. Wahed, Gh. S. El-tawel, and A. Gad El-karim, Automatic image registration technique of remote sensing images, International Journal of Advanced Computer Science and Applications (IJACSA), vol. 4, no. 2, 2013.

  5. P. M. Panchal, S. R. Panchal, and S. K. Shah, A comparison of SIFT and SURF, International Journal of Innovative Research in Computer and Communication Engineering, vol. 1, no. 2, April 2013.

  6. B. Zitová and J. Flusser, Image registration methods: a survey, Image and Vision Computing, vol. 21, no. 11, pp. 977-1000, Oct. 2003.

  7. X. Dai and S. Khorram, The effects of image misregistration on the accuracy of remotely sensed change detection, IEEE Trans. Geosci. Remote Sens., vol. 36, no. 5, pp. 1566-1577, Sep. 1998.

  8. L. Cheng, J. Gong, X. Yang, C. Fan, and P. Han, Robust affine invariant feature extraction for image matching, IEEE Geosci. Remote Sens. Lett., vol. 5, no. 2, pp. 246-250, Apr. 2008.

  9. A. A. Cole-Rhodes, K. L. Johnson, J. LeMoigne, and I. Zavorin, Multiresolution registration of remote sensing imagery by optimization of mutual information using a stochastic gradient, IEEE Trans. Image Process., vol. 12, no. 12, pp. 1495-1511, Dec. 2003.

  10. S. Suri, P. Schwind, P. Reinartz, and J. Uhl, Combining mutual information and scale invariant feature transform for fast and robust multisensor SAR image registration, in Proc. 75th ASPRS Conf., Baltimore, MD, USA, Mar. 2009, pp. 1-12.

  11. Y. S. Heo, K. M. Lee, and S. U. Lee, Mutual information-based stereo matching combined with SIFT descriptor in log-chromaticity color space, in Proc. CVPR, 2009, pp. 445-452.

  12. S. Dawn, V. Saxena, and B. Sharma, Remote sensing image registration techniques: a survey, Springer-Verlag Berlin Heidelberg, 2010.

  13. D. G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 2004.

  14. L. G. Brown, A survey of image registration techniques, ACM Computing Surveys, vol. 24, pp. 325-376, 1992.

  15. J. B. A. Maintz and M. A. Viergever, A survey of medical image registration, Medical Image Analysis, vol. 2, pp. 1-36, 1998.

  16. M. A. Fischler and R. C. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981.

  17. Gao Ting, Xu Yu, and Xu Ting-xin, Multi-scale image registration algorithm based on improved SIFT, Journal of Multimedia, vol. 8, no. 6, December 2013.

  18. A. Wong and D. A. Clausi, ARRSI: automatic registration of remote-sensing images, IEEE Trans. Geosci. Remote Sens., vol. 45, no. 5, pp. 1483-1493, 2007.

  19. Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, An automatic image registration for applications in remote sensing, IEEE Trans. Geosci. Remote Sens., vol. 43, no. 9, pp. 2127-2137, 2005.

  20. H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, Speeded-up robust features (SURF), Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, June 2008.

  21. R. Bouchiha and K. Besbes, Automatic remote-sensing image registration using SURF, International Journal of Computer Theory and Engineering, vol. 5, no. 1, February 2013.

  22. H. Chen, P. Varshney, and M. Arora, Mutual information based image registration for remote sensing data, International Journal of Remote Sensing, vol. 24, no. 18, pp. 3701-3706, 2003.
