Change Detection Algorithms in Multitemporal Images

DOI : 10.17577/IJERTV2IS110422


Sreeja K.S, pursuing M.Tech at College of Engineering Cherthala

Prasanth C.R, pursuing M.Tech at College of Engineering Cherthala

Joyal Joseph, pursuing B.Tech at College of Engineering Cherthala

Anuraj R, pursuing B.Tech at College of Engineering Cherthala

ABSTRACT

In this paper, different change detection algorithms for multitemporal images are discussed. Feature-based and pixel-based techniques can be combined to address the change detection problem. First, the images are transformed into the wavelet domain and then processed by different change detection algorithms to obtain the final change mask. Algorithms such as the expectation maximization (EM) algorithm, genetic algorithms, k-means clustering, PCA analysis, MRF modeling, and Gaussian mixture models are used to produce the final change mask, from which the changes between the images are detected.

Keywords: EM (expectation maximization algorithm), PCA (principal component analysis), MRF (Markov random field), UDWT (undecimated discrete wavelet transform), DT-CWT (dual-tree complex wavelet transform)

INTRODUCTION

Automatic change detection based on a set of images acquired at different time instants is a fundamental task in many image processing applications. Important applications of change detection include environmental surveillance, remote sensing, medical diagnosis, and infrastructure monitoring. For example, with many pressing concerns about climate change and global warming, environmental monitoring (e.g., tracking the status and changes of deforestation in a specific zone) has been recognized as an indispensable task to perform constantly. For that, automatic change detection through images can play a very effective role in monitoring the Earth's surface, which has become one of the active research topics in remote sensing. Due to undesirable sensor characteristics and other disturbing effects, certain corrections on the remote sensing images are generally required before performing any data analysis. Typical corrections include noise reduction, radiometric calibration, sensor calibration, atmospheric correction, solar correction, topographic correction, and geometric correction. In this paper, different change detection algorithms for multitemporal images are discussed.

1. Change Detection Combining Feature-Based and Pixel-Based Techniques

As far as a change-detection task is concerned, the availability of synthetic aperture radar (SAR) data promises high potential. Multitemporal SAR imagery is expected to play a relevant role, for instance, in ecological and environmental monitoring applications or in disaster prevention and assessment. Leaving the sensors aside and considering the algorithms, change-detection methods can be categorized by their unsupervised or supervised nature. Roughly speaking, the first category consists of different ways of comparing raw multitemporal data, while the second involves supervised classifiers. In unsupervised change-detection techniques, the focus is more on detecting than on classifying the change that took place, i.e., on locating changes in the observed area. This is especially true when using medium-resolution satellites, which provide low-cost data that may be co-registered and corrected using standard techniques already implemented in commercially available software.

To be more specific, unsupervised change detection may be obtained through very simple combinations of the raw images at two dates. The basic method for SAR data consists of computing the ratio between the two images, but the task has also been approached using statistical tools as well as refined segmentation analysis and even, although less frequently, using fuzzy classification or neural networks. Finally, interferometric measures like phase or coherence, or simply the amplitude correlation, have been explored. Even though these simple methods may be effective, they usually require the setting of thresholds, which implies subjective evaluations unless some automatic or semiautomatic approach is developed. This has been done where Bayesian theory is used to automatically determine the correct threshold to be applied to a difference image. In particular, this image is analyzed by considering the spatial-contextual information included in the pixel neighborhood, relying on Markov random fields (MRFs) to exploit interpixel class dependence. An iterative method based on the expectation-maximization (EM) algorithm is used to estimate the statistical terms that characterize the distributions of the changed and unchanged pixels in the difference image. Experiments on both satellite and airborne multispectral data show good results and highlight the robustness of the algorithm against noise. This technique relies on the definition of the unsupervised change detection problem in terms of the Bayes rule for minimum cost (BRMC), which in turn allows the generation of change detection maps in which the more critical type of error is minimized according to end-user requirements. Most of these approaches start from the assumption that the multitemporal sequence is already co-registered. Many techniques exist for co-registration of optical images or multiple SAR images of the same area taken by the same sensor with the same viewing geometry. It is, however, much more difficult to accomplish good co-registration at the pixel level when images are taken from different vantage positions with respect to the target scene. The procedure starts with two or more images depicting the same area. An unsupervised feature extraction algorithm is then applied to each image, and the results of this extraction are compared to find out where changes have taken place, if any. Finally, feature changes are fed into a fusion routine, aimed at introducing these results into the preexisting change detection map. The procedure thus requires two fusion steps, which will be delineated more precisely in the next sections: 1) first, an algorithm at the feature level is implemented, i.e., a procedure for the extraction of features from the input images and their subsequent comparison; 2) then, a second step at the information level fuses the changes extracted with the feature-based technique and those extracted with an area-based technique.
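As a concrete illustration of the EM-based thresholding idea described above (ignoring the MRF spatial term), the minimal sketch below fits a two-component Gaussian mixture to the difference image and labels pixels by their posterior component assignment. It uses scikit-learn's GaussianMixture as a stand-in for a hand-rolled EM loop; the function and variable names are illustrative, not taken from the referenced work.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def em_threshold_change_mask(img_t1, img_t2):
    """Sketch: fit a two-component Gaussian mixture to the difference image
    and label each pixel as changed/unchanged by its posterior component
    (a stand-in for the Bayesian/EM thresholding step)."""
    diff = np.abs(img_t1.astype(np.float64) - img_t2.astype(np.float64))
    samples = diff.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=0).fit(samples)
    labels = gmm.predict(samples).reshape(diff.shape)
    # Assume the component with the larger mean models the changed class.
    changed_label = int(np.argmax(gmm.means_.ravel()))
    return labels == changed_label
```

In the full method, the resulting mask would additionally be regularized by the MRF spatial-contextual term rather than taken directly from the per-pixel posterior.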

Fig. 1.1 Change detection combining feature-based and pixel-based techniques

  2. Multiscale Change Detection

Current satellite technology makes it possible to acquire very high resolution (VHR) data. High-resolution data enable detailed analysis at the expense of the high computational cost needed to evaluate them. One approach allows an automatic selection of the decision threshold that minimizes the overall change detection error using expectation maximization (EM), under the assumption that the pixels of the difference image are spatially independent (referred to here as the EM-based approach). Another analyzes the difference image by considering the spatial contextual information included in the neighborhood of each pixel. This approach is based on Markov random fields (MRFs) and exploits the context of interpixel class dependence (referred to here as the MRF-based approach). Other approaches follow

similar methodologies. They produce promising results by analyzing the difference image in a pixel-by-pixel manner with complex mathematical models. Because of these complex mathematical models for the data analysis, such methods are not feasible for very high resolution satellite images. One solution for unsupervised change detection in VHR multitemporal images uses a multilevel parcel-based approach. The method characterizes each pixel by a feature vector, obtained by computing the mean over the parcels at different scales for the two multitemporal images. The final change detection is computed by applying the well-known change vector analysis technique to the multilevel parcel-based feature vectors. The algorithm produces promising results on VHR multitemporal data, but it has several disadvantages: 1) the feature vector generation stage depends on the result of the segmentation stage which generates the parcels; 2) the number of segmentation levels must be provided manually; 3) generating segmentation results at different levels is computationally expensive; and 4) because of the homogeneity criterion used in segmentation, the algorithm is prone to errors caused by noise. A more recent, computationally efficient yet effective method for unsupervised change detection uses principal component analysis (PCA). The method analyzes the difference image by using PCA

and k-means clustering with k = 2. The k-means clustering is employed on feature vectors to compute the final change detection result. The feature vector for each pixel is extracted by projecting the local difference data onto the eigenvector space. The number of eigenvectors determines the dimensionality of the feature vector. The eigenvector space is created by PCA of h × h non-overlapping blocks collected from the entire difference image. The algorithm produces promising results with a low computational cost. It employs PCA for dimension reduction and feature extraction, which is a canonical technique to find useful data representations in a compressed space. It finds a set of eigenvectors onto which the projections of the data are uncorrelated. PCA can only separate pairwise linear dependences between data points; because of this, PCA-based approaches may fail in some situations. Moreover, it does not consider multiscale data fusion for the difference image, so it is prone to produce false detections. A multiscale alternative uses the difference image to create a multiscale feature vector for each pixel in the difference image. The feature vectors are then grouped into two disjoint classes, changed (labeled wc) and unchanged (labeled wu), using the k-means clustering algorithm with k = 2. The multiscale feature vectors are formed from the multiscale decomposition of the difference image. This decomposition is achieved using the undecimated discrete wavelet transform (UDWT), which is essentially an undecimated version of the discrete wavelet transform. The UDWT is used because of the following characteristics: 1) it does not use downsampling, so there is no aliasing problem; 2) it is shift invariant; and 3) it produces subbands of the same size as the input image.
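To make the multiscale idea concrete, here is a rough sketch that builds a per-pixel feature vector from the UDWT approximation subbands of the difference image and clusters it with k-means (k = 2). It assumes the PyWavelets package (pywt.swt2, whose input dimensions must be divisible by 2**level) and scikit-learn; it is not the exact feature construction of the referenced method, and all names are illustrative.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def multiscale_udwt_change_mask(img_t1, img_t2, levels=2):
    """Sketch: build a multiscale feature vector per pixel from UDWT
    subbands of the difference image, then split pixels into changed /
    unchanged with k-means (k = 2)."""
    diff = np.abs(img_t1.astype(np.float64) - img_t2.astype(np.float64))
    # Undecimated (stationary) wavelet transform: subbands keep the input size.
    coeffs = pywt.swt2(diff, wavelet="haar", level=levels)
    features = [diff]
    for approx, (horiz, vert, diag) in coeffs:
        features.append(approx)  # approximation subband at this scale
    feats = np.stack(features, axis=-1).reshape(-1, len(features))
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    labels = labels.reshape(diff.shape)
    # Assume the cluster with the larger mean difference is the changed class.
    changed = int(np.argmax([diff[labels == c].mean() for c in (0, 1)]))
    return labels == changed
```

Detail subbands could be appended to the feature list in the same way; only the approximation subbands are used here to keep the sketch short.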

Fig. 2.1 Multiscale change detection

3. Using Principal Component Analysis and k-Means Clustering

Unsupervised change detection techniques mainly rely on the automatic analysis of change data constructed from multitemporal images. The change data are generally created using one of the following:

1) image differencing; 2) normalized difference vegetation index; 3) change vector analysis; 4) principal component analysis (PCA); and 5) image ratioing. Several unsupervised change detection methods use complicated data modeling and parameter estimation. Most of the unsupervised methods are developed based on image differencing. Image differencing-based algorithms accomplish change detection by subtracting, on a pixel basis, the images acquired at two time instances to produce a new image called the difference image. In the computed difference image, the pixels associated with land cover or land use changes present values significantly different from those of the pixels associated with unchanged areas. Changes are then identified by analyzing the difference image. The difference image can be analyzed by considering the spatial contextual information included in the neighborhood of each pixel. This approach, based on Markov random fields (MRFs), exploits the context of interpixel class dependence (MRF-based thresholding). The algorithm requires high computational power and is not feasible for near real-time change detection purposes. There are many other change detection methods that use the same framework for synthetic aperture radar (SAR) images and achieve satisfactory results by using complicated data modeling and parameter estimation. However, these methods are applied in the raw data domain and suffer from the interference of speckle noise.

The non-overlapping h × h blocks of the difference image are used to extract eigenvectors by applying PCA. Then, a feature vector for each pixel of the difference image is extracted by projecting its h × h neighborhood data onto the eigenvector space. The feature vector space is clustered into two clusters using the k-means algorithm. Each cluster is represented by a mean feature vector. Finally, change detection is achieved by assigning each pixel of the difference image to one of the clusters according to the minimum Euclidean distance between its feature vector and the mean feature vectors of the clusters.
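The following is a simplified sketch of this PCA plus k-means pipeline, using scikit-learn. The block size h, the number of retained eigenvectors, and the edge padding at the borders are illustrative assumptions rather than settings taken from the referenced paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def pca_kmeans_change_mask(diff, h=4, n_components=3):
    """Sketch: learn an eigenvector space from non-overlapping h x h blocks
    of the difference image, project each pixel's h x h neighborhood onto it,
    and cluster the resulting feature vectors with k-means (k = 2)."""
    H, W = diff.shape
    # Non-overlapping h x h blocks -> rows of a data matrix for PCA.
    Hc, Wc = (H // h) * h, (W // h) * h
    blocks = (diff[:Hc, :Wc]
              .reshape(Hc // h, h, Wc // h, h)
              .swapaxes(1, 2)
              .reshape(-1, h * h))
    pca = PCA(n_components=n_components).fit(blocks)
    # h x h neighborhood around each pixel (edge padded at the borders).
    pad = np.pad(diff, ((0, h), (0, h)), mode="edge")
    neigh = np.stack([pad[i:i + H, j:j + W].ravel()
                      for i in range(h) for j in range(h)], axis=1)
    feats = pca.transform(neigh)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    labels = labels.reshape(H, W)
    # Assume the cluster with the larger mean difference is the changed class.
    changed = int(np.argmax([diff[labels == c].mean() for c in (0, 1)]))
    return labels == changed
```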

Fig. 3.1 Using principal component analysis and k-means clustering

4. Using a Genetic Algorithm Approach

Most of the unsupervised methods are developed based on the analysis of the difference image. Transform-domain techniques are applied to reduce the effect of noise contamination and to analyze the difference image using a multiresolution structure. In the multiscale-based approach, the difference image computed in the spatial domain from the multitemporal images is decomposed using the undecimated discrete wavelet transform (UDWT). Then, for each pixel in the difference image, a multiscale feature vector is extracted using the subbands of the UDWT decomposition and the difference image itself. The final change detection map is obtained by clustering the multiscale feature vectors with the k-means algorithm into two disjoint classes: changed and unchanged. This method generally performs quite well, particularly in detecting changes under strong noise contamination, but it has problems in detecting accurate region boundaries between changed and unchanged regions, caused by the direct use of subbands from the UDWT decomposition. In addition, the method depends on the number of scales used in the UDWT decomposition. All the aforementioned unsupervised change detection methods depend on parameter tuning or on a priori assumptions in modeling the difference image data. The parameter tuning process and the a priori assumptions in modeling the difference image data make them unsuitable for change detection on different types of satellite images. For this reason, there is a need for a general-purpose unsupervised change detection method that performs well on different types of satellite images. Recent advances in computing technology make it possible to perform high-load computations very fast by employing parallel computing with high-powered processors. This motivates solving the change detection problem with a genetic algorithm (GA). The GA finds the final change detection mask by evolving an initial realization of the binary change detection mask through generations, as sketched below.
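The sketch below is a toy illustration of the GA idea rather than the actual algorithm from the literature: it evolves a population of binary change masks with selection, uniform crossover, and bit-flip mutation, using a made-up fitness that rewards separating large and small difference values. Population size, generation count, and mutation rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, diff):
    """Toy fitness: reward masks whose 'changed' pixels have large difference
    values and whose 'unchanged' pixels have small ones."""
    changed, unchanged = diff[mask], diff[~mask]
    if changed.size == 0 or unchanged.size == 0:
        return -np.inf
    return changed.mean() - unchanged.mean()

def ga_change_mask(diff, pop_size=20, generations=50, mut_rate=0.01):
    """Sketch: evolve a population of binary change masks by selection,
    uniform crossover, and bit-flip mutation."""
    shape = diff.shape
    pop = rng.random((pop_size,) + shape) < 0.5
    for _ in range(generations):
        scores = np.array([fitness(m, diff) for m in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[:pop_size // 2]]            # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cross = rng.random(shape) < 0.5             # uniform crossover
            child = np.where(cross, a, b)
            child ^= rng.random(shape) < mut_rate       # bit-flip mutation
            children.append(child)
        pop = np.concatenate([parents, np.array(children)], axis=0)
    scores = np.array([fitness(m, diff) for m in pop])
    return pop[int(np.argmax(scores))]
```

A realistic fitness function would also include a spatial regularity term so that isolated noisy pixels are penalized; the one above is only for illustration.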

5. Meaningful Sequential Time Series Analysis

Most change detection studies rely on image differencing, post-classification comparison methods, and change trajectory analysis, and the data are mostly treated as hyper-dimensional, but not necessarily as hyper-temporal. These methods therefore do not fully capitalize on the high temporal sampling rate which captures the dynamics of different land cover types, nor do they provide automated change detection capabilities. A time series is a sequence of data points measured at successive (often uniformly spaced) time intervals. Time series analysis comprises methods that attempt to understand the underlying forces structuring the data. Analyzing this structure enables the identification of patterns and trends, detection of change, clustering, modeling, and forecasting. In the time series context, complete clustering is when the entire time series is taken as a discrete object and clustered with conventional methods. In contrast, subsequence clustering is performed on streaming time series that are extracted with a sliding window from an individual time series. Time series analysis is less concerned with the global properties of a time series and more interested in its subsequences. The sequential extraction of subsequences is achieved by using a temporal sliding window of fixed length whose position is incremented by a natural number to extract sequential subsequences (see the sketch after this section). The signal processing and data mining communities have made wide use of the clustering of subsequence time series extracted using a temporal sliding window. To date, it has found very limited application on satellite time series data. Recently the data mining community's attention was drawn to a fundamental limitation of the clustering of subsequences extracted with a sliding window from a time series. The sliding window causes the clustering algorithms to create meaningless results, as it forms sine wave cluster centers regardless of the data set, which clearly makes it impossible to distinguish one dataset's clusters from another. This is due to the fact that each data point within the sliding window contributes to the overall shape of the cluster center as the window moves through the time series. There are two general approaches to classification that can be applied to time series data, namely supervised and unsupervised.

The supervised approach requires initial training on labeled pixels according to their land cover type. The disadvantage of using a supervised approach to perform change detection is the dependency on periodic high resolution imagery for updating the unchanged training sets over time. The supervised approach must also be robust to errors occurring within the training sets. The unsupervised approach does not require any training and detects change in the inherent properties of the signal. The supervised approach can provide "from what, to what" information on land cover change, while the unsupervised approach simply provides a change alarm to highlight areas of change for further investigation using, e.g., high resolution satellite data and field inspections. Generating training data at global and regional scales is a very labour-intensive and costly endeavor, which makes an unsupervised approach to automated land cover change detection a more attractive option.
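As a small illustration of the sliding-window subsequence extraction described above, the following sketch splits a per-pixel time series into overlapping subsequences. The window length, step, and the synthetic NDVI-like example series are assumptions for demonstration only.

```python
import numpy as np

def sliding_subsequences(series, window, step=1):
    """Sketch: extract overlapping subsequences of a 1-D time series with a
    sliding window of the given length, advanced by `step` samples."""
    series = np.asarray(series, dtype=np.float64)
    n = series.size
    starts = range(0, n - window + 1, step)
    return np.stack([series[s:s + window] for s in starts])

# Illustrative use: a 3-year NDVI-like series sampled every 16 days,
# split into year-long subsequences advanced one sample at a time.
t = np.arange(0, 3 * 23)                 # 23 composites per year (assumed)
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * t / 23)
subseqs = sliding_subsequences(ndvi, window=23)
print(subseqs.shape)                     # (number_of_windows, 23)
```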

6. Using Undecimated Discrete Wavelet Transform and Active Contours

Remote sensing imagery generally requires certain preprocessing corrections due to undesirable sensor characteristics and other disturbing effects before performing any data analysis on it. Typical corrections include noise reduction, radiometric calibration, sensor calibration, atmospheric correction, solar correction, topographic correction, and geometric correction. We assume that the changes between the two images under comparison are caused only by physical changes in the geographical area, and that those typical corrections either have no effect or have been carried out on the images before applying any change detection method. Unsupervised change detection techniques can be categorized into two major classes according to the data domain in which they operate: 1) the spatial-domain approach and 2) the transform-domain approach. The spatial-domain techniques directly extract certain statistical quantities from the input images, while the transform-domain techniques first apply a certain transformation, such as the undecimated discrete wavelet transform (UDWT) or the dual-tree complex wavelet transform (DT-CWT), on the input images, followed by a statistical analysis that mitigates the effect of noise on the change detection accuracy. One approach exploits an adaptive decision threshold for minimizing the overall change detection error under the assumption that the pixels of the difference image are spatially independent. Another, based on Markov random fields (MRFs), analyzes the difference image by considering the spatial contextual information included in the neighborhood of each pixel. These algorithms are applied in the spatial domain and provide impressive change detection results at the expense of high computational complexity; thus, they are not suitable for real-time change detection applications. A computationally efficient and effective method for conducting unsupervised change detection analyzes the difference image using principal component analysis (PCA) and the k-means clustering algorithm. The PCA is employed for dimension reduction and feature extraction, which is a canonical technique to find useful data representations in a space with a much reduced dimensionality. The eigenvector space is created by applying the PCA on non-overlapping

square blocks collected from the entire difference image. The number of eigenvectors determines the dimensionality of the feature vector. The feature vector at each pixel position is computed by projecting the local change of the pixel values onto the eigenvector space. The binary k-means clustering (i.e., k = 2) is then employed on the PCA-extracted feature vectors to compute the final change detection result. The algorithm produces promising results with a low computational cost. The PCA can only separate pairwise linear dependencies between data points and thus may fail in situations where the dependencies between the data points are highly nonlinear. Therefore, the PCA is prone to produce false detections due to noise interference. Transform techniques can be exploited to analyze the difference image and to reduce the effect of noise contamination through a multiresolution structure.

The DT-CWT is used to individually decompose each input image into one low-pass subband and six directional high-pass subbands at each scale of the decomposition. The DT-CWT coefficient differences resulting from the subbands of the two satellite images are analyzed in order to decide whether each pixel position belongs to the changed or unchanged class for each subband. A binary change detection map is thus formed for each subband, and all the produced subband maps are then merged by using both interscale fusion and intrascale fusion to yield the final change detection map. This method is free of parameter selection, except that the number of decomposition scales used in the DT-CWT decomposition must be set in advance. The attractive change detection performance and robustness against noise contamination are accomplished at the expense of high computational cost. In another approach, the log-ratio image is first obtained by taking the logarithm of the pixel ratio of the two satellite images, followed by multiresolution analysis using the UDWT to generate representations of the difference image at different resolutions. The final change detection result is obtained according to an adaptive scale-driven fusion algorithm. The method achieves a highly accurate change detection result, but a major concern is the selection of an appropriate detection threshold for each resolution. Recently, another UDWT-based multiresolution representation was exploited to decompose the difference image of the multitemporal images. A feature vector at each pixel is then formed by locally sampling the data from the multiresolution representation of the difference image. The final change detection map is obtained by clustering the multiscale feature vectors using the binary k-means algorithm to obtain two disjoint classes:

changed and unchanged. Overall, this method performs quite well, particularly in detecting changes even under strong noise interference. However, due to the spatial support of the local sampling structure employed in the feature vector computation, the boundary accuracy of the changed regions is sacrificed. The multiresolution analysis of the difference image, together with a level set implementation of the scalar Mumford-Shah segmentation, can also be employed to perform unsupervised change detection. The multiresolution representation of the difference image is achieved by iteratively downsampling the difference image by a factor of two in both directions. First, the difference image is segmented into changed and unchanged regions at the coarse resolution using the scalar Mumford-Shah segmentation. The segmentation result from the coarse resolution is upscaled by a factor of two in both directions and then used as the initial segmentation estimate for the change detection at the next finer resolution. This process is repeated on the next finer resolution levels until the final segmentation result reaches the same spatial support as the difference image. This method achieves change detection results comparable with several state-of-the-art change detection methods; however, it mainly depends on the initial segmentation achieved at the coarse resolution. The segmentation error yielded at the coarser resolutions will inevitably be propagated to the finer resolutions and result in performance degradation. Because of the downsampling process used in the multiresolution representation, this method is unable to detect changes whose spatial supports are lost through the multiresolution representation of the difference image. In order to alleviate the aforementioned concerns, an unsupervised change detection method should possess the following features: 1) high robustness against noise; 2) accurate boundaries of changed regions; 3) freedom from a priori assumptions in modeling the data distribution of the difference image; and 4) low computational complexity. The multiresolution representation is achieved by using the UDWT to benefit from its inherent robustness against noise interference. The UDWT is exploited, instead of the discrete wavelet transform (DWT), because of the following characteristics: 1) there is no downsampling operation involved, and thus it is free from aliasing problems; and 2) it is shift invariant. The active contour method is employed on the multiresolution representation to segment the difference image into changed and unchanged regions with accurate region boundaries. The active contour is free from any a priori assumption on the statistical modeling of the input data. It is robust to noise interference and holds good regularization properties. The model has also been extended to vector-valued data. Furthermore, a level set implementation of the active contour model makes it possible to perform the segmentation process with a moderate computational complexity.
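As a rough illustration of the active-contour segmentation step (without the multiresolution UDWT machinery), the sketch below applies a morphological Chan-Vese active contour, assumed from scikit-image, to a normalized difference image. Argument names differ slightly across scikit-image versions, so the iteration count is passed positionally; all other names are illustrative.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def active_contour_change_mask(img_t1, img_t2, iterations=100):
    """Sketch: segment the (normalized) difference image into changed and
    unchanged regions with a morphological Chan-Vese active contour."""
    diff = np.abs(img_t1.astype(np.float64) - img_t2.astype(np.float64))
    diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-12)  # scale to [0, 1]
    # Second positional argument is the number of iterations.
    seg = morphological_chan_vese(diff, iterations,
                                  init_level_set="checkerboard").astype(bool)
    # Assume the region with the larger mean difference is the changed class.
    if diff[seg].mean() < diff[~seg].mean():
        seg = ~seg
    return seg
```

In the full method, this segmentation would be run on each resolution level of the UDWT representation and the results fused, rather than on the raw difference image alone.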

7. Using Local Gradual Descent

Change detection can be defined formally as a clustering process that classifies input pixels into changed or unchanged categories when given two multitemporal images of the same geographical area. This method is based on the analysis of the difference image, which is the absolute-valued difference of the two temporal images. The difference image is first partitioned into non-overlapping blocks. A Gaussian mixture model (GMM) with two components is used for modeling the data distribution of the difference image. With the GMM, two Gaussian components with their means and covariances are obtained. One component provides an approximation for the data distribution of the changed pixels, and the other for the unchanged pixels. The sampling process is done on a per-pixel basis, where the neighborhood data around each pixel constitute a sample. The sample pixels are modified by the so-called local gradual descent matrix (LGDM), whose values descend from the center pixel toward the outside within an effective distance. The effective distance of the LGDM indicates the distance, in number of pixels, up to which the center pixel has a decreasing modification effect on neighboring pixels. The LGDM visits each sample and creates small variations in the pixel values of the sample in an attempt to shift it toward the correct Gaussian component. A critical region occurs somewhere between the Gaussian components where the samples are unstable, meaning they are hardly or wrongly classified to their Gaussian components by the Mahalanobis classifier. Thus, the modification effect of the LGDM increases as samples approach the critical region. This is sensible in that a sample close to its Gaussian component center needs little or no modification, since its category is already known with high probability. However, samples close to the critical region require significant modifications that force them to escape from the unstable state. Changes in the pixel values of each sample cause fluctuations in nearby samples and provide an important feature for exploring global structures. Collectively, all variations in the pixel values of neighboring samples create an information flow across the samples visited by the LGDM. In this way, the local contextual information is carried to neighboring sites. Thus, the collaboration of the samples helps important global structures to be explored. All the samples modified by the LGDM form the feature vectors. Finally, change detection is achieved by partitioning the feature vector space into two clusters, changed or unchanged, using k-means clustering.
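A minimal sketch of the GMM modeling step, without the LGDM refinement, is given below. It assumes scikit-learn's GaussianMixture, uses a small illustrative neighborhood as the per-pixel sample, and labels each pixel by its assigned component; none of the names come from the referenced paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_change_mask(diff, w=3):
    """Sketch: model per-pixel neighborhood samples of the difference image
    with a two-component Gaussian mixture and label each pixel by the
    component it is assigned to (changed vs. unchanged)."""
    H, W = diff.shape
    r = w // 2
    pad = np.pad(diff, r, mode="edge")
    # Each sample is the flattened w x w neighborhood around a pixel.
    samples = np.stack([pad[i:i + H, j:j + W].ravel()
                        for i in range(w) for j in range(w)], axis=1)
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=0).fit(samples)
    labels = gmm.predict(samples).reshape(H, W)
    # Assume the component whose mean vector has the larger norm is "changed".
    changed = int(np.argmax(np.linalg.norm(gmm.means_, axis=1)))
    return labels == changed
```

The LGDM stage would sit between the sampling and the final clustering, nudging each neighborhood sample toward the Gaussian component it most plausibly belongs to before the feature vectors are partitioned.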

8. Using Dual-Tree Complex Wavelet Transform

Due to the decimation operation encountered in the wavelet decomposition, the two input images need to be first scaled up by a factor of two in both dimensions in order to produce a final change-detection result with the same image size as the original image. The two scaled-up input images are then represented using the dual-tree complex wavelet transform (DT-CWT), as it possesses attractive properties for image processing, namely shift invariance and more directional subband information compared with the discrete wavelet transform (DWT). The change-detection problem is then tackled by employing Bayesian inference on each subband difference image, obtained by taking the absolute-valued difference of the corresponding DT-CWT high-pass subbands at different scales and directions. To estimate the probability densities of the Bayesian framework and their parameters, the expectation-maximization (EM) algorithm is exploited to iteratively refine the estimation accuracy. Based on the final estimated densities, an unsupervised thresholding process is then derived and applied at each pixel location to determine whether the pixel intensity involves a change or no change. The binary mask of the final change detection is formed by eventually merging the intrascale and interscale information.
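The sketch below illustrates only the first stage of this approach: computing the absolute-valued magnitude differences of corresponding DT-CWT high-pass subbands. It assumes the open-source 'dtcwt' Python package; the subsequent EM-based thresholding and intrascale/interscale fusion are not shown (the EM sketch in Section 1 could serve as a stand-in for each subband).

```python
import numpy as np
import dtcwt  # assumption: the 'dtcwt' Python package is available

def dtcwt_subband_differences(img_t1, img_t2, nlevels=3):
    """Sketch: decompose both images with the dual-tree complex wavelet
    transform and return the absolute-valued magnitude differences of the
    corresponding high-pass subbands, one array per scale (6 directions)."""
    transform = dtcwt.Transform2d()
    p1 = transform.forward(img_t1.astype(np.float64), nlevels=nlevels)
    p2 = transform.forward(img_t2.astype(np.float64), nlevels=nlevels)
    diffs = []
    for h1, h2 in zip(p1.highpasses, p2.highpasses):
        # h1, h2: complex arrays of shape (rows, cols, 6) at this scale.
        diffs.append(np.abs(np.abs(h1) - np.abs(h2)))
    return diffs
```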

9. Conclusion

Different change detection algorithms for satellite images have been discussed here. Each of these algorithms has advantages but also suffers from some drawbacks. Unsupervised approaches are common to all of the above-mentioned methods, as they give the best results for the change detection of satellite images. Also, the algorithms that operate in the wavelet domain perform better, as they are fairly robust against noise interference.

10. Applications

The change detection of images has wide applications in environmental surveillance, remote sensing, medical diagnosis, and infrastructure monitoring. Environmental disasters like forest fires and floods can be detected using change detection of satellite images in areas where direct human access is not possible.

11. References

[1] Paolo Gamba, Fabio Dell'Acqua, and Gianni Lisini, "Change Detection of Multitemporal SAR Data in Urban Areas Combining Feature-Based and Pixel-Based Techniques," IEEE journals.

[2] Turgay Celik, "Multiscale Change Detection in Multitemporal Satellite Images," IEEE journals.

[3] Turgay Celik, "Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering," IEEE journals.

[4] Turgay Celik, "Change Detection in Satellite Images Using a Genetic Algorithm Approach," IEEE journals.

[5] Brian P. Salmon, Jan Corne Olivier, Konrad J. Wessels, Waldo Kleynhans, Frans van den Bergh, and Karen C. Steenkamp, "Unsupervised Land Cover Change Detection: Meaningful Sequential Time Series Analysis," IEEE journals.

[6] Turgay Celik and Kai-Kuang Ma, "Multitemporal Image Change Detection Using Undecimated Discrete Wavelet Transform and Active Contours," IEEE journals.

[7] Zeki Yetgin, "Unsupervised Change Detection of Satellite Images Using Local Gradual Descent," IEEE journals.

[8] Turgay Celik and Kai-Kuang Ma, "Unsupervised Change Detection for Satellite Images Using Dual-Tree Complex Wavelet Transform," IEEE journals.

Profile

Sreeja K.S received the B.Tech degree from Rajagiri School of Engineering and Technology under Mahatma Gandhi University, Kochi, Kerala, India. She is currently doing M.Tech in Electronics with specialization in Signal Processing at Govt. College of Engineering Cherthala under Cochin University of Science and Technology, Kerala, India.

Prasanth C.R received the Diploma in Electronics and Instrumentation from Govt. Polytechnic Cherthala (first class with distinction, third rank, and gold medalist). He passed B.Tech in Electronics and Communication Engineering from M.G. University, Kerala, placed in first class with distinction. He is currently pursuing M.Tech in Digital Signal Processing at Govt. Engineering College Cherthala under Cochin University of Science and Technology, Kerala, India. He has published three research articles in international journals. His areas of interest include digital image processing, statistical signal processing, etc.

Joyal Joseph is currently pursuing B.Tech in Electronics and Communication Engineering at College of Engineering Cherthala under Cochin University of Science and Technology, Kerala, India.

Anuraj R is currently pursuing B.Tech in Electronics and Communication Engineering at College of Engineering Cherthala under Cochin University of Science and Technology, Kerala, India.
