Analysis and Comparative Study of Image Fusion Techniques for Land Use and Land Cover Classification on Anthrasanthe Hobli, Karnataka – Case Study

DOI: 10.17577/IJERTV3IS061266




Prof. Dr. P. K. Srimani, F.N.A.Sc., Prof & Director, R&D (CS), Bangalore University, DSI, Bangalore, India

Mrs. Nanditha Prasad (Ph.D.), Asst. Prof, Department of Computer Science, Government Science College, Bangalore, India

Abstract: Land use/cover classification is one of the most important applications of remote sensing, and image fusion algorithms are among the most promising methods for land use and land cover mapping using remotely sensed data. An attempt has been made to evaluate and test image fusion techniques applied to Cartostat-I and LISS-IV images for land use and land cover mapping of Anthrasanthe Hobli, Heggadadevanakote Taluk, Mysore District, Karnataka. Six image fusion techniques were tested to evaluate their enhancement capabilities for extracting different land use and land cover classes: Principal Component Analysis (PCA), Brovey Transform, Gram-Schmidt, Modified IHS (Intensity, Hue, Saturation), Wavelet-PCA and Wavelet-IHS. The fused images were subjected to visual and statistical comparison for evaluation.

Both visual and statistical comparison showed that the Wavelet-PCA and Wavelet-IHS methods gave the highest visual enhancement and maintained the quantitative information of the original image. Ten land use and land cover classes were identified in the study area for classification. The multispectral image and the fused images were classified using the maximum likelihood algorithm, and accuracy assessment was carried out on the classified images. The overall classification accuracies obtained for the six image fusion techniques are PCA – 70%, Modified IHS – 78%, Brovey Transform – 75%, Gram-Schmidt – 73%, Wavelet-PCA – 85% and Wavelet-IHS – 88%. In areas with similar terrain characteristics, Wavelet-IHS fusion gave better land cover and land use classification accuracy and is recommended as a means of enhancing mapping accuracy.

Keywords: Image fusion, Wavelet transforms, Land use and Land cover, Classification, Accuracy assessment

  1. INTRODUCTION

    Image fusion techniques are used to integrate different information sources and to take advantage of the complementary spatial/spectral resolution characteristics typical of remote-sensing imagery. Data fusion is one of the major applications of remotely sensed data obtained from earth-orbiting satellites because of the repetitive coverage at short intervals from different satellites with different sensor characteristics. Image fusion techniques are used to generate visually appealing images and to provide detailed input to later image analysis such as image classification, change detection and landslide hazard detection. Automated tasks, such as feature extraction and segmentation, are also found to benefit from data fusion. Providing high spatial- and spectral-resolution remote sensing data requires effective image fusion techniques. The principal interest of fusing multi-resolution image data is to create composite images of enhanced interpretability [1].

    Generally, image fusion methods can be differentiated into three levels: pixel level, feature level and knowledge or decision level. Pixel-level image fusion techniques can be grouped into three classes: color (tone) related techniques, statistical methods and numerical methods. The color technique class includes color composition of three image bands in the RGB color space and more sophisticated color transformations such as the Intensity-Hue-Saturation (IHS) or Hue-Saturation-Value (HSV) transforms. Statistical approaches were developed on the basis of band statistics, including correlation and filters, such as the principal component analysis (PCA) transform. The numerical methods employ arithmetic operations of image multiplication, summation and image rationing. Advanced numerical approaches use wavelet transforms in a multi-resolution environment [2][3].

    Many publications have focused on how to fuse high resolution panchromatic images with lower resolution multispectral data to obtain high resolution multispectral imagery while retaining the spectral characteristics of the multispectral data. These methods seem to work well for many applications, especially for single-sensor, single-date fusion. However, they exhibit significant color distortions in multitemporal and multisensor case studies [4].

    In this study an attempt has been made to evaluate and test image fusion techniques applied to Cartostat-I and LISS-IV images for land use and land cover mapping of Anthrasanthe Hobli, Heggadadevanakote Taluk, Mysore District, Karnataka. Six fusion methods were applied for comparison: Modified IHS fusion, Principal Component Analysis (PCA) fusion, Gram-Schmidt fusion, Brovey Transform fusion, Wavelet-PCA fusion and Wavelet-IHS fusion. These techniques were used to generate hybrid images from the LISS-IV and Cartostat-I images. The evaluation of the hybrid images is based on quantitative criteria including the spectral and spatial properties and the definition of the images. The visual qualitative results were analyzed. The correlation between the original multispectral and the fused images, and the statistical parameters of the histograms of the various frequency bands, were determined and analyzed. The merged images and the multispectral image were classified by means of maximum likelihood supervised classification, and the quality of the merged images was examined by comparing the classification accuracy results. The objective is to compare the efficiency of six different techniques for merging a high spatial resolution image with a multispectral image to improve the extraction and identification of different land use and land cover classes.

  2. STUDY AREA

    The study area was Anthrasanthe Hobli, Heggadadevanakote Taluk, Mysore District, Karnataka, India, located at Latitude 12°5'23"N and Longitude 76°19'47"E. It contains the Kabini reservoir. Agriculture in this taluk is both rain-fed and irrigated. The major crops of the region are cotton, grams, groundnut, jowar, maize, ragi, rice, sugarcane and sunflower.

    Fig 1. Study area (Anthrasanthe Hobli, HD Kote Taluk, Mysore District)

  3. MATERIALS AND METHODS

A. Materials

  • The IRS-1D LISS-IV image of the above area for March 2012, with a spatial resolution of 5.8 m, was used for the study. A false color composite (FCC) image was generated using bands 3, 2 and 1 of the satellite data.

  • Cartostat-I image of the above area for March 2012 with spatial resolution of 2.5m was used.

  • The Survey of India (SOI) toposheets 58A-1/5/9 and 57D-4/8/12 at 1:50,000 scale were used as reference data.

  • Land use and Land cover map of the area.

Fig 2. LISS-IV multispectral image

Fig 3. Cartostat-I image

    B. Methodology

      1. Image Fusion Techniques

        The six image fusion methods used are given below.

        1. Modified Intensity-Hue-Saturation Transform Technique (IHS)

          IHS is a common way of fusing a high spatial resolution, single-band panchromatic image with a low spatial resolution multispectral image. The R, G and B bands of the multispectral image are transformed into IHS components, and the intensity component is replaced by the pan image.

          After performing the inverse transformation, the resulting high spatial resolution multispectral image shows enhanced spatial detail and improved textural characteristics, but it also shows spectral distortion; the color distortion of the IHS technique is often significant. Modified IHS fusion has been developed and used for merging the fused multispectral bands back to the original data: the panchromatic image replaces the intensity in the original IHS image and the image is transformed back into the RGB color space. The Modified IHS method accepts only three input bands, and it produces an output image best suited for visual interpretation [5].
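The substitution idea behind this family of methods can be illustrated with a minimal NumPy sketch. This uses the simple average-intensity ("fast IHS") formulation, in which swapping the intensity for the pan band reduces to adding the pan-minus-intensity detail to every band; it is an illustration of the principle, not the authors' exact Modified IHS implementation, and the function name is hypothetical.

```python
import numpy as np

def ihs_substitution_fuse(ms, pan):
    """Fast-IHS pan-sharpening sketch: replace intensity with the pan band.

    ms  : (H, W, 3) multispectral image, already resampled to the pan grid
    pan : (H, W) panchromatic band, assumed histogram-matched to the intensity
    """
    intensity = ms.mean(axis=2)      # I = (R + G + B) / 3
    delta = pan - intensity          # spatial detail contributed by the pan band
    return ms + delta[..., None]     # equivalent to IHS -> swap I -> inverse IHS
```

After fusion, the intensity of the output equals the pan band, which is exactly the substitution the text describes.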

        2. Principle Component Analysis Fusion Technique (PCA)

          The aim of the PCA method is to reduce the dimensionality of multivariate data while preserving much of the relevant information: it translates a correlated data set into an uncorrelated one, reducing the redundancy of the image data. The multispectral image is transformed by PCA, and the eigenvalues and corresponding eigenvectors of the correlation matrix between the individual bands are calculated to obtain the principal components. The first principal component of the multispectral image is replaced with the matched pan image to obtain a new first principal component (FPC). The FPC and the other principal components are then transformed with the inverse PCA to form the fused image. This preserves the original colors in all the possible combinations [6].
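A minimal sketch of PCA component substitution follows, assuming the pan band is statistically matched (mean/std) to the first component before replacement; the helper name and the use of the covariance (rather than correlation) matrix are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def pca_fuse(ms, pan, eps=1e-9):
    """PCA component-substitution sketch.

    ms  : (H, W, B) multispectral image resampled to the pan grid
    pan : (H, W) panchromatic band
    """
    H, W, B = ms.shape
    X = ms.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    Xc = X - mu
    # eigen-decomposition of the band covariance matrix
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]        # order by descending variance
    pcs = Xc @ vecs                               # forward PCA
    # match pan statistics to the first principal component, then substitute
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + eps) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p                                 # new first principal component
    return (pcs @ vecs.T + mu).reshape(H, W, B)   # inverse PCA
```

Because only the first component is replaced, the remaining components (and hence most of the color information) pass through unchanged.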

        3. The Brovey Transform Fusion Technique- (BT)

          The BT uses a mathematical combination of the multispectral bands and the PAN band: each resampled multispectral pixel is multiplied by the ratio of the corresponding panchromatic pixel intensity to the sum of all the multispectral intensities. The general equation uses the red, green and blue (RGB) bands and the panchromatic band as inputs to produce new red, green and blue bands. The BT is based on spectral modeling and increases the visual contrast at the high and low ends of the data's histogram. The BT is limited to three bands, and the multiplicative technique introduces significant radiometric distortion [7].
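The Brovey ratio described above reduces to a one-line NumPy operation; this sketch adds a small epsilon to avoid division by zero in dark pixels (the epsilon and function name are assumptions of this illustration).

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-9):
    """Brovey transform sketch: band_new = band * pan / (sum of MS bands).

    ms  : (H, W, 3) multispectral image resampled to the pan grid
    pan : (H, W) panchromatic band
    """
    ratio = pan / (ms.sum(axis=2) + eps)   # pan divided by the band sum
    return ms * ratio[..., None]           # scale every band by the ratio
```

Since all bands are scaled by the same ratio, the relative band proportions (color hue) are preserved while intensity follows the pan band.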

        4. Gram-Schmidt Fusion Technique (GS)

          In the Gram-Schmidt spectral technique, a simulation of the PAN band is performed using the lower resolution spectral bands: the high resolution PAN band is blurred by an appropriate factor, sub-sampled and interpolated up to an appropriate scale. The GS orthogonalization transformation, which removes the redundant information in the data, is applied to the simulated PAN band and the spectral bands, with the simulated lower spatial resolution PAN image used as the first band in the GS transformation. The statistics of the higher spatial resolution PAN image are adjusted to the statistics of the first transform band resulting from the GS transformation, and the adjusted higher spatial resolution PAN image then replaces the first transform band. Finally, an inverse transform is applied to produce the higher resolution spectral bands [8].

        5. Wavelet- PCA Fusion Technique Method

          The wavelet transform is applied on the images by producing a set of low resolution PAN images from the high resolution PAN image using wavelet coefficients for each level. After decomposing the PAN band, the resulting low resolution PAN band is replaced with a multispectral band at the same resolution level. Then, a reverse wavelet transform is performed to convert the data to the original resolution level of PAN.

          In the Wavelet-integrated PCA method, PCA is applied on the multispectral image prior to the wavelet analysis. After applying a histogram match between the first PC and the PAN image, the first PC is replaced with the PAN band. The inverse transform is applied on the image to construct a fused RGB image [9].
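The substitution step common to both wavelet-based methods can be sketched with a hand-rolled one-level 2-D Haar transform (a stand-in for whatever wavelet library and filter the authors actually used; image dimensions are assumed even, and the function names are illustrative).

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar decomposition: approximation + 3 detail subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    cA = (a + b + c + d) / 2.0   # approximation (low-resolution) subband
    cH = (a + b - c - d) / 2.0   # horizontal detail
    cV = (a - b + c - d) / 2.0   # vertical detail
    cD = (a - b - c + d) / 2.0   # diagonal detail
    return cA, (cH, cV, cD)

def haar_idwt2(cA, details):
    """Exact inverse of haar_dwt2."""
    cH, cV, cD = details
    out = np.empty((2 * cA.shape[0], 2 * cA.shape[1]))
    out[0::2, 0::2] = (cA + cH + cV + cD) / 2.0
    out[0::2, 1::2] = (cA + cH - cV - cD) / 2.0
    out[1::2, 0::2] = (cA - cH + cV - cD) / 2.0
    out[1::2, 1::2] = (cA - cH - cV + cD) / 2.0
    return out

def wavelet_substitute(pan, spectral_component):
    """Replace the pan approximation subband with a spectral component
    (e.g. the first PC or the intensity layer) given at half resolution."""
    _, details = haar_dwt2(pan)
    return haar_idwt2(spectral_component, details)
```

The detail subbands carry the pan image's high-frequency spatial structure, while the substituted approximation carries the spectral information, which is why these hybrids preserve color better than plain component substitution.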

        6. Wavelet- IHS Fusion Technique Method

The Wavelet-IHS fusion method preserves the spatial and color information of the LISS-IV data: it preserves color information much better while maintaining competitive spatial information, so the fused image is similar to the multispectral image. The wavelet-integrated IHS fusion method applies a forward IHS transform to a true colour or near-infrared multispectral composite. The intensity layer is extracted and wavelet-fused with the high resolution panchromatic image; the intensity layer is then replaced by the fusion product and an inverse IHS colour transform is performed. The fused image thus acquires improved spatial resolution while preserving its spectral characteristics. IHS combined with wavelet decomposition is helpful for preserving both spectral and spatial information [10][11].

The above fusion techniques were applied to merge the LISS-IV and Cartostat-I images. Visual image interpretation was performed to compare the merged images with the multispectral image. The statistical parameters min, max, mode, median, mean and standard deviation were determined for analysis and comparison between the fused images and the original multispectral image. Ten land use and land cover classes were identified. Maximum likelihood supervised classification was applied to all the fused images and the multispectral image, accuracy assessment was performed for all the images, and the best fusion technique for land use and land cover classification was determined.

4. RESULTS

  1. Visual Evaluation:

    The fused images were evaluated visually for spatial and spectral resolutions. The images from Figure 4 to Figure 9 revealed that the spatial and the spectral resolutions are improved, in comparison to the original images. The fused images contain both the structural details of the higher spatial resolution panchromatic image and the rich spectral information from the multispectral image. In the original LISS-IV images, it is very difficult to identify and classify features like parcels of land, roads, trees and small buildings due to scale limitation.

    The colors of the features in the fused images have changed. This color distortion effect is largest in the PCA method, followed by the Gram-Schmidt, Brovey Transform and Modified IHS techniques. Among the six methods, Wavelet-PCA and Wavelet-IHS give the best results in terms of color conservation; the spectral characteristics of the Wavelet-IHS and Wavelet-PCA fused images are closest to the original multispectral image.

    The Wavelet-IHS and Wavelet-PCA techniques also give better spatial resolution than the other fusion methods. The images obtained from LISS-IV and Cartostat-I by the wavelet methods have a better visual effect, as shown in Figure 8 and Figure 9.

    Fig 4. Modified IHS fused image

    Fig 5. PCA fused image

    Fig 6. Brovey Transform fused image

    Fig 7. Gram-Schmidt Fused image

    Fig 8. Wavelet PCA fused image

    Fig 9. Wavelet- IHS fused image

  2. Statistical Parameter Evaluation

    The statistical parameters are displayed in Table 1. The first parameter for comparison is the mean, which describes the central location of the data, i.e. where the DN histogram curve is positioned horizontally. Figures 10 and 11 illustrate a graphical comparison among the six techniques and the original multiband image. Figure 10 shows that the Brovey Transform image has lower mean values than the other images, followed by PCA and Modified IHS. The Wavelet-PCA and Wavelet-IHS images have values close to those of the original multiband image. The Brovey image, having low mean values, appears darker than the other images, while Gram-Schmidt had mean values greater than the original image and the other merged images, indicating a brighter image.

    The other tested parameter is the standard deviation, which expresses the variation of the brightness values of the image. The standard deviation values of the Wavelet-IHS and Wavelet-PCA methods are significantly closer to the multispectral image than those of the other methods, as shown in Figure 11. Table 1 shows that the Brovey image has a lower standard deviation than the original image, which results in less contrast in brightness and in the definition of environmental boundaries.
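The per-band statistics compared above are straightforward to compute; this small helper (a hypothetical utility, not the authors' tooling, and omitting the mode) shows one way to tabulate them for any fused image.

```python
import numpy as np

def band_stats(img):
    """Per-band min, max, mean, median and std for an (H, W, B) image."""
    flat = img.reshape(-1, img.shape[2]).astype(float)  # pixels as rows
    return {
        "min": flat.min(axis=0),
        "max": flat.max(axis=0),
        "mean": flat.mean(axis=0),
        "median": np.median(flat, axis=0),
        "std": flat.std(axis=0),
    }
```

Running this on the original multispectral image and on each fused product gives the columns of Table 1 directly, so the closeness of means and standard deviations can be checked numerically.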

    Fig 10. Mean values of fused images

    Fig 11. Standard deviation values of fused images.

    TABLE 1. Statistical parameters of fused images

    | Images              | Layers | Min | Max | Mean   | Median | Mode | Std. Dev |
    |---------------------|--------|-----|-----|--------|--------|------|----------|
    | Pan                 | Band 1 | 40  | 931 | 91.32  | 92     | 92   | 17.01    |
    | Multispectral       | Band 1 | 51  | 368 | 75.61  | 72     | 69   | 10.20    |
    |                     | Band 2 | 32  | 401 | 63.55  | 57     | 52   | 18.05    |
    |                     | Band 3 | 34  | 427 | 127.49 | 135    | 36   | 40.40    |
    | PCA_Merged          | Band 1 | 1   | 310 | 34.17  | 33     | 31   | 10.19    |
    |                     | Band 2 | 1   | 283 | 27.09  | 25     | 18   | 13.00    |
    |                     | Band 3 | 1   | 541 | 50.67  | 55     | 56   | 18.22    |
    | Modified IHS Fusion | Band 1 | 1   | 581 | 53.45  | 52     | 56   | 10.58    |
    |                     | Band 2 | 1   | 538 | 44.62  | 40     | 37   | 13.66    |
    |                     | Band 3 | 1   | 931 | 87.68  | 92     | 89   | 24.27    |
    | Brovey Transform    | Band 1 | 10  | 263 | 25.65  | 25     | 25   | 3.49     |
    |                     | Band 2 | 7   | 244 | 21.27  | 20     | 18   | 5.23     |
    |                     | Band 3 | 11  | 422 | 42.90  | 46     | 47   | 12.49    |
    | Gram Schmidt        | Band 1 | 44  | 179 | 84.17  | 84     | 85   | 9.06     |
    |                     | Band 2 | 30  | 176 | 80.24  | 81     | 82   | 12.8     |
    |                     | Band 3 | 51  | 301 | 143.39 | 143    | 142  | 18.97    |
    | Wavelet-PCA         | Band 1 | 1   | 491 | 74.68  | 72     | 69   | 10.83    |
    |                     | Band 2 | 1   | 460 | 62.71  | 57     | 51   | 18.12    |
    |                     | Band 3 | 1   | 871 | 126.47 | 133    | 128  | 39.85    |
    | Wavelet-IHS         | Band 1 | 1   | 563 | 74.88  | 73     | 67   | 10.69    |
    |                     | Band 2 | 1   | 474 | 62.78  | 57     | 50   | 17.95    |
    |                     | Band 3 | 1   | 532 | 126.54 | 133    | 35   | 40.05    |

  3. Classification Evaluation

The images were preprocessed using ERDAS IMAGINE 9.2. Maximum likelihood supervised classification and accuracy assessment were applied to all the fused images and the LISS-IV multispectral image. To select the training data for classification, 1:50,000 scale standard topographic maps and expert visual interpretation were used as reference. Fairly dense forest, degraded forest, reservoir backwater land, canal irrigated lands, upland irrigated lands, upland irrigated land without crop, agriculture land with crop, agriculture land1 without crop (whitish tone), agriculture land2 without crop (greenish tone) and water bodies were selected as classes. Signature extraction was done, followed by classification. Accuracy assessment was calculated using an error matrix, which showed both the producer's and the user's accuracy. For the accuracy assessment, 250 randomly selected pixels were used for each image, chosen through a stratified random method. The land use map and topographic sheets were used as reference data to observe the true classes.
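The overall accuracy and kappa figures derived from such an error matrix follow standard definitions, sketched below (the function name is illustrative; the row/column convention is an assumption, and kappa is symmetric to it anyway).

```python
import numpy as np

def overall_accuracy_and_kappa(error_matrix):
    """Overall accuracy and Cohen's kappa from a square error (confusion)
    matrix whose rows are classified labels and columns reference labels."""
    cm = np.asarray(error_matrix, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1.0 - pe)                     # kappa
```

For example, a two-class matrix [[50, 10], [5, 35]] gives an overall accuracy of 0.85, while kappa discounts the agreement expected by chance from the row and column totals.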

| Images              | Overall Accuracy % | Kappa  |
|---------------------|--------------------|--------|
| Multispectral       | 80.08              | 0.7761 |
| PCA_Merged          | 70.01              | 0.6694 |
| Modified IHS Fusion | 78.52              | 0.7584 |
| Brovey Transform    | 75.00              | 0.7201 |
| Gram Schmidt        | 73.44              | 0.7024 |
| Wavelet-PCA         | 85.33              | 0.8333 |
| Wavelet-IHS         | 88.28              | 0.8661 |

The overall accuracy and kappa statistics were used to perform a classification accuracy assessment based on error matrix analysis. Table 2 shows the classification and kappa results. The comparison of overall accuracy and kappa for the classifications shows that the Wavelet-IHS transformation method is the most appropriate for fusion, and higher kappa values were obtained with the wavelet-based methods. Maximum likelihood classification did not provide satisfactory results in distinguishing fairly dense forest, agriculture land with crop and upland irrigated land, due to the overlapping of spectral values, in most of the fused images except the Wavelet-PCA and Wavelet-IHS fusion images, as observed in Figures 17 and 18. Accuracy for upland irrigated lands, reservoir backwater land, degraded forest and agriculture land with crop was relatively low because of similar scattering mechanisms, as shown in the classification results in Figures 12 to 16. Canal irrigated lands, upland irrigated land without crop, agriculture land1 without crop (whitish tone), agriculture land2 without crop (greenish tone) and water bodies were classified correctly, as observed in Figures 12 to 18. The overall accuracy of Wavelet-IHS and Wavelet-PCA is above 85% and their kappa statistics are above 0.83.

Fig 12. Modified IHS-Classified image

The above analysis and comparison lead to the conclusion that the Wavelet-IHS method can preserve the spectral characteristics of the source multispectral image and the high spatial resolution characteristics of the panchromatic image, and that it is best suited for the fusion of LISS-IV multispectral and Cartostat-I PAN images.

TABLE 2: Overall Accuracy and Kappa Values

Fig 13. PCA Classified image

Fig 14. Brovey Transform Classified image

Fig 15. Gram-Schmidt Classified image

Fig 16. Wavelet-PCA Classified image

Fig 17. Wavelet-IHS Classified image

Fig 18. LISS- IV Multispectral image classification

CONCLUSION

Six fusion methods were tested for their suitability for extracting different land use and land cover classes. Visual enhancement and statistical comparison were used for the evaluation. In general, all the methods enhance the spatial resolution of the image. The visual inspection showed that the Wavelet-PCA and Wavelet-IHS methods had better image quality, and the Wavelet-IHS method preserved the natural colors of the original multispectral image better than the other fused images.

The statistical comparison showed that the Wavelet-IHS method maintained the quantitative information of the original image. The classification results indicated that, for the study area, classification was most accurate using the Wavelet-IHS technique, with an overall accuracy of 88.28% and a kappa value of 0.866. For areas with similar terrain characteristics, Wavelet-IHS fusion gave better land cover and land use classification accuracy; hence the Wavelet-IHS fusion method can be utilized for enhancing the accuracy of mapping in such areas.

This study is a successful experience with the wavelet-transform-based fusion approach. It is shown that the proposed wavelet transform approach improves the spatial resolution of a multispectral image while also preserving much of its spectral component. Some features that cannot be perceived in the original multispectral images could be identified in the fused ones. The fused images have higher accuracy and information content compared to the multispectral image.

ACKNOWLEDGEMENT

The authors wish to place on record the support given by the Karnataka State Remote Sensing Applications Centre in providing the data.

REFERENCES

  1. C. Pohl and J.L. Van Genderen, Multisensor image fusion in remote sensing: concepts, methods, and applications, International Journal of Remote Sensing, 19(5), pp. 823-854, 1998.

  2. L. Bruzzone and F. Melgani, A data fusion approach to unsupervised change detection, in: Geoscience and Remote Sensing Symposium, IGARSS '03, Proceedings, IEEE International, vol. 2, Toulouse, pp. 1374-1376, 2003.

  3. L. Alparone, B. Aiazzi, S. Baronti, A. Garzelli and F. Nencini, A critical review of fusion methods for true colour display of very high resolution images of urban areas, 1st EARSeL Workshop of the SIG Urban Remote Sensing, Humboldt-Universität zu Berlin, 2-3 March 2006, unpaginated CD-ROM.

  4. K.G. Nikolakopoulos, Comparison of nine fusion techniques for very high resolution data, Photogrammetric Engineering & Remote Sensing, 74(5), pp. 647-659, 2008.

  5. Y. Siddiqui, The modified IHS method for fusing satellite imagery, ASPRS 2003 Annual Conference Proceedings, Anchorage, Alaska, 2003.

  6. W.J. Chavez, S.C. Sides and J.A. Anderson, Comparison of three different methods to merge multiresolution and multispectral data: TM & SPOT Pan, Photogrammetric Engineering & Remote Sensing, 57(3), pp. 295-303, 1991.

  7. W.A. Hallada and S. Cox, Image sharpening for mixed spatial and spectral resolution satellite systems, Proc. of the 17th International Symposium on Remote Sensing of Environment, 9-13 May, pp. 1023-1032, 1983.

  8. C.A. Laben, V. Bernard and W. Brower, Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening, US Patent 6,011,875, 2000.

  9. R. Pouncey, K. Swanson, ERDAS Manual, in: K. Harth (Ed.), 5th ed., ERDAS, USA, 1999, p. 193.

  10. Y. Zhang, System and method for image fusion, United States Patent 20040141659, 2004.

  11. G. Hong and Y. Zhang, High resolution image fusion based on wavelet and IHS transformations, Proceedings of the IEEE/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, Berlin, pp. 99-104, 2003.
