- Open Access
- Authors : Pavani N, Aruna A, Manas Ranjan Biswal
- Paper ID : IJERTV8IS010094
- Volume & Issue : Volume 08, Issue 01 (January – 2019)
- Published (First Online): 30-01-2019
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A New Agglomerative Hierarchical Clustering Combined with Guided Filter for Hyperspectral Image Classification
Pavani. N1, Aruna. A2, Manas Ranjan Biswal2
1PG Student, 2Asst. Professor
Sanketika Vidya Parishad College of Engineering, Visakhapatnam, India.
Abstract:- The growth of hyperspectral image applications has been explosive in recent years and has motivated much research on how to classify objects effectively by their spectral features. To enhance classification accuracy, many spectral-spatial approaches have been proposed in place of traditional pixel-wise classification. We combine hierarchical clustering with a guided filter to mine spatial information effectively and to optimize classification accuracy. To verify the usefulness of the two proposed methods, we evaluate their performance on two benchmark datasets. Experimental results suggest that the proposed approaches achieve better accuracy.
Keywords:- k-nearest neighbor, hyperspectral image classification, guided filter
INTRODUCTION
With the development of hyperspectral sensors, hyperspectral images (HSI) have become easy to obtain. HSI have therefore been widely used in many fields, such as land cover [1,2], environmental protection [3], agriculture [4,5], and so on, due to their abundant spectral and spatial information. HSI classification, as a critical problem for HSI applications, has attracted more and more attention.
The goal of HSI classification is to categorize each pixel into one of several classes based on its spectral characteristics. During the last decade, a large number of pixel-wise classifiers were applied, including random forests [6], k-nearest neighbour [7], support vector machines (SVM) [8], and sparse representation [9]. However, these traditional methods focus only on spectral information and ignore the spatial contextual information that also affects classification performance. After all, it is a universal phenomenon in remote sensing images that different materials can share the same spectrum and the same material can exhibit different spectra.
Recently, spectral-spatial classification has been proposed by many researchers. It combines spatial context with spectral information, based on the assumption that pixels from a local region have similar spectral information and belong to the same material. One line of spectral-spatial classification is based on kernel combination or fusion, e.g., composite [10], morphological [11], and graph [12] kernels. Kernel-based methods have been shown to perform well in HSI classification [10-12].
In addition, the joint representation model is an effective way to use spectral and spatial information, drawing on the progress of sparse representation [13] and collaborative representation [14]. The paper [15] exploits a joint sparse model to incorporate spatial information; its main idea is that the neighboring pixels of a pixel are represented by sparse samples from the training set. Since then, a great deal of literature on sparse models and joint representation has emerged, such as the kernel-based joint sparse model [16], the structured joint sparse model [17], dictionary learning [18,19], and so on. Inspired by the joint representation model, Bo et al. [20] developed a novel classification framework based on the spectral-spatial k-nearest neighbor approach. They exploit a neighborhood window around each pixel to represent spatial information, which effectively exploits the spectral-spatial information.
Image filtering has been widely used to suppress or extract content in computer vision, including image restoration, blurring, edge detection, feature extraction, etc. HSI, as a special kind of image, benefits from edge-preserving filtering (EPF), which has been applied to hyperspectral image visualization [21]. Early on, the joint bilateral filter [22] and the weighted least-squares filter [23] were proposed; later, the domain transform filter [24] and the guided filter [25] were presented. The two most widely used are the joint bilateral filter and the guided filter. Motivated by EPF, Kang et al. [26] introduced EPFs to spectral-spatial HSI classification. First, they adopt a pixel-wise classifier (support vector machine) to classify each pixel; then they apply an EPF to the resulting classification map, which improves the classification accuracy significantly, with the first principal component of the HSI serving as the guidance image. The paper [27] also applies a guided filter to obtain the spatial features of the HSI; an autoencoder is then adopted to extract features that combine the spatial information with the spectral information. This paper presents a novel approach using hierarchical clustering combined with a guided filter for HSI classification.
RELATED WORK:
Agglomerative Hierarchical clustering:
Hierarchical clustering is one of the major cluster-analysis techniques; it constructs a hierarchical structure of clusters that can be displayed in a two-dimensional diagram known as a dendrogram. In the agglomerative variant, each observation in the dataset initially forms its own cluster; the distances between every pair of clusters are then calculated, and the closest pair of clusters according to the linkage criterion is merged into one cluster, repeatedly, until the desired structure is obtained.
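Purely for illustration, the following minimal sketch runs this agglomerative procedure with SciPy's hierarchical-clustering routines; the random pixel matrix and the choice of 16 clusters are assumptions for the example, not part of the paper.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Toy pixel-spectra matrix: 100 pixels, 220 spectral bands (random values).
pixels = np.random.rand(100, 220)

# Build the dendrogram by repeatedly merging the closest pair of clusters.
Z = linkage(pixels, method='single')

# Cut the dendrogram so that 16 clusters remain (one per land-cover class here).
labels = fcluster(Z, t=16, criterion='maxclust')
print(labels[:10])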
Guided filter:
The guided filter was first proposed by He et al. [25]. Given a guidance image I and an input image p, the guided filter produces an output image q. Generally, q is a linear transform of I in a window ω_k centered at pixel k. If the radius of ω_k is r, the size of the local window ω_k is (2r+1) × (2r+1):
q_i = a_k I_i + b_k,  ∀ i ∈ ω_k    (1)
where a_k is a linear coefficient and b_k is a bias. From the model, it is obvious that ∇q = a∇I, which means that the filtering output q has edges similar to those of the guidance image I. To obtain the coefficient and bias, a cost function over the window ω_k is minimized as follows:
E(a_k, b_k) = Σ_{i ∈ ω_k} ((a_k I_i + b_k − p_i)² + ε a_k²)    (2)
Here, ε is a regularization parameter that controls the degree of blurring of the guided filter.
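For concreteness, a minimal single-band sketch of Eqs. (1)-(2) is given below, following the standard box-filter formulation of He et al.; the variable names (I, p, r, eps) mirror the text, and this is a generic reference implementation, not the authors' code.

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=3, eps=1e-3):
    """Filter input p using guidance I with window radius r and regularizer eps."""
    win = 2 * r + 1
    mean_I = uniform_filter(I, win)
    mean_p = uniform_filter(p, win)
    corr_Ip = uniform_filter(I * p, win)
    corr_II = uniform_filter(I * I, win)

    var_I = corr_II - mean_I * mean_I        # variance of the guidance in each window
    cov_Ip = corr_Ip - mean_I * mean_p       # covariance of guidance and input

    a = cov_Ip / (var_I + eps)               # linear coefficient a_k, from Eq. (2)
    b = mean_p - a * mean_I                  # bias b_k

    mean_a = uniform_filter(a, win)          # average the coefficients over all windows
    mean_b = uniform_filter(b, win)
    return mean_a * I + mean_b               # output q_i, Eq. (1)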
A NEW AGGLOMERATIVE APPROACH FOR HIERARCHICAL CLUSTERING COMBINED WITH GUIDED FILTER FOR HSI CLASSIFICATION:
3.1. Problem Formulation
Generally, to describe the HSI classification problem clearly, we define X = {x_1, x_2, ..., x_N} as the hyperspectral data set, where x_n = {x_n1, x_n2, ..., x_nS} is the nth pixel with S bands and N denotes the number of HSI pixels. To obtain a classifier, we need to construct a training set T = {(x_1, y_1), (x_2, y_2), ..., (x_M, y_M)}, where y_m ∈ {1, 2, ..., K} denotes one of K labels and M < N is the number of training samples. The aim of HSI classification is to output a label for every pixel of X.
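As an illustration of this notation only, the short sketch below flattens a toy HSI cube into the pixel matrix X and draws a small labelled subset T; the cube shape, label values, and subset size are assumptions for the example.

import numpy as np

cube = np.random.rand(145, 145, 220)          # toy HSI cube: rows x cols x S bands
N, S = cube.shape[0] * cube.shape[1], cube.shape[2]

X = cube.reshape(N, S)                        # X = {x_1, ..., x_N}, one row per pixel
y = np.random.randint(1, 17, size=N)          # y_n in {1, ..., K}, here K = 16

train_idx = np.random.choice(N, size=500, replace=False)
T = list(zip(X[train_idx], y[train_idx]))     # training set T = {(x_m, y_m)}, M < N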
3.2. Algorithm of the proposed approach:
Let X = {x1, x2, x3, …, xn} be the hyperspectral data set. The agglomerative procedure then proceeds as follows (a minimal sketch of these steps is given after the list):

1. Begin with the disjoint clustering having level L(0) = 0 and sequence number m = 0.
2. Find the least-distance pair of clusters in the current clustering, say pair (r), (s), according to d[(r),(s)] = min d[(i),(j)], where the minimum is taken over all pairs of clusters in the current clustering.
3. Increment the sequence number: m = m + 1. Merge clusters (r) and (s) into a single cluster to form the next clustering m. Set the level of this clustering to L(m) = d[(r),(s)].
4. Update the distance matrix D by deleting the rows and columns corresponding to clusters (r) and (s) and adding a row and column corresponding to the newly formed cluster. The distance between the new cluster, denoted (r,s), and an old cluster (k) is defined as d[(k),(r,s)] = min(d[(k),(r)], d[(k),(s)]).
5. If all the data points are in one cluster, stop; otherwise repeat from step 2.
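The sketch below is one straightforward (and deliberately naive) way to realise steps 1-5 on a precomputed pairwise distance matrix; it is illustrative only and is not the authors' implementation.

import numpy as np

def single_linkage(D):
    """D: symmetric (n x n) distance matrix. Returns a list of (level, merged cluster)."""
    n = D.shape[0]
    clusters = {i: [i] for i in range(n)}        # step 1: each point is its own cluster
    D = D.astype(float).copy()
    np.fill_diagonal(D, np.inf)
    history, m = [], 0
    while len(clusters) > 1:
        keys = list(clusters)
        # step 2: find the closest pair of current clusters (r, s)
        level, r, s = min((D[r, s], r, s) for i, r in enumerate(keys) for s in keys[i + 1:])
        m += 1                                   # step 3: next clustering, level L(m)
        clusters[r] += clusters.pop(s)           # merge (r, s) into a single cluster
        history.append((level, tuple(clusters[r])))
        # step 4: d[(k),(r,s)] = min(d[(k),(r)], d[(k),(s)]) for every remaining cluster k
        D[r, :] = np.minimum(D[r, :], D[s, :])
        D[:, r] = D[r, :]
        D[r, r] = np.inf
        D[s, :] = np.inf
        D[:, s] = np.inf
    return history                               # step 5: stop when one cluster remains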
EXPERIMENTAL RESULTS:
Experimental Setup
Data Sets
The Indian Pines image was recorded by the AVIRIS sensor over the Indian Pines test site in north-western Indiana. This image consists of 145 × 145 pixels with 220 spectral bands in the wavelength range from 0.4 to 2.5 μm. There are 16 categories to be classified.
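A loading sketch is given below, assuming the commonly distributed MATLAB files for this scene; the file names and dictionary keys are assumptions and may differ depending on where the data is obtained.

from scipy.io import loadmat

# The 'corrected' cube ships with 200 bands (water-absorption bands removed);
# the original cube has the full 220 bands described above.
cube = loadmat('Indian_pines_corrected.mat')['indian_pines_corrected']
gt = loadmat('Indian_pines_gt.mat')['indian_pines_gt']   # 145 x 145 labels, 0 = unlabeled
print(cube.shape, gt.shape)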
Evaluation metrics
We apply three widely used quality indexes, i.e., the overall accuracy (OA), the average accuracy (AA), and the kappa coefficient. OA is the percentage of correctly classified samples among all test samples, AA is the mean of the per-class percentages of correctly classified pixels, and the kappa coefficient is calculated from the confusion matrix of the different classes. Because the training samples are randomly selected, we take the average over 10 runs as the final result.
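A short sketch of how these three indexes can be computed from predictions is shown below, using standard scikit-learn utilities; the function name and input arrays are placeholders.

import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def evaluate(y_true, y_pred):
    """Return (OA, AA, kappa) for one run; average over 10 runs externally."""
    cm = confusion_matrix(y_true, y_pred)
    oa = np.trace(cm) / cm.sum()                 # correctly classified / all test samples
    per_class = np.diag(cm) / cm.sum(axis=1)     # per-class accuracy
    aa = per_class.mean()                        # mean of the per-class accuracies
    kappa = cohen_kappa_score(y_true, y_pred)    # agreement beyond chance
    return oa, aa, kappa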
Parameter settings
In our experiments, several parameters must be set. Among them, the radius r of the guided filter and the regularization parameter ε are the two key factors that affect the result of guided filtering. The radius r controls the range of smoothing, and ε controls the blurring: the larger its value, the more blurred the output image. We set r = 3 and ε = 0.001 in this work. Meanwhile, a local window needs to be set for the joint-representation KNN; we also set the radius of this local window to 3. For the above datasets, we take 5% of the data as the training set and the remaining 95% to test the proposed approach.
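The sketch below summarizes this protocol (r = 3, ε = 0.001, a 5%/95% split, averaged over 10 runs); the toy data and the placeholder classifier call are assumptions for illustration.

import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 220)                 # toy pixel spectra
y = np.random.randint(1, 17, size=1000)       # toy labels for 16 classes

r, eps, n_runs = 3, 0.001, 10                 # guided-filter radius and regularizer
for seed in range(n_runs):                    # results are averaged over 10 random splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.05, stratify=y, random_state=seed)
    # ...train the proposed classifier on (X_tr, y_tr), filter, and score on (X_te, y_te)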
Experimental Results
The first experiment is performed on the Indian Pines data set. Some of the results are shown in Fig. 1. Clearly, the edges in picture (f) are sharper than in the others, especially compared with picture (d). According to the quantitative indexes, the detailed results of our experiments are shown in Table 1. There is a large gap between the pixel-wise SVM, which is otherwise an outstanding classifier, and the spectral-spatial classifiers. FGF-JKNN-g and FGF-JKNN-c are roughly on par with EPF-g, EPF-c, and SSKNN, outperforming those three methods on only one or two indexes. However, PGF-JKNN-g and PGF-JKNN-c are better than all other methods; in particular, PGF-JKNN-g obtains the best results in 9 of the 16 categories. Compared with the primary reference method SSKNN, our approaches increase OA, AA, and Kappa by about 4% each. Also, PGF-JKNN-g and PGF-JKNN-c are better than EPF-g and EPF-c by about 5%.
Fig. 1. Classification results on the Indian Pines data set: (a) ground truth, (b) SVM, (c) EPF-c, (d) SSKNN, (e) FGF-JKNN-g, (f) proposed method.
Table 1. Classification accuracy on the Indian Pines data set (%)

Class         SVM [35]  EPF-g [26]  EPF-c [26]  SSKNN [20]  FGF-JKNN-g  FGF-JKNN-c  PGF-JKNN-g  Proposed method
Alfalfa       68.6      94.7        100         100         100         100         100         100
Corn-N        59.3      81.5        81.4        92.1        93.8        92.1        98.8        97.5
Corn-M        56.7      77.9        77.3        93.8        91.5        93.5        97.7        98.1
Corn          74.2      100         100         100         94.6        100         95.1        99.4
Grass-M       88.7      95.9        95.6        94.4        85.9        96.0        98.3        97.9
Grass-T       94.9      97.7        98          96.9        96.7        95.7        97.4        96.9
Grass-P-M     91.7      98          92.9        95.5        99          98          99          98
Hay-W         97.4      99          98          96.1        98.6        99          98          98.8
Oats          62.0      71.0        68.0        98          99          100         99          98
Soybean-N     68.8      85.8        83.1        92.2        95.8        94.9        98.5        94.8
Soybean-M     65.2      86.4        94.1        95.4        92.5        96.1        99.2        97.1
Soybean-C     72.5      97.9        96.3        94.9        86.4        94.1        97.4        99.4
Wheat         99.3      99.3        100         92.5        88.9        94.3        100         98.5
Woods         88.1      95.0        96.7        97.1        98.4        99.8        98.6        100
Build-G-T-D   65.1      100         95.6        96.6        99.7        98.6        99.7        99.7
Stone-S-T     97.7      100         100         98.7        100         89.9        96.6        100
OA            71.31     90.05       92.75       93.74       92.19       95.64       97.26       97.26
AA            77.75     92.83       92.75       92.36       90.04       85.89       95.90       94.60
Kappa         68.78     88.62       90.01       92.86       91.10       95.03       96.88       96.87
CONCLUSION
In this paper, we combine joint-representation agglomerative hierarchical clustering with a guided filter. A front guided filter is used to extract spatial information, while a posterior guided filter takes advantage of denoising to optimize the classification result. The two proposed methods perform well and succeed in classifying hyperspectral images with higher accuracies than the existing methods. It is shown that the guided filter can greatly improve the classification accuracy for hyperspectral images. In the future, our work will involve analysing the relationship between spectral dimension and classification accuracy and designing a weighted filter for hyperspectral image classification.
REFERENCES
[1] Zomer R J, Trabucco A, Ustin S L. Building spectral libraries for wetlands land cover classification and hyperspectral remote sensing[J]. Journal of Environmental Management, 2009, 90(7): 2170-2177.
[2] Petropoulos G P, Kalaitzidis C, Prasad Vadrevu K. Support vector machines and object-based classification for obtaining land-use/cover cartography from Hyperion hyperspectral imagery[J]. Computers & Geosciences, 2012, 41(2): 99-107.
[3] Lawrence R L, Wood S D, Sheley R L. Mapping invasive plants using hyperspectral imagery and Breiman Cutler classifications (RandomForest)[J]. Remote Sensing of Environment, 2006, 100(3): 356-362.
[4] Dale L M, Thewis A, Boudry C, et al. Hyperspectral imaging applications in agriculture and agro-food product quality and safety control: a review[J]. Applied Spectroscopy Reviews, 2013, 48(2): 142-159.
[5] Haboudane D, Miller J R, Pattey E, et al. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: modeling and validation in the context of precision agriculture[J]. Remote Sensing of Environment, 2004, 90(3): 337-352.
[6] Dalponte M, Orka H O, Gobakken T, et al. Tree species classification in boreal forests with hyperspectral data[J]. IEEE Transactions on Geoscience and Remote Sensing, 2013, 51(5): 2632-2645.
[7] Ma L, Crawford M M, Tian J. Local manifold learning-based k-nearest-neighbor for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2010, 48(11): 4099-4109.
[8] Melgani F, Bruzzone L. Classification of hyperspectral remote sensing images with support vector machines[J]. IEEE Transactions on Geoscience and Remote Sensing, 2004, 42(8): 1778-1790.
[9] Chen Y, Nasrabadi N M, Tran T D. Hyperspectral image classification via kernel sparse representation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2013, 51(1): 217-231.
[10] Camps-Valls G, Gomez-Chova L, Muñoz-Marí J, et al. Composite kernels for hyperspectral image classification[J]. IEEE Geoscience and Remote Sensing Letters, 2006, 3(1): 93-97.
[11] Fauvel M, Chanussot J, Benediktsson J A. A spatial-spectral kernel-based approach for the classification of remote-sensing images[J]. Pattern Recognition, 2012, 45(1): 381-392.
[12] Camps-Valls G, Shervashidze N, Borgwardt K M. Spatio-spectral remote sensing image classification with graph kernels[J]. IEEE Geoscience and Remote Sensing Letters, 2010, 7(4): 741-745.
[13] Wright J, Yang A Y, Ganesh A, et al. Robust face recognition via sparse representation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(2): 210-227.
[14] Zhang L, Yang M, Feng X. Sparse representation or collaborative representation: which helps face recognition?[C]//2011 IEEE International Conference on Computer Vision (ICCV). IEEE, 2011: 471-478.
[15] Chen Y, Nasrabadi N M, Tran T D. Hyperspectral image classification using dictionary-based sparse representation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(10): 3973-3985.
[16] Liu J, Wu Z, Sun L, et al. Hyperspectral image classification using kernel sparse representation and semilocal spatial graph regularization[J]. IEEE Geoscience and Remote Sensing Letters, 2014, 11(8): 1320-1324.
[17] Sun X, Qu Q, Nasrabadi N M, et al. Structured priors for sparse-representation-based hyperspectral image classification[J]. IEEE Geoscience and Remote Sensing Letters, 2014, 11(7): 1235-1239.
[18] Soltani-Farani A, Rabiee H R, Hosseini S A. Spatial-aware dictionary learning for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(1): 527-541.
[19] Chen Y, Nasrabadi N M, Tran T D. Hyperspectral image classification using dictionary-based sparse representation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(10): 3973-3985.
[20] Bo C, Lu H, Wang D. Spectral-spatial K-nearest neighbor approach for hyperspectral image classification[J]. Multimedia Tools and Applications, 2017: 1-18.
[21] Kotwal K, Chaudhuri S. Visualization of hyperspectral images using bilateral filtering[J]. IEEE Transactions on Geoscience and Remote Sensing, 2010, 48(5): 2308-2316.
[22] Tomasi C, Manduchi R. Bilateral filtering for gray and color images[C]//Sixth International Conference on Computer Vision. IEEE, 1998: 839-846.
[23] Farbman Z, Fattal R, Lischinski D, et al. Edge-preserving decompositions for multi-scale tone and detail manipulation[J]. ACM Transactions on Graphics (TOG), 2008, 27(3): 67.
[24] Gastal E S L, Oliveira M M. Domain transform for edge-aware image and video processing[J]. ACM Transactions on Graphics (TOG), 2011, 30(4): 69.
[25] He K, Sun J, Tang X. Guided image filtering[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(6): 1397-1409.
[26] Kang X, Li S, Benediktsson J A. Spectral-spatial hyperspectral image classification with edge-preserving filtering[J]. IEEE Transactions on Geoscience and Remote Sensing, 2014, 52(5): 2666-2677.
[27] Wang L, Zhang J, Liu P, et al. Spectral-spatial multi-feature-based deep learning for hyperspectral remote sensing image classification[J]. Soft Computing, 2017, 21(1): 213-221.