- Open Access
- Authors : Nisthula P, Mr. Yadhu. R. B
- Paper ID : IJERTV2IS60966
- Volume & Issue : Volume 02, Issue 06 (June 2013)
- Published (First Online): 29-06-2013
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Novel Method of Image Fusion Combining PCA, IHS and Integrated RIM for Road Extraction in Satellite Imagery
Nisthula P
Calicut University, Kerala, India
Guided by
Mr. Yadhu R. B., Asst. Professor, Calicut University, Kerala, India
Abstract
In Ikonos imagery, both multispectral (MS) and panchromatic (PAN) images are provided, with different spatial and spectral resolutions. Multispectral classification detects object classes only according to the spectral properties of the pixels. Panchromatic image segmentation enables the extraction of detailed objects, like road networks, that are useful for map updating in Geographical Information Systems (GIS), environmental inspection, transportation and urban planning, etc. Therefore, the fusion of a PAN image with MS images is a key issue in applications that require both high spatial and high spectral resolution, and the fused image provides higher classification accuracy. To extract, for example, urban road networks from pan-sharpened images, edge information from the PAN image is used to eliminate misclassified objects. This paper integrates spectral information from the multispectral (MS) image with spatial information from the panchromatic (Pan) image for road extraction, through a pan-sharpening (image fusion) technique using IHS, PCA and an integrated RIM, together with an edge-aided reclassification algorithm.
Keywords: Intensity Hue Saturation (IHS) transform, Principal Component Analysis (PCA) transform fusion, Integrated Retina Inspired Model (RIM), Edge detection.
Introduction
Roads are among the most important objects that are extracted from aerial images; they are necessary for many applications, for example navigation systems or spatial planning. Extracted roads are recorded in geospatial databases. As roads are subject to frequent changes, it is necessary to check road databases frequently to eliminate errors and to add new road objects.
Earth observation satellites provide multispectral and panchromatic data with different spatial, spectral, temporal, and radiometric resolutions. To extract useful information from the available high-resolution images, including airborne and spaceborne imagery, different automatic and semi-automatic approaches have been developed. To date, automatic techniques for information extraction from imagery can be divided into two main categories: (1) multispectral classification techniques to classify objects from multispectral images, and (2) grey-value and feature-based techniques to extract objects from panchromatic images.
Mapping of urban features (e.g. roads and buildings) from satellite images has gained enormous research interest with the launch of the Ikonos and Quickbird satellites, which provide very high spatial resolution (VHR) panchromatic (PAN) and multispectral (MS) images. Sensor limitations in acquiring images with both high spatial and high spectral resolution have led to research in image fusion techniques to obtain such images. Because of the complexity of the urban environment and the high level of spatial detail in VHR images, fusion techniques that combine complementary data sets such as PAN, MS, Lidar and hyperspectral data are currently of interest in the field of urban feature extraction. The fused image may provide feature enhancement and increased classification accuracy, and may be of great help in change detection. Hence, there is an increasing use of image processing techniques to combine the available multispectral and PAN images. These techniques are known as pan-sharpening or resolution fusion techniques.
Among the existing fusion approaches, multiresolution approaches have been widely used in recent studies because of their efficiency and convenience; however, their fusion results are usually limited by the number of decomposition layers and the selection of fusion rules. The IHS transform and PCA [8,9] techniques can preserve spatial resolution well, but they also distort the spectral characteristics to different degrees. A detailed study indicated that the colour distortion problem arises from the change of saturation during the fusion process. The retina-inspired fusion method can compensate for this spectral distortion in the fusion process.
In this paper, we present a new approach to better extract roads in urban scenes, which utilizes both spectral information from multispectral images and spatial information from panchromatic images. Different from existing techniques, this new approach effectively integrates image fusion, multispectral classification, and feature extraction into the extraction process [2,3,4]. The proposed image fusion method uses a spatial frequency (SF) motivated PCA, IHS and RIM integrated fusion approach, combining the advantages of the different approaches mentioned above.
When comparing the Sobel, Roberts and Canny detectors, the Roberts edge detector can easily produce a clear and proper edge image from a QuickBird PAN image; however, some detailed edges in indistinct edge areas cannot be detected. The Canny edge detection algorithm requires two thresholds and the standard deviation of its Gaussian smoothing mask to be tuned to yield a proper result, but edges in blurred areas can be clearly delineated. In this study, therefore, a combination of the Sobel detector and a second derivative of the Sobel detector is employed.
The IHS, PCA and retina-inspired fusion models
The RGB-IHS Conversion Model
The IHS [5,6,10] transformation converts an image with red, green and blue (RGB) channels into independent intensity, hue and saturation components. The intensity represents the brightness of the spectrum, the hue the dominant spectral wavelength, and the saturation the purity of the spectrum. This technique may be used for the fusion of multi-sensor images.
To understand the whole fusion process, we must first review the RGB-IHS conversion model. There are two essential RGB-IHS conversion models; in this study we select the triangular spectral model, which is closer to the real visual effect. The IHS triangular [5] model can produce a fused and enhanced spectral image.
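As a concrete illustration, the sketch below converts an RGB image to I, H and S using one common HSI formulation. The exact equations of the triangular model used in the paper are not reproduced here, so this should be read as an illustrative stand-in, not the paper's transform.

```python
import numpy as np

def rgb_to_ihs(rgb):
    """Convert an RGB image (H x W x 3, floats in [0, 1]) to I, H, S components.

    One common HSI formulation; the paper's triangular model may differ in
    detail, so treat this as an illustrative stand-in.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8

    # Intensity: average of the three bands.
    i = (r + g + b) / 3.0

    # Saturation: how far the pixel is from grey.
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + eps)

    # Hue: angle measured from the red axis in the colour triangle.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)

    return i, h, s
```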
PCA Transform Fusion Approach
The method is described in detail in the references; here the fundamentals of PCA fusion [9] are briefly outlined. First, the multispectral image is transformed with the PCA transform: the eigenvalues and corresponding eigenvectors of the correlation matrix between the individual bands of the multispectral image are computed to obtain the principal components. Next, the panchromatic image is histogram-matched to the first principal component. Finally, the first principal component of the multispectral image is replaced with the matched panchromatic image and, together with the other principal components, transformed back with the inverse PCA transform to form the fused image.
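A minimal sketch of this substitution scheme is given below. It uses covariance-based PCA and a simple mean/standard-deviation adjustment as a stand-in for full histogram matching; both simplifications are assumptions for illustration.

```python
import numpy as np

def pca_pan_sharpen(ms, pan):
    """PCA-based fusion: replace the first principal component of the
    (upsampled) MS image with the histogram-matched PAN image.

    ms  : H x W x B multispectral image, already resampled to the PAN grid.
    pan : H x W panchromatic image.
    """
    h, w, bands = ms.shape
    X = ms.reshape(-1, bands).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean

    # Eigen-decomposition of the band covariance matrix.
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    eigvecs = eigvecs[:, order]

    pcs = Xc @ eigvecs                         # principal components
    pc1 = pcs[:, 0].reshape(h, w)

    # Match PAN statistics to PC1 (mean/std matching as a stand-in for
    # full histogram specification).
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-8) * pc1.std() + pc1.mean()

    # Substitute PC1 and invert the transform.
    pcs[:, 0] = pan_m.reshape(-1)
    fused = (pcs @ eigvecs.T + mean).reshape(h, w, bands)
    return fused
```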
Retina-Inspired Model
The RIM fusion [10] consists of five basic layers; its fusion structure is depicted in Fig. 1. The first layer represents an array of high-resolution cone photoreceptors, while the second layer is a high-scale spatial feature extractor. The third layer is an array of low-resolution receptors (horizontal cells), and the fourth and fifth layers are made of bipolar and ganglion cells. Every layer has its own mathematical model and corresponding expressions.
Fig1. Retina-inspired model
In the proposed fusion approach, we treat the RIM as a black box forming one part of the whole system; its internal structure and the detailed behaviour of the individual cell layers in the fusion process remain as shown in Fig. 1.
The Sobel edge detector
Edge detection [1] is the name for a set of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in 1-D signals is known as step detection, and the problem of finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing, machine vision and computer vision, particularly in the areas of feature detection and feature extraction. The well-known and early Sobel operator is based on the following filters.
The gradient components of the Sobel edge operator, for a 3x3 neighbourhood with pixels $z_1, \dots, z_9$ (numbered left to right, top to bottom), are:

$g_x = (z_7 + 2z_8 + z_9) - (z_1 + 2z_2 + z_3)$

$g_y = (z_3 + 2z_6 + z_9) - (z_1 + 2z_4 + z_7)$
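The following sketch applies these two filters with SciPy; the kernel layout assumes the usual $z_1, \dots, z_9$ numbering of the 3x3 neighbourhood given above.

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_gradients(image):
    """Compute the Sobel gradient components g_x, g_y and the gradient
    magnitude, following the filters defined above."""
    kx = np.array([[-1, -2, -1],
                   [ 0,  0,  0],
                   [ 1,  2,  1]], dtype=float)   # g_x: bottom row minus top row
    ky = np.array([[-1,  0,  1],
                   [-2,  0,  2],
                   [-1,  0,  1]], dtype=float)   # g_y: right column minus left column
    gx = convolve(image.astype(float), kx)
    gy = convolve(image.astype(float), ky)
    return gx, gy, np.hypot(gx, gy)
```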
Proposed method
Figure 2 illustrates the general process of the proposed approach. To overcome the shortcomings of classifying low-resolution MS images, the MS and Pan QuickBird images are first fused into a pan-sharpened MS image. An unsupervised classification is then applied to the pan-sharpened image to obtain a classified road image, and an edge detection approach is applied to the Pan image to obtain an edge image. In the edge-aided segmentation, the binary edge image from the Pan image is employed to segment the classified road image obtained from the pan-sharpened image. A shape-based segmentation and a segment filtering algorithm are then employed to remove non-road objects. The whole edge-aided classification process can be iterated to deal with complex road classification results. The individual processes of the proposed approach are described in the following sections.
Fig2. Proposed method
Image fusion
Fig. 3 shows the whole fusion process of the proposed approach, which may be divided into the following steps. First, the MS image is transformed into the IHS triangular model components. Then, histogram matching is applied to match the histogram of the PAN image to the MS intensity component. Next, the PCA transform extracts the principal components of the MS intensity image and of the histogram-matched PAN image (called New Pan), and the corresponding component weight coefficients are selected by calculating their spatial frequencies to obtain a new intensity component. Finally, the new intensity component and the original MS intensity component are combined using the retina-inspired fusion model.
At this stage a final intensity image is obtained, which contains the spatial detail of the original PAN image and has the same intensity distribution as the original MS image. At the same time, it avoids the superfluous details and artefacts introduced in the previous transformations. Ultimately, a satisfactory fused image is obtained by the inverse IHS transform, using the new intensity component together with the original H and S components of the MS image.
Fig3. Proposed fusion method
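One concrete step in this pipeline is the histogram matching of the PAN image to the MS intensity component. A minimal sketch using rank-based histogram specification is given below; it is a generic, assumed implementation rather than the paper's code.

```python
import numpy as np

def histogram_match(source, reference):
    """Match the histogram of `source` (e.g. the PAN image) to that of
    `reference` (e.g. the MS intensity component)."""
    src_shape = source.shape
    src = source.ravel().astype(float)
    ref = reference.ravel().astype(float)

    # Rank positions (quantiles) of the source pixels.
    src_idx = np.argsort(src)
    src_quantiles = np.linspace(0.0, 1.0, src.size)

    # Reference values at the same quantile positions.
    ref_sorted = np.sort(ref)
    ref_quantiles = np.linspace(0.0, 1.0, ref.size)
    matched_sorted = np.interp(src_quantiles, ref_quantiles, ref_sorted)

    # Put the matched values back in the original pixel order.
    matched = np.empty_like(src)
    matched[src_idx] = matched_sorted
    return matched.reshape(src_shape)
```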
This fusion process generates a new high-resolution colour image that simultaneously contains the spatial detail of the PAN source image and the spectral information of the MS source image. How to select the two principal-component weight coefficients after the PCA transform is a critical problem. This paper proposes an adaptive selection method based on the spatial frequency (SF) of the original MS intensity image and of the PAN image.
Fig4. Panchromatic image
Fig5. Multispectral image
For a $K \times L$ pixel image $f(x, y)$, the spatial frequency is defined as:

$SF = \sqrt{RF^{2} + CF^{2}}$

where RF and CF are the row frequency and column frequency, respectively:

$RF = \sqrt{\frac{1}{K \times L}\sum_{x=1}^{K}\sum_{y=2}^{L}\left[f(x,y) - f(x,y-1)\right]^{2}}$

$CF = \sqrt{\frac{1}{K \times L}\sum_{y=1}^{L}\sum_{x=2}^{K}\left[f(x,y) - f(x-1,y)\right]^{2}}$
The selection of the two principal-component weight coefficients based on SF can be expressed as:

$I_{new} = \omega_{1} P_{1} + \omega_{2} P_{2}$

$\omega_{1} + \omega_{2} = 1$

where $P_{1}$ and $P_{2}$ represent the principal components of the New Pan image and of the original multispectral intensity component, respectively, and $\omega_{1}$, $\omega_{2}$ are their normalized SF values.
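A short sketch of the SF computation and the resulting weight selection, following the definitions above:

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency SF = sqrt(RF^2 + CF^2) of a 2-D image."""
    img = img.astype(float)
    rf = np.sqrt(np.sum(np.diff(img, axis=1) ** 2) / img.size)  # row frequency
    cf = np.sqrt(np.sum(np.diff(img, axis=0) ** 2) / img.size)  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def sf_weights(new_pan_pc, ms_intensity_pc):
    """Normalized SF-based weights (w1 + w2 = 1) for combining the principal
    components of the New Pan image and the MS intensity image."""
    sf1 = spatial_frequency(new_pan_pc)
    sf2 = spatial_frequency(ms_intensity_pc)
    w1 = sf1 / (sf1 + sf2)
    return w1, 1.0 - w1
```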
Fig6. Fused image
Classification
Figure 7 shows the clustering result from the pan-sharpened QuickBird image in an urban area, and Figure 8 the classified binary road image derived from it. It is clear that almost all road networks are correctly extracted; however, the rate of misclassification is high. For example, small family driveways are connected to the road networks, and many house roofs are classified as road. This makes it impossible to obtain an accurate road network without further processing. Since an unsupervised clustering method is usually better suited for classifying heterogeneous classes in high-resolution satellite images than a supervised classification, the unsupervised fuzzy K-means clustering method is used to classify the QuickBird images in this study.
Fig7. Result of clustering
Fig8. Classified road image
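For illustration, a minimal fuzzy K-means (fuzzy c-means) clustering sketch on the pixel spectra of the pan-sharpened image is given below. The number of clusters, the fuzziness exponent and the iteration count are assumptions, since the paper does not state its parameter settings.

```python
import numpy as np

def fuzzy_kmeans(pixels, n_clusters=5, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy K-means (fuzzy c-means) on an N x B array of pixel
    spectra. Returns cluster centres and the fuzzy membership matrix."""
    rng = np.random.default_rng(seed)
    n, _ = pixels.shape
    # Random initial memberships, normalized so each pixel's memberships sum to 1.
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        um = u ** m
        # Weighted cluster centres.
        centres = (um.T @ pixels) / um.sum(axis=0)[:, None]
        # Distances from every pixel to every centre.
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        # Standard fuzzy c-means membership update.
        u = 1.0 / (d ** (2.0 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centres, u
```

The binary road image would then be obtained by hardening the memberships (e.g. `labels = u.argmax(axis=1)`) and keeping the cluster(s) corresponding to road-like spectra.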
Edge detection
Edge detection is one of the fundamental steps in image processing, image analysis and image pattern recognition. Edges outline the profile of target objects, which is an important attribute to extract for image recognition. The proposed edge detection method is shown below.
The edge detection steps are as follows. The input image is first convolved with the horizontal and vertical components of the Sobel operator and of the second derivative of the Sobel operator. The distance function between the outputs of the Sobel and second-derivative operators is then taken, since the second-derivative operator enhances only smaller edges whereas the Sobel operator enhances all types of edges. Finally, the horizontal and vertical components are combined.
Fig9. Proposed edge detection
Fig10. PAN image
Fig11. Edge detected image
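A possible reading of these steps is sketched below. Because the paper does not give the second-derivative kernel or the distance function explicitly, the sketch applies the Sobel kernel twice for the second derivative and uses an absolute difference as the distance; both choices, and the automatic threshold, are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

# First-order Sobel kernels (horizontal / vertical gradient components).
SOBEL_X = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def proposed_edges(pan, threshold=None):
    """Sketch of the combined Sobel / second-derivative-of-Sobel detector
    described above (interpretation, not the paper's code)."""
    pan = pan.astype(float)

    # First-derivative (Sobel) responses in each direction.
    gx1, gy1 = convolve(pan, SOBEL_X), convolve(pan, SOBEL_Y)

    # Second-derivative responses: Sobel applied twice (assumption).
    gx2, gy2 = convolve(gx1, SOBEL_X), convolve(gy1, SOBEL_Y)

    # Distance between first- and second-derivative responses, per direction.
    dx, dy = np.abs(gx1 - gx2), np.abs(gy1 - gy2)

    # Combine horizontal and vertical responses into one edge magnitude map.
    mag = np.hypot(dx, dy)
    if threshold is None:
        threshold = mag.mean() + mag.std()   # simple automatic threshold (assumption)
    return mag > threshold
```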
Edge-Aided Classification
An edge-aided classification [3] approach was developed to extract accurate road networks from a classified road image with the help of edge information from the corresponding Pan image. The edge-aided classification consists of two steps: edge-aided segmentation and shape-based segmentation.
Edge-Aided Segmentation.
As shown in Figure 12, the road network classified [2] from the pan-sharpened image contains many non-road objects, either connected to or isolated from the road network. Most existing road extraction methods have difficulty dealing with such problems.
Fig12. Road network after edge-aided segmentation
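One plausible implementation of the edge-aided segmentation step, read from the description above, is to delete classified road pixels that coincide with PAN edges, so that driveways and roofs lose their connection to the main network and can be removed later by shape filtering. The sketch below reflects that interpretation, not the paper's actual code.

```python
import numpy as np

def edge_aided_segmentation(road_mask, edge_mask):
    """Remove classified road pixels that fall on PAN edges, disconnecting
    misclassified objects (roofs, driveways) from the road network."""
    return road_mask.astype(bool) & ~edge_mask.astype(bool)
```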
Shape-based Segmentation.
A fast connected-component labelling algorithm is applied to the road image after the noise, e.g. driveways and house roofs, has been disconnected from the classified road network. Individual objects, including road networks and noise, are labelled first. They are then segmented according to their size (number of pixels) and shape information (e.g., compactness), resulting in the final road networks to be extracted (Figure 13). An iterative process of edge-aided segmentation, shape-based segmentation, segment filtering, and mathematical morphological operations may be needed to deal with complex cases.
Fig13. Road network after shape-based segmentation
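A minimal sketch of such a labelling-and-filtering step follows; the size and compactness thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def shape_based_segmentation(road_mask, min_size=500, max_compactness=0.3):
    """Label connected components of the segmented road mask and keep only
    sufficiently large, elongated objects (likely road segments)."""
    labels, n = ndimage.label(road_mask)
    keep = np.zeros_like(road_mask, dtype=bool)
    for i in range(1, n + 1):
        obj = labels == i
        area = obj.sum()
        if area < min_size:
            continue                            # too small: treat as noise
        # Compactness = 4*pi*area / perimeter^2 (1 for a disc, low for
        # elongated road segments); perimeter estimated from boundary pixels.
        boundary = obj & ~ndimage.binary_erosion(obj)
        perimeter = boundary.sum()
        compactness = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
        if compactness <= max_compactness:      # elongated -> likely road
            keep |= obj
    return keep
```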
Conclusion
A new approach for object extraction from high-resolution satellite images has been developed. The integrated IHS, PCA and RIM image fusion model, together with edge detection, effectively integrates image fusion and feature extraction into the object extraction process. Both spectral information from MS images and spatial information from Pan images are utilized to improve the extraction accuracy.
References
[1] Ma Jing and Bi Qiang, "Processing practice of remote sensing image based on spatial modeler," IEEE, 2012 (supported by the National Natural Science Foundation of China, No. 41071237).
[2] Ruisheng Wang, Yong Hu and Xinmei Zhang, "Extraction of Road Networks Using Pan-Sharpened Multispectral and Panchromatic QuickBird Images."
[3] Yun Zhang and Ruisheng Wang, "Multi-resolution and multi-spectral image fusion for urban object extraction," XXth ISPRS Congress.
[4] Miloud Chikr El-Mezouar, Nasreddine Taleb, Kidiyo Kpalma and Joseph Ronsin, "Edge Preservation in Ikonos Multispectral and Panchromatic Imagery Pan-sharpening," 1st Taibah University International Conference on Computing and Information Technology, Al-Madinah Al-Munawwarah, Saudi Arabia, 2012.
[5] Changtao He, Guiqun Cao and Fangnian Lang, "An Efficient Fusion Approach for Multispectral and Panchromatic Medical Imaging," Biomedical Engineering Research, vol. 2, iss. 1, pp. 30-36, March 2013.
[6] S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, pp. 114-123, 2010.
[7] M. Ehlers, "Multisensor image fusion techniques in remote sensing," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 46, pp. 19-30, 1991.
[8] T. Te-Ming, S. Shun-Chi, S. Hsuen-Chyun et al., "A new look at IHS-like image fusion methods," Information Fusion, vol. 2, pp. 177-186, 2001.
[9] C. Wen, L. Bicheng and Z. Yong, "A remote sensing image fusion method based on PCA transform and wavelet packet transform," IEEE Conf. on Neural Networks & Signal Processing, pp. 976-980, 2000.
[10] Sabalan Daneshvar, Mosleh Elyasi and Morad Danishvar, "Multispectral and Panchromatic Images Fusion Based on Integrating Feedback Retina and IHS Model," Proceedings of the World Congress on Engineering 2011, Vol. II, WCE 2011, July 6-8, 2011, London, U.K.