- Authors : Raut Anant S. , Dr. Sudhir S. Kanade
- Paper ID : IJERTV4IS010109
- Volume & Issue : Volume 04, Issue 01 (January 2015)
- Published (First Online): 05-01-2015
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Detection and Removal of Object-Oriented Shadows from Urban High-Resolution RSI
Dr. Sudhir S. Kanade
Head of Department, Dept. of E&TC, TPCT's COE, Osmanabad, Maharashtra, India
Mr. Raut Anant S.
M.E. [E&TC] Student, Dept. of E&TC, TPCT's COE, Osmanabad, Maharashtra, India
Abstract: In accordance with the characteristics of urban high-resolution color remote sensing images, we put forward an object-oriented shadow detection and removal method. In this method, shadow features are taken into consideration during image segmentation, and then, according to the statistical features of the images, suspected shadows are extracted. Furthermore, some dark objects that could be mistaken for shadows are ruled out according to object properties and the spatial relationships between objects. For shadow removal, inner-outer outline profile line (IOOPL) matching is used. First, the IOOPLs are obtained with respect to the boundary lines of shadows. Shadow removal is then performed according to the homogeneous sections attained through IOOPL similarity matching. Experiments show that the new method can accurately detect shadows from urban high-resolution remote sensing images and can effectively restore shadows at a rate of over 85%.
Keywords – Change detection, inner-outer outline profile line (IOOPL), object-oriented remote sensing images (RSI), Tsai's algorithm, STS algorithm, shadow detection, shadow removal.
I. INTRODUCTION
In the last ten or more years, with the availability of high-spatial-resolution satellites such as IKONOS, QuickBird, GeoEye, and Resource 3 for Earth observation and the rapid development of aerial platforms such as airships and unmanned aerial vehicles, there has been an increasing need to analyze high-resolution images for different applications. In urban areas, surface features are quite complex, with a great variety of objects and shadows formed by elevated objects such as high buildings, bridges, and trees. Although shadows themselves can be regarded as useful information in 3-D reconstruction, building position recognition, and height estimation, they can also interfere with the processing and application of high-resolution remote sensing images. For example, shadows may cause incorrect results during change detection. Consequently, the detection and removal of shadows play an important role in applications of urban high-resolution remote sensing images such as object classification, object recognition, change detection, and image fusion.
In urban aerial images, shadows usually cause information loss or distortion of objects, so detecting them is an important research issue. Several efficient algorithms have been presented to detect shadows in gray-scale aerial images based on three features: intensity values, geometrical properties, and light direction. Since gray-scale aerial images provide only intensity information, some non-shadow regions may be identified as shadows even when these three features are considered. For color RSIs, however, shadow detection accuracy can be improved by using both the intensity and the color information.
II. SYSTEM HISTORY
Many effective algorithms have been proposed for shadow detection. Existing methods can be roughly categorized into two groups: model-based methods and shadow-feature-based methods. The first group uses prior information such as the scene, moving targets, and camera altitude to construct shadow models; these methods are often used in specific scene conditions such as aerial image analysis and video monitoring. The second group identifies shadow areas with information such as gray scale, brightness, saturation, and texture. An improved algorithm exists that combines the two approaches. First, the shadow areas are estimated according to the space coordinates of buildings calculated from digital surface models and the altitude and azimuth of the sun. Then, to accurately identify a shadow, the threshold value is obtained from the estimated gray-scale value of the shadow areas. However, information such as scene and camera altitude is not usually readily available. Consequently, most shadow detection algorithms are based on shadow features. For example, the shadow region appears with low gray-scale values in the image, and the threshold is chosen between two peaks in the gray-scale histogram of the image data to separate the shadow from the non-shadow region. An illuminant invariance model has also been used to detect shadows; this method can obtain a comparatively complete shadow outline from a complex scene and derive a shadow-free image under certain neutral-interface-reflection assumptions.
In a related study, images are converted into different invariant color spaces (HSV, HCV, YIQ, and YCbCr) to obtain shadows with Otsu's algorithm, which can effectively remove the false shadows created by vegetation in certain invariant spaces. Building on that work, a successive thresholding scheme was proposed to detect shadows. To avoid false shadows from dark objects such as vegetation and moist soil, the normalized difference vegetation index, the normalized saturation-value difference index, and the size and shape of the shadow area are considered. The method of Makarau et al. [1] accurately detected shadows with a blackbody radiation model. Recently, a hierarchical supervised classification scheme was used to detect shadows.
A variety of image enhancement methods have been proposed for shadow removal, such as histogram matching, gamma correction, linear correlation correction (LCC), and restoration of the color invariance model. In a related study, several enhancement methods, namely, gamma correction, LCC, and histogram matching, were analyzed for shadow recovery. Inspired by this analysis, a better approach was developed, based on a linear relationship between shadow classes and the corresponding non-shadow classes. In addition, a paired-region-based approach is employed to detect and remove the shadows in a single image by calculating the difference between shadow and non-shadow regions of the same type. Aside from the aforementioned methods, shadows can be restored using multisource data. For example, shadow pixels can be identified from the region of interest in an image and from another image obtained at a different time; the non-shadow pixels of the corresponding region are then used to replace the shadow pixels. This latter approach is useful for low-resolution images.
Because chromaticity is not affected by the change of illumination in many cases, a shadow region can be detected by selecting a region that is darker than its neighboring regions but has similar chromaticity. Based on this illumination-invariant property of chromaticity, several efficient methods have been developed to detect shadows in color images. However, they may not work well for RSIs, since some shadow properties specific to RSIs are not considered. To detect shadows in RSIs, Polidorio et al. [2] exploited two shadow properties: low luminance and a highly saturated blue/violet component. Accordingly, the red, green, and blue (RGB) RSI is first transformed into the hue, saturation, and intensity (HSI) color model, and a segmentation process is then applied to the saturation and intensity components to identify shadows.
III. SHADOW DETECTION
Shadows are created because the light source has been blocked by something. There are two types of shadows: the self-shadow and the cast shadow. A self-shadow is the shadow on the side of a subject that is not directly facing the light source. A cast shadow is the shadow of a subject falling on the surface of another subject because the former has blocked the light source. A cast shadow consists of two parts: the umbra and the penumbra. The umbra is created where the direct light is completely blocked, while the penumbra is created where the direct light is only partly blocked, as shown in Fig. 1.
In this paper, we mainly focus on the shadows in the cast shadow area of the remote sensing images.
Fig. 1 Principle of shadow formation
Shadow Detection by Tsai's Algorithm
The flowchart of Tsai's algorithm is shown in Fig. 2.
To detect shadows in an RSI, Tsai transforms the input RGB image I into an invariant color model, i.e., the HSI, HSV, HCV, YIQ, or YCbCr color model. For each pixel, the ratio of the hue over the intensity is used to determine whether the pixel is a shadow pixel. For ease of exposition, the HSI color model is used as the representative; among these five invariant color models, Tsai's algorithm achieves its best shadow detection performance with the HSI model.
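As a rough, hedged illustration of this idea (not Tsai's exact formulation), the sketch below builds a hue-over-intensity ratio map and applies Otsu's global threshold to it; HSV is used as a stand-in for HSI, and the file name is hypothetical.

```python
import numpy as np
from skimage import io, color, filters

def ratio_map(rgb):
    """Hue-over-intensity ratio map (assumed form; HSV stands in for HSI)."""
    hsv = color.rgb2hsv(rgb)
    hue, intensity = hsv[..., 0], hsv[..., 2]
    return (hue + 1.0) / (intensity + 1.0)   # +1 avoids division by zero

def detect_shadow_mask(rgb):
    """Candidate shadow mask: shadow pixels tend to have a high hue/intensity ratio."""
    r = ratio_map(rgb)
    return r > filters.threshold_otsu(r)     # global Otsu threshold on the ratio map

if __name__ == "__main__":
    img = io.imread("urban_scene.png")[..., :3] / 255.0   # hypothetical input RSI
    shadow_map = detect_shadow_mask(img)
```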
Tsai also presented a shape preservation process to preserve the shape information of objects casting shadows. It first applies the Sobel operator to Ie(x, y) to obtain the gradient map, and a shape map Sh is then constructed by applying Otsu's thresholding method to the gradient map. In the shape map, a pixel with Sh(x, y) = 1 is a boundary pixel of the casting object. After performing a logical AND operation on the shape map Sh and the shadow map S, the shape information of objects is preserved. Finally, starting from the result of the shape preservation process, a shadow compensation process compensates shadows by adjusting the intensity values of shadow pixels. Since this paper focuses on the detection of shadows for RSIs, the shadow compensation process is not considered further.
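A minimal sketch of the shape preservation step described above, assuming the intensity-equivalent image Ie and the shadow map S are already available as 2-D arrays:

```python
import numpy as np
from skimage import filters

def shape_preserved_shadows(ie, shadow_map):
    """Sobel gradient of Ie, Otsu threshold to get the shape map Sh,
    then a logical AND with the shadow map S preserves object boundaries."""
    gradient = filters.sobel(ie)                              # gradient map of Ie(x, y)
    shape_map = gradient > filters.threshold_otsu(gradient)   # Sh(x, y) = 1 on boundaries
    return np.logical_and(shape_map, shadow_map)
```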
Fig. 2 Flowchart of Tsai's algorithm
IV. PROPOSED SYSTEM
Due to the shortcomings of pixel-level shadow detection, we propose a new technique: an object-oriented shadow detection and removal method. First, shadow features are evaluated through image segmentation, and suspected shadows are detected with the threshold method. Second, object properties such as spectral and geometric features are combined with spatial relationships to rule out false shadows (e.g., water regions), so that only real shadows remain for the subsequent steps.
Shadow removal employs a series of steps. We extract the inner and outer outline lines of the shadow boundaries. The gray-scale values of the corresponding points on the inner and outer outline lines form the inner-outer outline profile lines (IOOPLs). Homogeneous sections are obtained through IOOPL sectional matching. Finally, using the homogeneous sections, the relative radiation calibration parameters between the shadow and non-shadow regions are obtained, and shadow removal is performed. The proposed method for shadow detection and removal is shown in Fig. 3.
Image Segmentation Considering Shadow Features
Images with higher resolution contain richer spatial information, and the spectral differences of neighboring pixels within an object increase accordingly. Pixel-based methods may pay too much attention to the details of an object when processing high-resolution images, making it difficult to obtain overall structural information about the object. In order to use spatial information to detect shadows, image segmentation is needed. We adopt convexity model (CM) constraints for segmentation [6], [7].
Fig. 3 Flowchart of object-oriented shadow detection and removal from urban high-resolution remote sensing images (segmentation of the original image, suspected shadow detection, elimination of false shadows, boundary extraction, inner and outer outline generation, IOOPL generation and matching, and shadow removal with RRN or PF to yield the recovered image)
Traditional image segmentation methods are likely to result in insufficient segmentation, which makes it difficult to separate shadows from dark objects. The CM constraints can improve the situation to a certain degree. To make a further distinction between shadows and dark objects, color factor and shape factor have been added to the segmentation criteria. The parameters of each object have been recorded, including gray-scale average, variance, area, and perimeter. The segmentation scale could be set empirically for better and less time-consuming results, or it could be adaptively estimated according to data such as resolution.
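As a small illustration of the bookkeeping described above, the sketch below records the per-object parameters (gray-scale average, variance, area, and perimeter) from a label image; the segmentation that produces the labels, including the CM constraints, is assumed to exist and is outside the scope of this sketch.

```python
from skimage import measure

def object_statistics(labels, gray):
    """Record per-object parameters from a label image and a gray-scale band."""
    stats = {}
    for region in measure.regionprops(labels, intensity_image=gray):
        pixels = gray[labels == region.label]
        stats[region.label] = {
            "mean": float(pixels.mean()),          # gray-scale average
            "variance": float(pixels.var()),
            "area": int(region.area),
            "perimeter": float(region.perimeter),
        }
    return stats
```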
Detection of Suspected Shadow Areas
For shadow detection, a properly set threshold can separate shadow from non-shadow without too many pixels being misclassified [3]. Researchers have used several different methods to find a threshold that accurately separates shadow and non-shadow areas. Bimodal histogram splitting provides a feasible way to find the threshold for shadow detection, with the mean of the two peaks adopted as the threshold [3]. In our work, we obtain the threshold from the histogram of the original image and then find the suspected shadow objects by comparing the threshold with the gray-scale average of each object obtained in segmentation. We choose as the threshold the gray-scale value with the minimum frequency in the neighborhood of the mean of the two peaks. In addition, atmospheric molecules scatter the blue wavelength most among the visible rays (Rayleigh scattering), so for the same object the gray-scale difference between its shadowed and non-shadowed appearance is more noticeable in the red and green wavebands than in the blue waveband. Thus, we retrieve suspected shadows with the threshold method in the red and green wavebands: an object is determined to be a suspected shadow if its gray-scale average is less than the threshold in both the red and green wavebands.
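A hedged sketch of this threshold selection and the per-object test follows; the peak-finding rule and the window size are illustrative choices rather than the exact procedure, and `object_means_r` and `object_means_g` are hypothetical dictionaries of per-object gray-scale averages in the red and green wavebands.

```python
import numpy as np

def bimodal_threshold(band, window=15):
    """Mean of the two histogram peaks, then the minimum-frequency gray level
    in a window around that mean (assumed peak picking: one peak on each side
    of the median)."""
    hist, _ = np.histogram(band, bins=256, range=(0, 256))
    mid = max(int(np.median(band)), 1)
    peak_dark = int(np.argmax(hist[:mid]))
    peak_bright = mid + int(np.argmax(hist[mid:]))
    center = (peak_dark + peak_bright) // 2
    lo, hi = max(center - window, 0), min(center + window, 255)
    return lo + int(np.argmin(hist[lo:hi + 1]))

def suspected_shadows(object_means_r, object_means_g, red_band, green_band):
    """An object is a suspected shadow if its mean gray value is below the
    threshold in both the red and the green waveband."""
    t_r, t_g = bimodal_threshold(red_band), bimodal_threshold(green_band)
    return {obj for obj in object_means_r
            if object_means_r[obj] < t_r and object_means_g[obj] < t_g}
```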
Here, our proposed STS-based algorithm is presented to detect shadows in remote sensing images (RSIs). Instead of using the ratio map obtained by Tsai's algorithm, we present a modified ratio map to distinguish candidate shadow pixels from non-shadow pixels. From the modified ratio map, a global thresholding process is first performed to obtain the coarse-shadow map, which separates all the pixels of the input image into candidate shadow pixels and non-shadow pixels. A local thresholding process is then applied to each candidate shadow region in the coarse-shadow map iteratively to distinguish true shadow pixels from candidate shadow pixels. Finally, a fine-shadow determination process decides whether each pixel in the remaining candidate shadows is a true shadow pixel.
Proposed New STS Algorithm
In some cases, it is hard to separate shadow regions from non-shadow regions even when the proposed modified ratio map is used in the shadow detection process. For example, Fig. 4(a) shows the input RSI, and Fig. 4(b) shows the modified ratio map of Fig. 4(a). In Fig. 4(b), the ratio sub-map surrounded by the dashed line includes the river and the roadway's shadow, and separating such a ratio sub-map into shadow and non-shadow regions seems easy. Based on the modified ratio map shown in Fig. 4(b), the result of shadow detection using the global thresholding process is shown in Fig. 4(c). Surprisingly, the pixels within the river region are identified as shadow pixels even though the gap between the ratio values of the river region and the roadway's shadow region is large.
Fig. 4 Shadow detection comparison between the global thresholding scheme and the proposed STS. (a) Input RSI. (b) Modified ratio map of (a). (c) Shadow detection result of (a) using the global thresholding scheme. (d) Shadow detection result of (a) using the proposed STS.
To alleviate this problem, we propose a new STS combining global and local thresholding processes to deal with such RSIs. The flowchart of the proposed STS-based algorithm is shown in Fig. 5. In the proposed STS, based on the modified ratio map, the global thresholding process is first performed to obtain the coarse-shadow map, which separates the input image into candidate shadow pixels and non-shadow pixels. From the coarse-shadow map, the candidate shadow regions are identified by connected component analysis [5], and the local thresholding process is then applied to each region iteratively to extract true shadow pixels from the candidate shadow pixels. Furthermore, a fine-shadow determination process distinguishes true shadows from candidate shadows, and the remaining candidate shadows are reclassified as non-shadows. The shadows detected by the proposed STS-based algorithm are shown in Fig. 4(d), which reveals that the proposed algorithm is more accurate than Tsai's algorithm.
Fig. 5 Flowchart of the proposed STS-based algorithm
In the proposed STS-based algorithm, only the candidate shadow pixels are subjected to the local thresholding process to identify true shadow pixels. For the candidate shadow pixels in the coarse-shadow map, we construct candidate shadow regions by applying connected component analysis to these pixels. Next, for each candidate shadow region, the local thresholding process is applied to distinguish true shadow pixels from candidate shadow pixels. Here, based on Otsu's thresholding method, the separability factor SP [4] is used to determine whether each candidate shadow region can be separated into a true shadow region and a candidate shadow region.
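The sketch below illustrates the local thresholding stage under simplifying assumptions: candidate regions come from connected component labelling of the coarse-shadow map, each region is re-thresholded with Otsu's method, and the separability factor is taken as the between-class over total variance ratio, an assumed stand-in for the SP measure of [4]; `sp_min` is an illustrative cut-off.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def local_thresholding(ratio_map, coarse_mask, sp_min=0.5):
    """Refine the coarse-shadow map region by region; higher ratio values are
    treated as more shadow-like, as in the global thresholding step."""
    labels, n = ndimage.label(coarse_mask)          # candidate shadow regions [5]
    true_shadow = np.zeros_like(coarse_mask, dtype=bool)
    for lab in range(1, n + 1):
        region = labels == lab
        values = ratio_map[region]
        if values.size < 2 or values.min() == values.max():
            true_shadow |= region                   # nothing to split off
            continue
        t = threshold_otsu(values)
        high, low = values[values > t], values[values <= t]
        if high.size == 0 or low.size == 0:
            true_shadow |= region
            continue
        between = high.size * low.size * (high.mean() - low.mean()) ** 2 / values.size ** 2
        sp = between / values.var()                 # assumed separability factor
        if sp < sp_min:
            true_shadow |= region                   # region is homogeneous; keep it whole
        else:
            true_shadow |= region & (ratio_map > t) # keep only the shadow-like part
    return true_shadow
```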
Elimination of False Shadows
Dark objects may be included in the suspected shadows, so these dark objects must be eliminated to obtain more accurate shadow detection results. Rayleigh scattering results in a smaller gray-scale difference between a shadow area and a non-shadow area in the blue (B) waveband than in the red (R) and green (G) wavebands. Consequently, for the majority of shadows, the gray-scale average in the blue waveband, Gb, is slightly larger than the gray-scale average in the green waveband, Gg. The properties of green vegetation, in contrast, make Gg significantly larger than Gb, so false shadows from vegetation can be ruled out by comparing Gb and Gg for all suspected shadows. Specifically, for an object i, when Gb + Ga < Gg, object i is classified as vegetation and ruled out, where Ga is a correction parameter determined by the image type. After the elimination of false shadows from vegetation, spatial information of objects, i.e., geometrical characteristics and the spatial relationships between objects, is used to rule out other dark objects from the suspected shadows. Lakes, ponds, and rivers all have specific areas, shapes, and other geometrical characteristics.
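A minimal sketch of the vegetation rule above, where `mean_blue` and `mean_green` are hypothetical per-object gray-scale averages (Gb, Gg) and the correction parameter Ga is a placeholder value:

```python
def remove_vegetation(suspected, mean_blue, mean_green, ga=5.0):
    """Rule out an object as vegetation when Gb + Ga < Gg."""
    return {obj for obj in suspected
            if not (mean_blue[obj] + ga < mean_green[obj])}
```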
Shadow Removal
To recover the shadow areas in an image, we use a shadow removal method based on IOOPL matching. There is a large probability that the shadow and non-shadow areas in close range on either side of the shadow boundary belong to the same type of object. Contracting the shadow boundary inward and expanding it outward yield the inner and outer outlines, respectively. The inner and outer outline profile lines are then generated along these outlines to determine the radiation features of the same type of object on both sides. As shown in Fig. 6, R is the vector line of the shadow boundary obtained from shadow detection, R1 is the outer outline in the non-shadow area after expanding R outward, and R2 is the inner outline in the shadow area after contracting R inward. There is a one-to-one correspondence between nodes on R1 and R2. When the correlation between R1 and R2 is high enough, there is a large probability that the location belongs to the same type of object. The gray-scale values of the corresponding nodes along R1 and R2 in each waveband are collected to obtain the IOOPL. The outline profile lines (OPLs) in the shadow area are marked as inner OPLs, and the OPLs in the non-shadow area are marked as outer OPLs (Fig. 6).
Fig. 6 Diagram of shadow boundary, inner, and outer outline lines
Where the shadow boundary adjoins the building that casts the shadow, the objects on the two sides are usually not homogeneous, and the corresponding inner and outer outline profile line sections are not reliable. In addition, abnormal sections on the inner and outer outlines that cannot represent homogeneous objects need to be ruled out. Consequently, similarity matching is applied to the IOOPL section by section to rule out these two kinds of nonhomogeneous sections. The parameters for shadow removal are then obtained by analyzing the gray-scale distribution characteristics of the homogeneous inner and outer IOOPL sections.
IOOPL Matching
IOOPL matching is the process of obtaining homogeneous sections by conducting similarity matching on the IOOPL section by section. Gaussian smoothing is first performed to suppress noise in the IOOPL; the Gaussian smoothing template parameters were σ = 2 and n = 11. To rule out nonhomogeneous sections, the IOOPL is divided into equal-length sections, and the similarity of each line pair is calculated section by section. If the correlation coefficient is large, the shade and light fluctuation features of the IOOPL line pair in this section are consistent; the line pair then belongs to the same type of object under different illumination and is considered to be matching. If the correlation coefficient is small, some parts of this section represent different types of objects, and those parts should be ruled out. If more accurate matching is needed, the two sections adjacent to the section with the smallest correlation coefficient can be segmented and matched again.
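The sketch below illustrates IOOPL generation and sectional matching under simplifying assumptions: the shadow boundary is a single (row, col) contour such as one returned by skimage.measure.find_contours, the inner and outer outlines are obtained by stepping a fixed number of pixels along the contour normal, and the section length and correlation cut-off are illustrative values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, map_coordinates

def ioopl_sections(gray, contour, offset=3, section_len=20, r_min=0.8):
    """Return the homogeneous (inner, outer) profile-line sections of one shadow."""
    # Unit normals from the tangent of the (row, col) contour polyline.
    tangent = np.gradient(contour, axis=0)
    normal = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)
    normal /= np.linalg.norm(normal, axis=1, keepdims=True) + 1e-9

    inner = contour + offset * normal   # assumed to step into the shadow (R2)
    outer = contour - offset * normal   # assumed to step out of the shadow (R1)
    sample = lambda pts: map_coordinates(gray, pts.T, order=1)
    ipl = gaussian_filter1d(sample(inner), sigma=2)   # inner OPL, smoothed (sigma = 2)
    opl = gaussian_filter1d(sample(outer), sigma=2)   # outer OPL, smoothed

    homogeneous = []
    for start in range(0, len(ipl) - section_len + 1, section_len):
        a, b = ipl[start:start + section_len], opl[start:start + section_len]
        if a.std() > 0 and b.std() > 0 and np.corrcoef(a, b)[0, 1] >= r_min:
            homogeneous.append((a, b))   # matching section: same object type
    return homogeneous
```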
Implementation of Shadow Removal
Shadows are removed using the homogeneous sections obtained by line pair matching. There are two approaches for shadow removal. The first calculates the radiation parameters from the homogeneous points of each object and then applies a relative radiation correction to that object. The second collects and analyzes all the homogeneous sections for polynomial fitting (PF) and restores all shadows directly with the fitted parameters. PF beyond the third degree is not appropriate: it makes the calculation overly complex, and fitting with degrees greater than three does not significantly improve accuracy.
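A brief sketch of the two strategies, with assumed parameter forms (the exact expressions may differ): `inner_vals` and `outer_vals` are the gray values collected from the homogeneous inner and outer sections, and `shadow_pixels` are the gray values to be corrected in one waveband.

```python
import numpy as np

def radiometric_correction(shadow_pixels, inner_vals, outer_vals):
    """Per-object relative radiation correction: match the mean and standard
    deviation of the shadowed (inner) samples to the lit (outer) samples."""
    gain = outer_vals.std() / (inner_vals.std() + 1e-9)
    return (shadow_pixels - inner_vals.mean()) * gain + outer_vals.mean()

def polynomial_correction(shadow_pixels, inner_vals, outer_vals, degree=3):
    """Global polynomial fitting (degree <= 3): fit lit values as a polynomial
    of the shadowed values over all homogeneous sections, then apply it."""
    coeffs = np.polyfit(inner_vals, outer_vals, degree)
    return np.polyval(coeffs, shadow_pixels)
```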
V. CONCLUSION
We have put forward a systematic and effective method for shadow detection and removal in a single urban high-resolution remote sensing image. Based on a coarse-to-fine strategy, the proposed STS-based algorithm detects shadows in RSIs. Using the proposed modified ratio map, the coarse-shadow map is constructed by the global thresholding process. From the coarse-shadow map, the proposed STS first classifies all pixels into true and candidate shadow types, and the proposed fine-shadow determination process then distinguishes the true shadows from the candidate shadows. To obtain the shadow detection result, image segmentation considering shadows is applied first; suspected shadows are then selected through spectral features and spatial information of objects, and false shadows are ruled out. The shadow detection experiments compared traditional image segmentation with the segmentation considering shadows, and traditional pixel-level threshold detection with object-oriented detection; they also show the effects of the different steps of the proposed method. For shadow removal, after the homogeneous sections have been obtained by IOOPL matching, we put forward two strategies: relative radiation correction applied to the objects one at a time, and direct removal of all shadows after PF is applied to all the homogeneous sections to obtain the correction parameters.
REFERENCES
[1] A. Makarau, R. Richter, R. Muller, et al., "Adaptive shadow detection using a blackbody radiator model," IEEE Trans. Geosci. Remote Sens., vol. 49, no. 6, pp. 2049-2059, 2011.
[2] A. M. Polidorio, F. C. Flores, N. N. Imai, A. M. G. Tommaselli, and C. Franco, "Automatic shadow segmentation in aerial color images," in Proc. XVI Brazilian Symp. Comput. Graph. Image Process., Oct. 12-15, 2003, pp. 270-277.
[3] P. M. Dare, "Shadow analysis in high-resolution satellite imagery of urban areas," Photogramm. Eng. Remote Sens., vol. 71, no. 2, pp. 169-177, 2005.
[4] H. H. Oh, K. T. Lim, and S. I. Chien, "An improved binarization algorithm based on a water flow model for document image with inhomogeneous backgrounds," Pattern Recognit., vol. 38, no. 12, pp. 2612-2625, Dec. 2005.
[5] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Reading, MA: Addison-Wesley, 2002.
[6] J. Gong, H. Sui, K. Sun, et al., "Object-level change detection based on full-scale image segmentation and its application to the Wenchuan earthquake," Sci. China Ser. E, Technol. Sci., vol. 51, no. 2, pp. 110-122, 2008.
[7] K. Sun, D. Li, and H. Sui, "An object-oriented image smoothing algorithm based on the convexity model and multi-scale segmentation," Geomatics Inf. Sci. Wuhan Univ., vol. 34, no. 4, pp. 423-426, 2009.