- Authors : Muthukumar Subramanyam, Krishnan Nallaperumal, Ravi Subban, Pasupathi P, Shashikala D, Selva Kumar S, Gayathri Devi S
- Paper ID : IJERTV2IS121321
- Volume & Issue : Volume 02, Issue 12 (December 2013)
- Published (First Online): 24-12-2013
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Study and Analysis of Colour Space selection for Insignificant Shadow Detection
Muthukumar Subramanyam1, Krishnan Nallaperumal1, Ravi Subban2, Pasupathi P1, Shashikala D1, Selva Kumar S1, Gayathri Devi S1
1Centre for IT & Engg., Manonmaniam Sundaranar University, Tirunelveli, India
2Dept. of CSE, Pondicherry University, Pondicherry, India
Abstract
Though shadows add realism to a scene, they are problematic in many computer vision and cognitive science applications, and much research is devoted to detecting and removing them effectively. A variety of features can be used to identify shadows. This paper examines the importance of Colour space selection for detecting shadows in scenes and videos. Colour features can improve the performance of object tracking and scene understanding; hence, several well known Colour spaces are evaluated experimentally. The outcome shows that the performance of shadow detection can be improved significantly through an appropriate Colour space selection strategy. Experimental results on real-time scenes show that the proposed Colour space is more efficient than the other Colour spaces.
Keywords: Shadow Detection, Colour Space, Shadow Identification, Natural Scene, shadow removal
-
1. Introduction
Recent real-time problems in computer vision are concerned with tracking objects and their movements. Visual object tracking uses cameras to track target objects in the environment and has many applications nowadays, such as intelligent surveillance, medical care, intelligent transportation and human-machine interaction [1]. However, it is still a challenging task because of background noise, occlusions, illumination changes and fast motion. Colour histograms have become popular and important descriptors for object tracking due to their simplicity, effectiveness and efficiency; however, they suffer from illumination changes. Colour is one of the prominent visual features used in object detection and tracking systems, especially in the arena of robotic vision and machine perception.
Shadows are ubiquitous in natural scenes, and their removal is an interesting and important area of current research. Apart from a few geometry-based approaches which are suited to specific conditions [2], shadow detection is usually done by Colour-based photometric methods. Still-image based methods [3] attempt to find and remove shadows in single frames independently. However, these models have been evaluated only on high quality images where the background has a uniform Colour or texture pattern, whereas in video surveillance images with poor quality and resolution must be expected. The authors in [4] note that their algorithm is robust when the shadow edges are clear, but artifacts may appear for images with complex shadows or diffuse shadows with poorly defined edges. For practical use, the computational complexity of these algorithms should be decreased [3]. Some other methods focus on discriminating shadow edges from edges due to object boundaries [6]. However, it may be difficult to extract connected foreground regions from the resulting edge map, which is often ragged [5]. Complex scenarios containing several small objects or shadow parts may also be disadvantageous for these methods. Prati et al. [7] give a thematic overview on shadow detection for video surveillance; the methods are classified into groups based on their model structures, and the performances of the different model groups are compared via test sequences.
A Colour model is a method for explaining the properties or behaviour of a Colour within some particular context. The authors note that the methods work in different Colour spaces [8]. However, it remains an open question how important the appropriate Colour space selection is, and which Colour space is the most effective for shadow detection. For these reasons, the main goal of this paper is to present an experimental comparison of different Colour models for shadow detection on casually captured scenes and videos. For the comparison, a general framework that can work with different Colour spaces is proposed; during the development of this framework, the main approaches in the state of the art have been carefully considered. An experimental evaluation of Colour spaces has already been done for edge classification in [9] and in some other literature, but the main focus of the current research is the detection of the shadowed and foreground regions, which is a practically intricate problem.
-
2. The Colour based Techniques
The Colour technique is based on the fact that the Colour tone values of a shadow region are the same as the values of the background, while the intensity values are different [10]. This technique attempts to find Colour features that are illumination invariant using the Colour differences between the shadowed region and the image, and employs the spectral information of the foreground region, background region and shadow region to detect shadows [11]. The Colour techniques typically exploit the Colour information in the HSV and RGB Colour spaces [12]. The weakness of this approach appears when the objects have a similar intensity or brightness to the shadows, when the Colour of the objects is the same as the Colour of the background region, or when the objects are darker than the background.

In these cases, foreground pixels will be misclassified as shadow pixels or holes will be created within the object [13]. Overall, by converting Colour spaces alone it is difficult to detect all shadow pixels stably [14]. In addition, since Colour is the primary cue for identifying a shadow pixel in Colour images, this technique might not work with black and white images [Zhu J]. In Maryam Golchin et al. [16], the ratio of Colour channels over a Near Infrared (NIR) image is used; their method is automatic and reliable for mosaiced images. Also, Sun and Li [17] proposed a combined Colour model using the ratio of hue over intensity in the HSI Colour model and the photometric Colour invariant c1c2c3 Colour model.
Overall, there are five different kinds of information that can be used to detect shadows, namely texture information, temporal information, grey scale information, Colour information and edge information. Texture information such as the Local Binary Pattern (LBP) is only helpful for detecting foreground objects, which are a combination of the objects and the shadow areas; it cannot distinguish the objects from the shadows and therefore does not preserve effective information for the shadow detection process (note that this study aims to separate shadow pixels from object pixels). The second kind of information, temporal information, is able to detect motion in an image, but each motion region is again a combination of an object and a shadow area, so this information does not provide valuable cues either. Another kind of information for detecting shadow pixels is the Colour information [15]. The Colour tone values, as Colour information, provide valuable cues for detecting shadows that cannot be obtained from grey scale information; as a result, the Colour information is selected and the grey scale information is omitted in this study [15]. Last but not least, the use of edge information is based on the fact that shadow boundaries are strict and connected to the object while the edges are faint next to the background [17]; therefore, this information is helpful for detecting shadows. In conclusion, of the five types of information discussed, Colour information and edge information are selected in this research.
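To make the Colour-based criterion concrete, the following is a minimal sketch (not the authors' implementation) of the widely used HSV shadow test in the spirit of [12]: a pixel is labelled as shadow when its value (intensity) is attenuated with respect to the background within a fixed band while its hue and saturation change only slightly. The threshold names alpha, beta, tau_s and tau_h and their default values are illustrative assumptions.

```python
import numpy as np

def hsv_shadow_mask(frame_hsv, background_hsv,
                    alpha=0.4, beta=0.9, tau_s=0.15, tau_h=0.1):
    """Mark pixels as cast shadow by comparing a frame with a background
    model, both given in HSV with all channels scaled to [0, 1].

    A pixel is a shadow candidate when its brightness (V) is attenuated
    within [alpha, beta] of the background, while saturation and hue
    change only slightly -- the chromaticity-preserving assumption used
    by Colour-based shadow detectors. Thresholds here are illustrative.
    """
    h, s, v = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    hb, sb, vb = background_hsv[..., 0], background_hsv[..., 1], background_hsv[..., 2]

    ratio = v / np.maximum(vb, 1e-6)                         # intensity attenuation
    dh = np.minimum(np.abs(h - hb), 1.0 - np.abs(h - hb))    # circular hue distance
    ds = np.abs(s - sb)

    return (ratio >= alpha) & (ratio <= beta) & (ds <= tau_s) & (dh <= tau_h)
```

With frame_hsv and background_hsv given as float arrays of shape (H, W, 3), the returned Boolean mask can be intersected with the foreground mask produced by a background-subtraction stage.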
-
3. Feature Vector for Shadow Detection
Here, the features for shadow detection are constructed by taking some challenging environmental conditions into account [9]. The approach to shadow detection uses both shadow-variant and shadow-invariant features, selected through a feature importance measure analysis. The shadow-variant features considered are intensity/chromaticity difference, illumination changes, local maximum, smoothness and skewness. The supporting shadow-invariant features are gradient similarity and texture similarity. Finally, the efficiency of the proposed scheme is validated by a variety of experiments with shadow-illness criteria using the Colour spaces discussed in the next section.
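As an illustration of how such a per-pixel feature vector might be assembled, the sketch below combines an intensity ratio, a chromaticity difference, a gradient-similarity term and a smoothness term. The exact features and their computation are not fully specified in the paper, so the particular formulas and the function name shadow_features are assumptions.

```python
import numpy as np

def shadow_features(frame, background, eps=1e-6):
    """Stack a few shadow-variant and shadow-invariant cues per pixel.

    frame, background: float RGB arrays of shape (H, W, 3) in [0, 1].
    Returns an (H, W, 4) array: [intensity ratio, chromaticity difference,
    gradient-magnitude similarity, local smoothness].
    """
    inten_f = frame.mean(axis=2)
    inten_b = background.mean(axis=2)
    ratio = inten_f / (inten_b + eps)                         # shadow-variant cue

    chrom_f = frame / (frame.sum(axis=2, keepdims=True) + eps)
    chrom_b = background / (background.sum(axis=2, keepdims=True) + eps)
    chrom_diff = np.abs(chrom_f - chrom_b).sum(axis=2)        # shadow-variant cue

    gy_f, gx_f = np.gradient(inten_f)
    gy_b, gx_b = np.gradient(inten_b)
    grad_sim = -np.abs(np.hypot(gx_f, gy_f) - np.hypot(gx_b, gy_b))  # shadow-invariant cue

    gy_r, gx_r = np.gradient(ratio)
    smooth = -np.abs(gy_r) - np.abs(gx_r)                     # smoothness of the ratio field

    return np.stack([ratio, chrom_diff, grad_sim, smooth], axis=-1)
```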
-
4. Colour Spaces
Under illumination changes, the Colour invariance properties of Colour histograms can be analyzed. A Colour histogram depicts the Colour distribution of the objects in a specific Colour space, e.g., RGB, HSV or HSI; therefore, the Colour constancy of a Colour space determines the Colour invariance properties of its Colour histograms [23]. This section describes the different types of Colour spaces.
-
RGB space
The RGB Colour space has three channels: red, green and blue. A 3D histogram can be derived by counting the number of pixels whose Colours fall into fixed ranges, where the ranges depend on the number of bins. The RGB histogram has no illumination invariance properties [24].
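For instance, a binned 3D RGB histogram of the kind described above can be computed with numpy; the choice of 8 bins per channel is an assumption.

```python
import numpy as np

def rgb_histogram(image, bins=8):
    """3D Colour histogram of an RGB image with `bins` bins per channel.

    image: uint8 array of shape (H, W, 3). Returns a normalised
    (bins, bins, bins) array whose entries sum to 1.
    """
    pixels = image.reshape(-1, 3).astype(np.float64)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist / hist.sum()
```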
-
nRGB Space
The nRGB Colour space is the normalized RGB Colour space [5]:

(nR, nG, nB) = (R / (R + G + B), G / (R + G + B), B / (R + G + B))    (5)

nB = 1 - nR - nG    (6)

The nRGB histogram is invariant to light intensity changes. Since, by equation (6), nB is determined by nR and nG, only two channels are used in the experiments.
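A direct implementation of equation (5) is straightforward; the small epsilon guarding against division by zero for black pixels is an added assumption.

```python
import numpy as np

def to_nrgb(image, eps=1e-6):
    """Convert an RGB image (float or uint8, shape (H, W, 3)) to
    normalised rgb as in equation (5). By equation (6), nB = 1 - nR - nG,
    so only the first two channels carry independent information."""
    rgb = image.astype(np.float64)
    total = rgb.sum(axis=2, keepdims=True)
    return rgb / np.maximum(total, eps)
```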
-
HSV Colour Space
The Hue Saturation Value (HSV) and Hue Saturation Intensity (HSI) Colour spaces are similar, and both are close to the human description of a Colour [5]. Since they can be obtained by a transformation from the RGB Colour space, they inherit the drawbacks of the RGB Colour space. In the HSV Colour space, the conversion from the RGB Colour space is obtained as follows:

V = max(R, G, B),  S = (V - min(R, G, B)) / V,  with H given by the standard hexcone formula    (7)

C1C2C3 Colour Space
The C1C2C3 Colour space is introduced in [25]. This Colour space is referred to as being invariant to shadows and shading. The conversion from the RGB Colour space is as follows:

c1 = arctan(R / max(G, B)),  c2 = arctan(G / max(R, B)),  c3 = arctan(B / max(R, G))    (8)

h1h2h3 Colour Space
The rgb space has shown itself to be invariant to illumination intensity, and the l1l2l3 space has shown invariance with respect to highlights and illumination intensity [26]. A new space is introduced which is invariant only with respect to highlights. This space, preliminarily called the p1p2p3 space, is defined as follows:

p1 = R - G,  p2 = G - B,  p3 = B - R    (9)

CIE-LAB Colour Space
The CIELAB space has been designed to be a perceptually uniform space [27]. A system is perceptually uniform if a small perturbation to a component value is approximately equally perceptible across the range of that value. A perceptual difference between two points in the CIELAB space can be represented closely by the Euclidean distance (square norm) measure [17]. The luminance component of the XYZ to CIELAB transformation is shown in the equation:

L = 25 (100 Y / Y0)^(1/3) - 16   if Y / Y0 > 0.008856
L = 903.3 (Y / Y0)               otherwise    (10)
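The conversions above are available in standard image libraries; as a self-contained sketch, the following computes the V channel of equation (7), the components of equation (8) and the L component of equation (10) for a single RGB pixel. The Rec. 709 luma coefficients used to obtain Y from RGB are an assumption, since the paper does not state which RGB-to-XYZ transform it uses.

```python
import numpy as np

def value_channel(r, g, b):
    """V of HSV, equation (7): the maximum of the three channels."""
    return max(r, g, b)

def c1c2c3(r, g, b, eps=1e-6):
    """C1C2C3 components, equation (8); arctan2(a, b) = arctan(a / b) for positive b."""
    return (np.arctan2(r, max(g, b) + eps),
            np.arctan2(g, max(r, b) + eps),
            np.arctan2(b, max(r, g) + eps))

def cielab_L(r, g, b):
    """Luminance L of CIELAB via equation (10), with Y0 taken as the reference white.

    The RGB-to-Y weights below are the Rec. 709 luma coefficients, assumed here
    for illustration; r, g, b are expected in [0, 1], so Y0 = 1.
    """
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    if y > 0.008856:
        return 25.0 * (100.0 * y) ** (1.0 / 3.0) - 16.0
    return 903.3 * y
```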
Fig1. Evaluation of Shadow Detection Performance (% of shadow detection per data set for the RGB, HSV, CIE L*u*v*, CIE L*a*b*, C1C2C3, nrgb and T1T2T3 Colour spaces)

5. Performance Evaluation
The evaluations were carried out on various datasets using both quantitative and qualitative methods. The benchmark images are tested and the results are shown in the figures and tables. In this experiment, two sets of values corresponding to manually marked foreground and shadowed pixels are collected, and it is investigated how many pixels are classified properly by the ellipse model under the different Colour spaces. The number of correctly identified foreground pixels of the evaluation sequence is denoted TP_F; similarly, TP_S is introduced for the number of well classified shadowed points, while F_F and F_S denote the number of misclassified foreground and shadowed ground-truth points, respectively. First, the Recall (R) and Precision (P) rates of foreground detection are defined [28]:

R := TP_F / (TP_F + F_F),   P := TP_F / (TP_F + F_S)    (11)

In the further tests, the F-measure is used [23], which combines recall and precision in a single efficiency measure (it is the harmonic mean of precision and recall):

F = 2 P R / (P + R)    (12)

Fig2. Evaluation of F-Measure (shadow detection performance, in %, per Colour space on the Hallway, HWI, HWII, PETS and Int. Room data sets)
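The measures of equations (11) and (12) can be computed directly from a predicted foreground mask and the corresponding ground-truth mask. The sketch below is an illustrative assumption rather than the authors' evaluation code; pixels that are not foreground are treated as shadow/background.

```python
import numpy as np

def foreground_scores(pred_fg, gt_fg):
    """Recall, Precision and F-measure of foreground detection,
    following equations (11) and (12). Both inputs are Boolean arrays of
    the same shape, True where a pixel is (predicted / ground-truth) foreground."""
    tp = np.logical_and(pred_fg, gt_fg).sum()
    fn = np.logical_and(~pred_fg, gt_fg).sum()     # missed foreground pixels
    fp = np.logical_and(pred_fg, ~gt_fg).sum()     # pixels wrongly marked as foreground

    recall = tp / max(tp + fn, 1)
    precision = tp / max(tp + fp, 1)
    f_measure = 2 * precision * recall / max(precision + recall, 1e-12)
    return recall, precision, f_measure
```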
Table 1. Shadow Detection Accuracy (%)

Colour space | Hallway | HWI | HWII | PETS | Int. Room
RGB | 92.33 | 90.47 | 90.71 | 90.01 | 90.01
HSV | 94.74 | 92.86 | 90.11 | 91.43 | 91.43
CIE LUV | 92.01 | 91.41 | 91.18 | 89.36 | 89.36
CIE Lab | 91.89 | 90.15 | 92.78 | 89.99 | 89.99
C1C2C3 | 93.11 | 95.39 | 92.09 | 89.74 | 89.74
nrgb | 95.41 | 95.48 | 91.41 | 88.83 | 88.83
T1T2T3 | 97.42 | 96.73 | 94.65 | 95.66 | 95.66

Table 3. Shadow Restitution of Colour Spaces (qualitative restitution results for each input scene under the RGB, HSV, Lab, LUV, C1C2C3, nrgb and T1T2T3 Colour spaces)
Table 2. Related Works of significant authors

Method | Colour space | Colour channels | Outdoor / Indoor tests
Cavallaro et al. | rg | invariant | Both
Salvador et al. | C1C2C3 | invariant | Both
Paragios et al. | rg | invariant | Indoor
Mikic et al. | RGB | 1 | Outdoor
Rittscher et al. | grayscale | 2 | Outdoor
Wang et al. | grayscale | 2 | Indoor
Cucchiara et al. | HSV | 1,33 | Both
Birsson et al. | CIE L*u*v* | 2 | Indoor
Rautianinen et al. | CIE L*a*b*/HSV | N.a. | Outdoor
Siala et al. | RGB | N.a. | Outdoor
Proposed | All of the above + T1T2T3 | 2 | Both

6. Conclusion
This paper examines the selection of a suitable Colour space for shadow detection. A framework that can work under different Colour spaces is developed for this task. Meanwhile, it can detect shadows significantly across different scene classes and conditions, and it has common parameters with which the Colour spaces can be properly validated. In our case, the transition between the background and shadow domains is described by statistical distributions. With this framework, several well known Colour spaces are compared in terms of both quantitative and qualitative evaluations, and it is observed that the Colour space selection issue is highly important: with an appropriate selection of the Colour space, the improvement in segmenting different kinds of objects under varying environmental conditions is evident in the results. The proposed method is validated on well-known benchmark datasets of indoor and outdoor scenes and videos, which contain different kinds of objects and different environmental conditions. Experimental results show that the T1T2T3 Colour space is the most efficient in comparison with the other Colour spaces.
7. References
-
Muthukumar S., Ravi S., et al., "Real Time Insignificant Shadow Extraction from Natural Sceneries", Advances in Intelligent Informatics, Springer International Publishing, Switzerland, 2013.
-
A. Yoneyama, Chia H. Yeh, and C.-C. Jay Kuo, "Moving Cast Shadow Elimination for Robust Vehicle Extraction Based on 2D Joint Vehicle/Shadow Models", IEEE Conference on Advanced Video and Signal Based Surveillance, 2003.
-
C. Fredembach and G. D. Finlayson, "Hamiltonian path based shadow removal", In Proc. BMVC, 2005.
-
G. D. Finlayson, S. D. Hordley, Cheng Lu, and M. S. Drew, "On the removal of shadows from images", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 59-68, 2006.
-
T. Gevers and H. Stokman, "Classifying Colour edges in video into shadow-geometry, highlight, or material transitions", IEEE Trans. on Multimedia, vol. 5, issue 2, pp. 237-243, 2003.
-
A. Prati, I. Mikic, M. M. Trivedi, and R. Cucchiara, "Detecting moving shadows: algorithms and evaluation", IEEE Trans. Pattern Analysis and Machine Intelligence, no. 7, pp. 918-923, July 2003.
-
E. A. Khan and E. Reinhard, "Evaluation of Colour Spaces for Edge Classification in Outdoor Scenes", IEEE International Conference on Image Processing, Genova, Italy, September 11-14, 2005.
-
I. Mikic, P. Cosman, G. Kogut and M. M. Trivedi, "Moving Shadow and Object Detection in Traffic Scenes", Proc. ICPR, vol 1, pp. 321-324, Sept 2000.
-
Benedek, Csaba, and Tamás Szirányi. "Study on Colour space selection for detecting cast shadows in video surveillance." International Journal of Imaging Systems and Technology 17, no. 3 (2007): 190-201.
-
Zhu J., Samuel K., Masood S., Tappen M., "Learning to Recognize Shadows in Monochromatic Natural Images", In CVPR 2010, IEEE, pp. 223-230.
-
Lin, Chin et al., "An Efficient and Robust Moving Shadow Removal Algorithm and Its Applications in ITS", EURASIP Journal on Advances in Signal Processing 2010 (945130), 119
-
Zhou, Z. and L. Xiaobo, 2010. An accurate shadow removal method for vehicle tracking. Proceedings of the International Conference on Artificial Intelligence and Computational Intelligence, Oct. 23-24, IEEE Xplore Press, Sanya, pp: 59-62. DOI: 10.1109/AICI.2010.135.
-
Panicker, J.V. and M. Wilscy, 2010. Detection of moving cast shadows using edge information. Proceedings of the 2nd International Conference on Computer and Automation Engineering, Feb. 26-28, IEEE Xplore Press, Singapore, pp: 817- 821. DOI: 10.1109/ICCAE.2010.5451878
-
Kurahashi, W., S. Fukui, Y. Iwahori and R. Woodham, 2010. Proceedings of the 14th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, Sept. 8-10, Springer Berlin Heidelberg, Cardiff, pp: 89-96. DOI: 10.1007/978-3-642-15393-8_11
-
Golchin, Maryam, Fatimah Khalid, Lili Nurliana Abdullah, and Seyed Hashem Davarpanah. "Shadow detection using Colour and edge information." Journal of Computer Science 9, no. 11 (2013): 1575.
-
Sun, B. and S. Li, 2010. Moving cast shadow detection of vehicle using combined Colour models. Proceedings of the Chinese Conference on Pattern Recognition, Oct. 21-23, IEEE Xplore Press, Chongqing, pp: 1-5. DOI: 10.1109/CCPR. 2010.5659321
-
Wesolkowski, Slawo, M. E. Jernigan, and Robert D. Dony. "Comparison of Colour image edge detectors in multiple Colour spaces." In Image Processing, 2000. Proceedings. 2000 International Conference on, vol. 2, pp. 796-799. IEEE, 2000.
-
D. A. Forsyth: "A novel algorithm for Colour constancy", International Journal of Computer Vision, 5(1):5-36, August 1990.
-
D. K. Lynch and W. Livingstone, "Colour and Light in Nature", Cambridge University Press, 1995.
-
G. Wyszecki and W. Stiles, Colour Science: Concepts and Methods,Quantitative Data and Formulas, 2nd edition, Wiley, 1982.
-
Cs. Benedek and T. Sziranyi, "Markovian Framework for Foreground-Background-Shadow Separation of Real-World Video Scenes", Proc. Asian Conference on Computer Vision (ACCV 2006), LNCS 3851, pp. 898-907, Jan. 2006.
-
C. Stauffer and W. E. L. Grimson, "Learning Patterns of Activity Using Real-Time Tracking", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 22, No.8, pp. 747-757, 2000
-
Muthukumar, S., et al. "An efficient Colour image denoising method for Gaussian and impulsive noises with blur removal." Computational Intelligence and Computing Research (ICCIC), 2010 IEEE International Conference on. IEEE, 2010.
-
van de Sande, K., Gevers, T., and Snoek, C., "Evaluating Colour descriptors for object and scene recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1582-1596, 2010.
-
Takahashi et al., "Effect of light adaptation on the perceptual red-green and yellow-blue opponent-Colour responses", JOSA A 2.5 (1985): 705-712.
-
Gevers, Th., and A. W. M. Smeulders, "Object recognition based on photometric Colour invariants", In Proceedings of the Scandinavian Conference on Image Analysis, Vol. 2, pp. 861-866, 1997.
-
Poynton, Charles, and Brian Funt. "Perceptual uniformity in digital image representation and display." Colour Research & Application (2013).
-
C. J. van Rijsbergen, "Information Retrieval", 2nd edition, London, Butterworths.