- Authors : Mohamad Raad, Majd Ghareeb, Ali Bazzi
- Paper ID : IJERTV6IS070320
- Volume & Issue : Volume 06, Issue 07 (July 2017)
- Published (First Online): 27-07-2017
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Applying Catastrophe Theory to Image Segmentation
Mohamad Raad, Majd Ghareeb, Ali Bazzi
Department of Computer and Communications Engineering, Lebanese International University
Beirut, Lebanon
Abstract: This paper describes a study into the application of catastrophe theory to image segmentation. The theory is found to be applicable to this problem; however, extensions are required for it to yield results comparable to the state of the art. Catastrophe theory provides several models that describe change in dynamic systems. Since image segmentation can be viewed as the segmentation of a signal generated by a dynamic process, it was hypothesized that catastrophe theory should be applicable to this problem in general. The results presented verify this hypothesis.
Keywords: Catastrophe Theory; Image Segmentation; Canny; Sobel
I. INTRODUCTION
Image segmentation is the process of clustering pixels into salient image regions in order to transform the representation of an image into something easier to analyze. It locates objects and boundaries, such as lines and curves, in the image. Image segmentation approaches are divided into two categories based on the properties of an image [1]. Image segmentation is used in many fields, such as object recognition, image editing, medical imaging, image compression, and image database lookup. There are several ways to perform image segmentation, including thresholding methods, clustering methods, transform methods, and texture methods [2]. Thresholding-based methods are the simplest; they rely on statistical clustering of pixels based on one or more thresholds. The other methods can be broadly described as clustering algorithms, where the difference between them lies in the way the clusters are generated. Achieving high segmentation accuracy is generally a very challenging problem, which justifies the ongoing research in this field.
One typical approach to developing new solutions to old problems is the application of a theory that has not yet been applied to them. In this case, catastrophe theory is such a theory [3]. Catastrophe theory is used for the analysis of systems that experience sudden changes, and the recorded observations of such systems are multidimensional signals. Since image segmentation may be viewed as detecting discontinuities, or more specifically abrupt changes in intensity, in a 2D signal, catastrophe theory may prove useful in this regard.
Catastrophe theory was introduced as a special branch of dynamical systems theory that studies sudden shifts in behavior [4]. That is why it has been used effectively to describe many cases involving discontinuous change in a variety of applications [3].
In a nutshell, catastrophe theory classifies signals based on each signal's codimension. The originators and proponents of the theory claim that the seven elementary functions that make up catastrophe theory are general enough to describe nearly all physical phenomena [5]. With regard to images, an image may be viewed as a sample from a light field [6]. In that case, the difference in dimensions between a light field and an image is 2 (the time and depth dimensions). Alternatively, it may be argued that the difference in the number of dimensions between a static light field and an image is 1 (the depth dimension). This allows one to conclude that the potentially useful parts of catastrophe theory are those that apply to signals of codimension 1 and 2.
II. IMAGE SEGMENTATION TECHNIQUES
Image segments are generally groups of pixels whose grouping conveys some meaning. This section provides a brief review of image segmentation techniques.
A. Segmentation based on edge detection
This approach consists of observing the pixels of different regions that exhibit rapid and sudden transitions in intensity. These pixels are linked to form a closed object boundary in binary form. Edge detection methods need a balance between detection accuracy and noise immunity, and they are best suited to simple, noise-free images. There are two main edge-based methods: gray-histogram-based and gradient-based methods [7].
1) Gray histogram based techniques
Edge detection in this case is based on selecting a threshold value and grouping pixels according to their level relative to this threshold [8]. Note that more than one threshold may be used, in which case the groupings are determined by whether the pixel level falls within a given range.
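As a purely illustrative sketch of this idea (not taken from the original study), pixels can be grouped into bands defined by one or more thresholds; the function name and the example threshold values below are hypothetical.

```python
import numpy as np

def threshold_segments(gray, thresholds):
    """Group pixels of a grayscale image into bands defined by thresholds.

    gray: 2-D uint8 array; thresholds: sorted sequence of cut points.
    The returned label map assigns label k to pixels whose level falls
    in the k-th band.
    """
    bins = np.asarray(thresholds)
    # np.digitize assigns each pixel the index of the band it falls in.
    return np.digitize(gray.ravel(), bins).reshape(gray.shape)

# Example: three bands (dark / mid / bright) from two thresholds.
# labels = threshold_segments(image, [85, 170])
```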
2) Gradient based methods
Since the gradient provides a rate of change, it can be used to identify an edge wherever its value is high [9]. Once the edges have been identified, pixels within an image can be grouped into segments. One of the best-known edge detection algorithms based on this approach is the Canny edge detector [10].
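A minimal sketch of gradient-based edge marking, assuming OpenCV Sobel derivatives and an arbitrary magnitude threshold (both are illustrative choices, not details from the paper):

```python
import cv2
import numpy as np

def gradient_edges(gray, mag_threshold=60.0):
    """Mark pixels whose gradient magnitude exceeds a threshold as edges."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal derivative
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # vertical derivative
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return (magnitude > mag_threshold).astype(np.uint8) * 255
```

The Canny detector refines this basic idea with Gaussian smoothing, non-maximum suppression, and hysteresis thresholding [10].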
B. Segmentation based on region growing or splitting algorithms
These methods tend to be useful for noisier images. The basic region-growing approach can be summarized as follows [11] (a code sketch follows the list):
- Select a group of pixels.
- Select a similarity criterion.
- Add pixels that satisfy the similarity criterion to the selected group.
- Stop adding pixels when no new pixels satisfy the similarity criterion.
Note that the pixels being operated on must be spatially connected.
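A minimal sketch of this procedure, assuming a grayscale image, a single seed pixel, an intensity-difference similarity criterion, and 4-connectivity (all of these choices are illustrative, not taken from [11]):

```python
import numpy as np
from collections import deque

def region_grow(gray, seed, tol=10):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity differs from the seed value by at most `tol`."""
    h, w = gray.shape
    seed_val = int(gray[seed])
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    frontier = deque([seed])
    while frontier:  # stops once no new pixel satisfies the criterion
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not grown[nr, nc]:
                if abs(int(gray[nr, nc]) - seed_val) <= tol:
                    grown[nr, nc] = True
                    frontier.append((nr, nc))
    return grown
```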
Region splitting, on the other hand, is based on dividing an image first into small regions and then determining the level of similarity between adjacent regions. Similar regions are grouped and so segmentation is achieved. Note that an image tends to be divided using regular shapes at first (e.g. rectangles) but there is no theoretical reason why that should be the case.
C. Segmentation based on partial differential equations
1) Snakes
These are mathematical functions used to determine the edge of a region by identifying where smooth changes in the region turn into sharp changes [11]. Note that this method has some similarities with the approach taken in this paper, but it suffers from the need for user intervention and a high level of complexity, which is not the case for the catastrophe theory based method.
2) Level set model
This model represents edges as the set of zero-level points of higher-dimensional surfaces [12]. Conceptually, the approach represents images as a "shadow" of higher-dimensional surfaces. Again, it has some similarity with the method described in this paper, but the approach in this paper applies a more consistent method across a broad set of images.
3) Mumford and Shah model
This is a generalized model that can be used to identify several regions simultaneously, without requiring a recursive treatment of the image [13].
4) C-V model
The Chan-Vese (C-V) model is a simplification of the Mumford and Shah model, implemented with level sets, that identifies regions without relying on image gradients [14].
D. Segmentation based on artificial neural networks
In general, methods that apply this approach represent each pixel as a neuron in a neural network. Such a representation allows feature extraction to take place by identifying groups of related neurons [15].
E. Segmentation based on clustering
Clustering refers to the grouping of pixels based on an a priori defined similarity criterion. Clustering may be hard or fuzzy. When hard clustering is used, a pixel is allocated to exactly one group, and that group is identified as a segment. Fuzzy clustering allows a pixel to be assigned a probability of belonging to a given cluster; further processing is then required to completely separate groups of pixels [16].
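As a purely illustrative example of hard clustering (k-means on pixel colors, which is not the graph-theoretic method of [16]), with an arbitrary cluster count and termination settings:

```python
import cv2
import numpy as np

def kmeans_segment(bgr_image, k=4):
    """Hard-cluster pixels by color with k-means; returns a per-pixel label map."""
    samples = bgr_image.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _compactness, labels, _centers = cv2.kmeans(
        samples, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    return labels.reshape(bgr_image.shape[:2])
```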
III. APPLYING CATASTROPHE THEORY TO IMAGE SEGMENTATION
This section describes the proposed method for applying catastrophe theory to image segmentation. As mentioned previously, the codimension of an image may be argued to be either 1 or 2. For such systems, the fold or cusp catastrophe functions are applicable [3]. The fold catastrophe has an unfolding that takes the shape of a sideways parabola controlled by a single control parameter. The curve produced by this unfolding indicates that the potential of the surface being analyzed has points where it changes from one state to another. In terms of image segmentation, one can view the point at which such a change takes place as the border between one image segment or object and another. In other words, if such points can be identified, then the edges present within an image may also be identified.
The fold catastrophe unfolding function, in its standard form, is given by:

V(x) = x^3 + ax

where "a" is the control parameter and the system behavior is determined by the state variable x. Setting the derivative dV/dx = 3x^2 + a to zero gives the equilibrium curve a = -3x^2, which is the sideways parabola referred to above.
It was hypothesized that if the state variable x is taken to be a spectral component of an image, for example the Y component of a YUV image or the R component of an RGB image, and if this function is applicable to image edge detection, then one should be able to observe a change in the value of the control parameter "a" that resembles the fold catastrophe. In order to test this hypothesis, a number of color images were analyzed, with the derivative approximated by the difference between neighboring horizontal pixels. The following figures show the original images and the resulting mean "a" value per column for each of the RGB components. The images were obtained from an online database [17].
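The paper does not spell out the exact computation, so the following sketch is only one plausible reading, included for illustration: treat each normalized pixel intensity as the state variable x, approximate dV/dx by the horizontal difference between neighboring pixels, and solve the fold relation for "a" per pixel before averaging over each column. All names and normalization choices below are assumptions.

```python
import numpy as np

def fold_control_parameter(channel):
    """Estimate the fold control parameter "a" per pixel for one channel.

    Assumes V(x) = x^3 + a*x with intensities normalized to [0, 1] and
    dV/dx approximated by horizontal neighbor differences; these modeling
    choices are illustrative, not taken from the paper.
    """
    x = channel.astype(np.float64) / 255.0
    dv_dx = np.diff(x, axis=1)           # horizontal neighbor difference
    a = dv_dx - 3.0 * x[:, :-1] ** 2     # from dV/dx = 3x^2 + a
    return a

def mean_a_per_column(rgb_image):
    """Mean control parameter per image column for each RGB channel."""
    return [fold_control_parameter(rgb_image[:, :, c]).mean(axis=0)
            for c in range(3)]
```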
Figure III-1 Image 196027 from test set [17]
Figure III-2 Calculated mean control parameter value for each spectral component for image 196027
Figure III-3 Image 130066 from the test set [17]
Figure III-4 Calculated mean control parameter value for each spectral component for image 130066
Figure III-5 Image 69007 from test set [17]
Figure III-6 Calculated mean control parameter value for each spectral component of image 69007
The above set of figures shows that, when this approach is used to calculate the control parameter, one indeed observes parabola-like behavior in the calculated mean value. This observation led to the conclusion that the fold catastrophe may indeed be used to identify edges within an image and hence could be usefully applied to image segmentation. However, given that a first-order derivative is effectively being applied in the calculation of the control parameter, it was further hypothesized that it could be more effective to apply this type of analysis as a pre-processing step prior to the application of a simple edge detection algorithm. Besides the application of the fold catastrophe function, a quantization step was also applied to reduce the low-level noise in the resulting image. Figure III-7 shows the results of these steps, where the input image's Y component was used.
Figure III-7 Original image (left) and the outcome of the pre-processing step
As can be seen, this pre-processing step has highlighted the edges within the image. To test the effect of this step on a widely used algorithm, it was applied to the input of the Canny algorithm. The results for some of the sample images can be seen in Figure III-8. As can be seen, the pre-processing step reduces the amount of noise in the output of the edge detection algorithm, which is a significant result in favor of applying this type of pre-processing in image segmentation.
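A sketch of such a pipeline is given below, building on the fold_control_parameter sketch above. The quantization depth and Canny thresholds are assumptions; the paper does not report the exact values used.

```python
import cv2
import numpy as np

def preprocess_then_canny(bgr_image, levels=8, low=50, high=150):
    """Illustrative pipeline: fold-based pre-processing on the Y component,
    coarse quantization to suppress low-level noise, then Canny."""
    y = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    a = fold_control_parameter(y)        # sketch defined earlier
    a_norm = cv2.normalize(a, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    step = 256 // levels
    quantized = (a_norm // step) * step  # reduce low-level noise
    return cv2.Canny(quantized, low, high)
```

A Sobel-based detector can be substituted for the final Canny step to mirror the comparison discussed below (Figure III-9).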
IV. CONCLUSION
This paper has presented a novel approach to image segmentation, namely the use of catastrophe theory for image analysis prior to the application of an off-the-shelf segmentation or edge detection algorithm. The results presented show that this is a viable approach that could lead to the development of new, better, and less complex segmentation algorithms. More analysis will need to be conducted to prove the viability of this approach, but the results so far are encouraging.
Figure III-8 Effect of the pre-processing step on the output of the Canny edge detector
Finally, Figure III-9 shows the effect of the pre-processing step on the output of a simple edge detector, the Sobel edge detector, compared with the output of the Canny edge detector without the pre-processing step. As can be seen, the result of the much simpler edge detector (Sobel) is now comparable with that of the Canny edge detector.
Figure III-9 Output of the Canny edge detector compared with the output of the Sobel edge detector (right column) with pre-processing
REFERENCES
[1] Y.-J. Zhang, Advances in Image and Video Segmentation. IGI Global, 2006.
[2] M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision. Cengage Learning, 2014.
[3] D. P. L. Castrigiano and S. A. Hayes, Catastrophe Theory, 2nd ed. Colorado, USA: Westview Press, 2004.
[4] T. Poston and I. Stewart, Catastrophe Theory and Its Applications. New York, USA: Dover Publications, 1978.
[5] E. C. Zeeman, Catastrophe Theory. Springer, 1979.
[6] M. Levoy and P. Hanrahan, "Light field rendering," in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, ACM, 1996, pp. 31-42.
[7] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, "Contour detection and hierarchical image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 898-916, 2011.
[8] J. N. Kapur, P. K. Sahoo, and A. K. Wong, "A new method for gray-level picture thresholding using the entropy of the histogram," Computer Vision, Graphics, and Image Processing, vol. 29, no. 3, pp. 273-285, 1985.
[9] J. Freixenet, X. Muñoz, D. Raba, J. Martí, and X. Cufí, "Yet another survey on image segmentation: Region and boundary information integration," Computer Vision – ECCV 2002, pp. 21-25, 2002.
[10] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 6, pp. 679-698, 1986.
[11] S. C. Zhu and A. Yuille, "Region competition: Unifying snakes, region growing, and Bayes/MDL for multiband image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 9, pp. 884-900, 1996.
[12] D. Cremers, M. Rousson, and R. Deriche, "A review of statistical approaches to level set segmentation: Integrating color, texture, motion and shape," International Journal of Computer Vision, vol. 72, no. 2, pp. 195-215, 2007.
[13] L. A. Vese and T. F. Chan, "A multiphase level set framework for image segmentation using the Mumford and Shah model," International Journal of Computer Vision, vol. 50, no. 3, pp. 271-293, 2002.
[14] T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266-277, 2001.
[15] G. Kuntimad and H. S. Ranganath, "Perfect image segmentation using pulse coupled neural networks," IEEE Transactions on Neural Networks, vol. 10, no. 3, pp. 591-598, 1999.
[16] Z. Wu and R. Leahy, "An optimal graph theoretic approach to data clustering: Theory and its application to image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1101-1113, 1993.
[17] "SEISM – Supervised Evaluation of Image Segmentation Methods." Available: http://www.vision.ee.ethz.ch/~cvlsegmentation/seism/browse.php