- Open Access
- Authors : Arpita Pattanaik, Dr. Satyasis Mishra, Debaraj Rana
- Paper ID : IJERTV4IS031077
- Volume & Issue : Volume 04, Issue 03 (March 2015)
- DOI : http://dx.doi.org/10.17577/IJERTV4IS031077
- Published (First Online): 30-03-2015
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Comparative Study of Edge Detection using Renyi Entropy and Differential Evolution
Arpita Pattanaik, M.Tech Scholar, Dept. of ECE, CUTM, Bhubaneswar, Odisha, India
Dr. Satyasis Mishra, Associate Professor, Dept. of ECE, CUTM, Bhubaneswar, Odisha, India
Debaraj Rana, Asst. Professor, Dept. of ECE, CUTM, Bhubaneswar, Odisha, India
Abstract: Edge detection is an important task in image processing. An edge is defined as the boundary between two regions with relatively distinct grey-level properties. Traditional edge detection methods give rise to an exponential increase in computational time. In this paper, edge detection in grey-level images is performed using the Renyi entropy and the differential evolution (DE) algorithm. The Renyi entropy is a one-parameter generalization of the Shannon entropy; here it is calculated from the one-dimensional histogram of the image. Differential evolution is an efficient and powerful population-based stochastic search technique for solving optimization problems, and it has been widely applied in many scientific and engineering fields. The selection of the initial population in a population-based heuristic optimization method is most important, as it affects the search over a number of iterations and influences the final solution. If prior information about the optima is not available, the initial population is selected randomly using pseudorandom numbers. The main advantages of the DE algorithm are its simple structure, ease of use, speed and robustness.
I. INTRODUCTION

Edge detection is a method that identifies the points in a digital image at which the image brightness changes sharply. An edge is the boundary between two regions with relatively distinct grey-level properties. Applications of edge detection include image enhancement, watermarking, compression and restoration. Traditional edge detection algorithms are based on the computation of the intensity gradient vector, which is very sensitive to noise in the image. To reduce the noise, some spatial averaging may be combined with the differentiation, as in the Laplacian of Gaussian (LoG) operator. However, this method uses a 2-D linear filter that behaves like a second-order derivative and is therefore also sensitive to noise; moreover, its magnitude response produces double edges and gives undesirable effects due to incomplete segmentation. For this reason the Laplacian is combined with smoothing and edges are found via zero crossings, which is also time consuming. In this paper we propose a technique based on information theory, namely the Renyi entropy, which reduces the computation time. To compare with the results of the Renyi entropy method we also apply an optimization algorithm, the differential evolution (DE) algorithm. DE is an evolutionary algorithm used to find near-optimal solutions; optimization algorithms of this kind are mostly used to minimize the total cost or maximize the possible reliability.
II. RENYI ENTROPY

Shannon entropy

Entropy is normally used to measure the amount of information, and it is defined in terms of the probabilistic behaviour of a source of information. Let k be the total number of states, with events e_1, e_2, ..., e_k having corresponding probabilities p_1, p_2, ..., p_k, where \sum_{i=1}^{k} p_i = 1 and 0 \le p_i \le 1. The Shannon entropy is defined as:

S = -\sum_{i=1}^{k} p_i \ln(p_i)    (1)

If we consider two independent subsystems A and B, the Shannon entropy satisfies the additivity (extensivity) property S(A+B) = S(A) + S(B).

Renyi's entropy

The Renyi entropy H_\alpha(P) is defined as:

H_\alpha(P) = \frac{1}{1-\alpha} \ln \left( \sum_{i=1}^{k} p_i^{\alpha} \right)    (2)

where \alpha \ne 1 is a positive real parameter called the order of the Renyi entropy. The Shannon entropy is the special case of the Renyi entropy obtained in the limit \alpha \to 1.
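As a concrete illustration of equations (1) and (2), the short sketch below computes both measures from a normalized grey-level histogram. It is only an illustrative helper, assuming the probabilities are stored in a NumPy vector; the function names are ours, not part of the paper.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy S = -sum(p_i * ln p_i) of a probability vector, eq. (1)."""
    p = p[p > 0]                      # drop zero-probability bins
    return -np.sum(p * np.log(p))

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha (alpha > 0, alpha != 1), eq. (2)."""
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Example: build a probability vector from an 8-bit image histogram
# hist, _ = np.histogram(image, bins=256, range=(0, 256))
# p = hist / hist.sum()
# print(shannon_entropy(p), renyi_entropy(p, alpha=0.5))
```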
Selection of suitable threshold value using Renyi entropy

Let p_i, i = 1, 2, ..., k, be the probability distribution of an image with k grey levels. For a threshold t we form two probability distributions, one for the object (class A) and the other for the background (class B), as follows:

A: p_0/P_A, p_1/P_A, ..., p_t/P_A    (3)

B: p_{t+1}/P_B, p_{t+2}/P_B, ..., p_k/P_B    (4)
where

P_A = \sum_{i=0}^{t} p_i, \qquad P_B = \sum_{i=t+1}^{k} p_i = 1 - P_A

For grey-level images we take k = 255. The Renyi entropies of the two classes are then:

H_A^{\alpha}(t) = \frac{1}{1-\alpha} \ln \sum_{i=0}^{t} \left( \frac{p_i}{P_A} \right)^{\alpha}    (5)

H_B^{\alpha}(t) = \frac{1}{1-\alpha} \ln \sum_{i=t+1}^{255} \left( \frac{p_i}{P_B} \right)^{\alpha}    (6)

The Renyi entropies H_A^{\alpha}(t) and H_B^{\alpha}(t) depend on the threshold t through the object and background classes. We try to maximize the information measure between the two classes (object and background): when H_A^{\alpha}(t) + H_B^{\alpha}(t) is maximized, the luminance level t that maximizes the function is considered to be the optimum threshold value [1]:

t^*(\alpha) = \arg\max_{t} \left[ H_A^{\alpha}(t) + H_B^{\alpha}(t) \right]    (7)
The technique consists of treating each pixel of the original image and creating a new image such that f(x, y) = 0 if f(x, y) \le t^*(\alpha) and f(x, y) = 1 otherwise, for every 1 \le x \le M and 1 \le y \le N. Since the Shannon entropy measure is a special case of the Renyi entropy, the following expression can be used as a criterion function to obtain the optimal threshold as \alpha \to 1 [1, 4, 5]:

t^*(1) = \arg\max_{t} \left[ H_A(t) + H_B(t) \right]    (8)
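Equation (7) can be realized by an exhaustive search over all candidate thresholds t, evaluating the two class entropies from the image histogram. The following sketch is one possible reading of that procedure; the helper names and the assumption of an 8-bit grey-level image are ours:

```python
import numpy as np

def renyi_class_entropy(p, alpha):
    """Renyi entropy of a renormalized class distribution, eqs. (5)/(6)."""
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def renyi_criterion(hist, t, alpha):
    """H_A^alpha(t) + H_B^alpha(t) for threshold t, following eqs. (5)-(7)."""
    p = hist / hist.sum()
    pa, pb = p[:t + 1].sum(), p[t + 1:].sum()
    if pa == 0 or pb == 0:
        return -np.inf                       # degenerate split, skip
    return (renyi_class_entropy(p[:t + 1] / pa, alpha) +
            renyi_class_entropy(p[t + 1:] / pb, alpha))

def optimal_threshold(image, alpha):
    """Exhaustive search for t* maximizing the Renyi criterion."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    scores = [renyi_criterion(hist, t, alpha) for t in range(256)]
    return int(np.argmax(scores))
```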
III. EDGE DETECTION USING RENYI ENTROPY

Here we use a 3×3 mask (Fig. 1), which removes noise from the image and then detects the edges. In order to detect the edges, we first create a binary image by choosing a suitable threshold value, obtained with the Renyi entropy method described above [1]. The 3×3 mask is then applied to the binary image. All the mask coefficients are set equal to one except the centre pixel, whose value is set to X.

1 1 1
1 X 1
1 1 1

Fig 1. 3×3 matrix with centre pixel equal to X

After moving the window over the binary image we find the probability of each central pixel. The entropy of each central pixel is S(C) = -p_c \ln(p_c), where p_c is the probability of the central pixel C of the binary image under the window. When p_c = 1 the entropy of the pixel is 0; thus, if all the pixels under the window are homogeneous, p_c = 1 and S = 0, and in this case the central pixel is not an edge pixel. The probability and entropy of the central pixel are given in Table 1.

TABLE 1
P AND S OF CENTRAL PIXEL UNDER WINDOW

P | 1/9 | 2/9 | 3/9 | 4/9 | 5/9 | 6/9 | 7/9 | 8/9
S | 0.2441 | 0.3342 | 0.3662 | 0.3604 | 0.3265 | 0.2703 | 0.1955 | 0.1047

When the window sum is 8, p_c = 8/9, and when the sum is 9, p_c = 9/9; in these cases we cannot consider the central pixel to be an edge pixel, because the diversity of the grey levels of the pixels under the window is low. When the sum is at most 6 and p_c \le 6/9, the diversity of the grey levels of the pixels under the window is high.
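A minimal sketch of this window rule is given below: the binary image is scanned with a 3×3 window and the central pixel is marked as an edge pixel only when the window sum indicates high diversity. The exact decision range [1, 6] is our reading of the rule above, and the function name is illustrative:

```python
import numpy as np

def edge_map_from_binary(binary_img, low=1, high=6):
    """Slide a 3x3 window over a 0/1 image and mark edge pixels.

    A central pixel is treated as an edge pixel when the window sum lies
    in [low, high], i.e. the window is neither all background nor
    (almost) all foreground -- one reading of the diversity rule above.
    """
    h, w = binary_img.shape
    edges = np.zeros_like(binary_img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window_sum = binary_img[y - 1:y + 2, x - 1:x + 2].sum()
            if low <= window_sum <= high:
                edges[y, x] = 1
    return edges
```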
IV. DIFFERENTIAL EVOLUTION

Differential evolution was introduced by Rainer Storn and Kenneth Price in 1995. DE is a stochastic, population-based optimization algorithm used for effectively finding near-optimal solutions. Such optimization algorithms are mainly used to minimize the total cost or maximize the possible reliability, and they can be used in the fields of science, engineering, etc. In DE each variable's value is represented as a real number. The advantages of DE are its simple structure, ease of use, speed, robustness and flexibility [11]. Each generation proceeds through initialization, mutation, recombination and selection, as shown in Fig. 2.

Fig. 2. General evolutionary algorithm procedure

Initialization:
To optimize a function we first select the population size N and the number of real parameters D. Each parameter vector has the form X_{i,G} = [x_{1,i,G}, x_{2,i,G}, ..., x_{D,i,G}], where i = 1, 2, ..., N, j = 1, 2, ..., D and G is the generation number. The initial population is chosen randomly if no information is available about the problem. If a preliminary solution is available, the initial population is often generated by adding normally distributed random deviations to that solution.

Mutation:
Mutation expands the search process. For a given parameter vector X_{i,G}, three vectors X_{r1,G}, X_{r2,G}, X_{r3,G} are selected at random, and the weighted difference of two of the vectors is added to the third:

V_{i,G+1} = X_{r1,G} + F (X_{r2,G} - X_{r3,G})    (9)

The mutation factor F is a constant from [0, 1], and V_{i,G+1} is called the donor vector.
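A minimal sketch of this mutation step for a single target index i, assuming the population is stored as a NumPy array of shape (N, D) with N at least 4, is shown below (names are illustrative, not from the paper):

```python
import numpy as np

def de_mutation(pop, i, F=0.8):
    """DE/rand/1 mutation, eq. (9): v = x_r1 + F * (x_r2 - x_r3)."""
    N = len(pop)
    # pick three distinct indices, all different from the target index i
    r1, r2, r3 = np.random.choice([j for j in range(N) if j != i],
                                  size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])
```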
Recombination:
Recombination incorporates successful solutions from the previous generation. The trial vector U_{i,G+1} is developed from the elements of the target vector X_{i,G} and the elements of the donor vector V_{i,G+1}; elements of the donor vector enter the trial vector with probability CR:
U_{j,i,G+1} = V_{j,i,G+1}  if rand_{j,i} \le CR or j = I_{rand}
U_{j,i,G+1} = X_{j,i,G}    if rand_{j,i} > CR and j \ne I_{rand}    (10)

where I_{rand} is a random integer from [1, 2, ..., D]; it ensures that the trial vector U_{i,G+1} differs from X_{i,G} in at least one component.

Selection:
The target vector X_{i,G} is compared with the trial vector U_{i,G+1}, and the one with the lowest function value is admitted to the next generation:

X_{i,G+1} = U_{i,G+1}  if f(U_{i,G+1}) \le f(X_{i,G}), i = 1, ..., N
X_{i,G+1} = X_{i,G}    otherwise    (11)

Mutation, recombination and selection continue until some stopping criterion is reached [11].
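Continuing the sketch above, binomial crossover (equation 10) and greedy selection (equation 11) for one target vector could be written as follows, where f is the cost function being minimized (again, the helper names are ours):

```python
import numpy as np

def de_crossover(target, donor, CR=0.9):
    """Binomial crossover, eq. (10): build the trial vector."""
    D = len(target)
    j_rand = np.random.randint(D)           # I_rand: forces one donor component
    mask = (np.random.rand(D) <= CR)
    mask[j_rand] = True
    return np.where(mask, donor, target)

def de_selection(target, trial, f):
    """Greedy selection, eq. (11): keep the vector with the lower cost."""
    return trial if f(trial) <= f(target) else target
```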
TABLE 2
VALUES OF OPTIMUM THRESHOLD AND CLASSIFICATION ACCURACY (RENYI ENTROPY METHOD)
(A: image 241004, B: image 300091, C: image 119082)

α    | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9
A t* | 135 | 134 | 130 | 123 | 119 | 117 | 116 | 114 | 113
A ca | 0.6412 | 0.6373 | 0.6254 | 0.6223 | 0.6219 | 0.6209 | 0.6203 | 0.619 | –
B t* | 135 | 140 | 146 | 148 | 148 | 148 | 150 | 151 | 151
B ca | 0.8458 | 0.8468 | 0.8465 | 0.8458 | 0.8458 | 0.8450 | 0.8448 | 0.8448 | 0.8448
C t* | 124 | 121 | 118 | 115 | 113 | 112 | 110 | 109 | 108
C ca | 0.6099 | 0.6097 | 0.5994 | 0.5933 | 0.5865 | 0.5823 | 0.5744 | 0.5704 | 0.5658
Variants of DE:
There are numerous variants of DE, denoted DE/X/Y/Z, where X is the vector to be mutated (e.g. rand or best), Y is the number of difference vectors used, and Z denotes the crossover scheme. The most commonly used variant is DE/rand/1/bin, where rand means a random individual (r1) is chosen for mutation, 1 means a single difference vector (r2 - r3) is used, and bin means crossover is performed according to independent binomial experiments [12]. Another very useful variant is DE/best/2/bin:

V_{i,G+1} = X_{best,G} + F (X_{r1,G} + X_{r2,G} - X_{r3,G} - X_{r4,G})    (12)

Here the best individual in the population is mutated; the use of two difference vectors seems to improve diversity.
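Putting the pieces together, a compact DE/rand/1/bin loop that searches for the threshold maximizing the Renyi criterion of equation (7) might look as follows. This is only a sketch under our own naming: it reuses the hypothetical renyi_criterion helper from the earlier snippet, treats the threshold as a single real parameter (D = 1), and is not the authors' exact implementation.

```python
import numpy as np

def de_optimal_threshold(hist, alpha, N=20, F=0.8, CR=0.9, generations=100):
    """DE/rand/1/bin search for the threshold t* maximizing eq. (7).

    The cost is the negative Renyi criterion, so minimizing it maximizes
    H_A^alpha(t) + H_B^alpha(t). D = 1: the threshold is the only parameter.
    """
    def cost(x):
        t = int(np.clip(round(x[0]), 0, 255))
        return -renyi_criterion(hist, t, alpha)   # helper defined earlier

    pop = np.random.uniform(0, 255, size=(N, 1))  # initialization
    costs = np.array([cost(x) for x in pop])

    for _ in range(generations):
        for i in range(N):
            r1, r2, r3 = np.random.choice(
                [j for j in range(N) if j != i], size=3, replace=False)
            donor = pop[r1] + F * (pop[r2] - pop[r3])          # mutation
            j_rand = np.random.randint(pop.shape[1])
            mask = np.random.rand(pop.shape[1]) <= CR
            mask[j_rand] = True
            trial = np.where(mask, donor, pop[i])              # recombination
            c = cost(trial)
            if c <= costs[i]:                                  # selection
                pop[i], costs[i] = trial, c

    best = pop[int(np.argmin(costs))]
    return int(np.clip(round(best[0]), 0, 255))
```

In practice the population size, number of generations and the control parameters F and CR would need to be tuned for the image set at hand.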
TABLE 3
OPTIMUM THRESHOLD VALUE AND CLASSIFICATION ACCURACY (DE METHOD)
(A: image 241004, B: image 300091, C: image 119082)

α    | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9
A t* | 241 | 241 | 241 | 241 | 241 | 241 | 241 | 241 | 241
A ca | 0.6884 | 0.6884 | 0.6884 | 0.6884 | 0.6884 | 0.6884 | 0.6884 | 0.6884 | 0.6884
B t* | 241 | 241 | 241 | 241 | 241 | 241 | 241 | 241 | 241
B ca | 0.8261 | 0.8261 | 0.8261 | 0.8261 | 0.8261 | 0.8261 | 0.8261 | 0.8261 | 0.8261
C t* | 241 | 241 | 241 | 241 | 241 | 241 | 241 | 241 | 241
C ca | 0.6051 | 0.6051 | 0.6051 | 0.6051 | 0.6051 | 0.6081 | 0.6081 | 0.6081 | 0.6081
V. ANALYSIS OF EXPERIMENTAL RESULTS

The performance of the proposed scheme is evaluated through simulation results obtained with MATLAB for different test images. For this purpose, standard test images were taken from the online Berkeley Segmentation Database (BSDS300) [9]. Here we have presented a set of only twenty test images, and the results of the Renyi entropy method are compared with the results of the well-established differential evolution (DE) optimization algorithm on the same set of test images. Differential evolution is chosen for comparison because of its simple structure, ease of use, speed and robustness. Our analysis is based on how much information is lost due to thresholding: given two thresholded versions of the same original image, we prefer the one that loses the least amount of information. The optimal threshold value was computed by the proposed method for these images [1].

Table 2 lists the optimal threshold values found for these images for α equal to 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9 with the Renyi entropy method. The original images, their histograms and the thresholded images obtained with the optimal threshold t* for some values of α are shown in figures 3, 4 and 5. From these images we conclude that when α lies between 0 and 1, the proposed method produces good optimal threshold values. A denotes the image 241004, B denotes the image 300091 and C denotes the image 119082. Table 2 contains the optimum threshold values and classification accuracies obtained from the Renyi entropy method, while Table 3 contains the optimum threshold values and classification accuracies of differential evolution for the same images. Figures 6, 7 and 8 show the output of the DE method.
Fig. 3. (a) Original Image (b) histogram of image (c) ground truth image (d) edge detection using Renyi method alpha=0.2, t*=134
The edge-detected outputs are shown alongside the corresponding ground truth images. We have executed the Renyi algorithm for different values of alpha and t* on a variety of images, including indoor and outdoor scenes.
Fig. 4. (a) Original Image (b) histogram of image (c) ground truth image (d) edge detection using Renyi method alpha=0.3, t*=146
Fig. 7. (a) Original Image (b) histogram of image (c) ground truth image (d) edge detection using DE method alpha=0.3, t*=241
Fig. 5. (a) Original Image (b) histogram of image (c) ground truth image (d) edge detection using Renyi method alpha=0.9, t*=109
We have tested the same images for edge detection using differential evolution; the results are shown in figures 6, 7 and 8 for different alpha and t* values.
Fig. 6. (a) Original Image (b) histogram of image (c) ground truth image (d) edge detection using DE method alpha=0.7, t*=201
Fig. 8. (a) Original Image (b) histogram of image (c) ground truth image (d) edge detection using DE method alpha=0.5, t*=241
VI. CONCLUSION
The main objective of this work is to study and understand the process of edge detection in natural and synthetic images. An entropy-based approach was chosen to perform the image edge detection task. In this regard the well-known Renyi entropy has been used for the application at hand: it serves as the basis for automatic thresholding, with the entropy calculated from the one-dimensional histogram of the image. The following strengths of the Renyi entropy metric were readily identified: it is numerically robust, computationally fast and easy to implement. For effective thresholding the Renyi entropy must be maximized at the threshold point, so the problem of finding the maximum of the Renyi entropy can be posed as an optimization problem.

In this paper DE is used to find the optimized value of the threshold. To measure the effectiveness of the thresholding and edge detection process, the classification accuracy (CA) has been used. The results justify that the Renyi entropy can be used as a means for automatic thresholding and edge detection of images, and that the combination of Renyi entropy and DE can be used effectively for the edge detection application. The order α can be used as an adjustable value and can play an important role as a tuning parameter of the edge detection chain for the same class of images.
REFERENCES
[1]. M. A. El-Sayed and M. A. Khfagy, "Using Renyi's entropy for edge detection in level images", IJICIS, vol. 11, no. 2, July 2011.
[2]. M. P. de Albuquerque, I. A. Esquef, A. R. Gesualdi Mello, "Image thresholding using Tsallis entropy", Pattern Recognition Letters, 25 (2004) 1059-1065.
[3]. P. K. Sahoo, S. Soltani, A. K. C. Wong, Y. C. Chen, "A survey of the thresholding techniques", Computer Vision, Graphics and Image Processing, 41 (1988) 233-260.
[4]. Prasanna K. Sahoo and Gurdial Arora, "A thresholding method based on two-dimensional Renyi's entropy", Pattern Recognition, 37 (2004) 1149-1161.
[5]. P. Sahoo, C. Wilkins, and J. Yeager, "Threshold selection using Renyi's entropy", Pattern Recognition, (1997), pp. 71-84.
[6]. Tomasz Maszczyk and Wlodzislaw Duch, "Comparison of Shannon, Renyi and Tsallis entropy used in decision trees", Lecture Notes in Computer Science, vol. 5097, pp. 643-651, 2008.
[7]. Rainer Storn and Kenneth Price, "Differential Evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces", International Computer Science Institute.
[8]. Vasan Arunachalam, "Optimization Using Differential Evolution", The University of Western Ontario, London, Ontario, Canada, July 2008.
[9]. Berkeley image database, https://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/BSDS300/html/dataset/images.html.
[10]. Debaraj Rana, Sunita Dalai, "Review on traditional methods of edge detection to morphological based techniques", International Journal of Computer Science and Information Technologies, vol. 5 (4), 2014, pp. 5915-5920.
[11]. Fleetwood K., "An Introduction to Differential Evolution", http://www.maths.uq.edu.au/MASCOS/Multi-Agent04/Fleetwood.pdf.
[12]. Philipp Rohlfshagen, "Differential Evolution and Particle Swarm Optimisation", Lecture 16, http://www.cs.bham.ac.uk/~pkl/teaching/2008/ec/lecture_notes/l15-DE-PSO.pdf.
[13]. S. Das, A. Abraham, U. K. Chakraborty, and A. Konar, "Differential evolution using a neighborhood-based mutation operator", IEEE Trans. Evol. Comput., vol. 13, no. 3, pp. 526-553, Jun. 2009.
[14]. K. Price, R. Storn, and J. Lampinen, Differential Evolution: A Practical Approach to Global Optimization. Berlin, Germany: Springer, 2005.
[15]. F. Neri and V. Tirronen, "Recent advances in differential evolution: a review and experimental analysis", Artif. Intell. Rev., vol. 33, no. 1, pp. 61-106, Feb. 2010.