- Open Access
- Authors : Ms. Sunita Dalai, Ms. Manjushree Jena
- Paper ID : IJERTV3IS030831
- Volume & Issue : Volume 03, Issue 03 (March 2014)
- Published (First Online): 24-03-2014
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Object Recognition using Higher order Moments and Comparative Study Between Shape Space and Moment Invariant Method
Ms. Sunita Dalai, Asst. Prof., Dept. of ECE, Centurion University of Technology & Management, Bhubaneswar, Odisha, India
Ms. Manjushree Jena, M.Tech Scholar, Dept. of ECE, Centurion University of Technology & Management, Bhubaneswar, Odisha, India
Abstract- In this paper, invariant features for the boundary representation of an image are extracted using higher order moments. Moment-based invariants, in various forms, have been widely used over the years as features for recognition in many areas of image analysis. The moment-based method used here combines the original moment invariants with the contour moment invariants into what is called a relative contour moment invariant. The algorithm is discussed and tested for invariance to scaling, translation and rotation. Because all possible views of an object produced by translation, scaling and rotation can be represented as a single point, we also describe the shape space method, which represents objects as points on a high-dimensional surface for efficient object recognition. Finally, the relative contour moment invariant method and the shape space method are compared.
Keywords- complex hyperplane, landmarks, manifold, moment invariant, object recognition
INTRODUCTION
Object recognition is of considerable interest in the field of image analysis and computer vision. It is the process of matching a test image against database images irrespective of the translation, rotation, scaling and occlusion of the test image [4]. Because shape representation plays a very important role in object recognition, a large number of shape description techniques have been studied. Generally speaking, these methods can be divided into two categories:
Region-based object recognition:
The region-based techniques take into account the whole area of the object, i.e., the region bounded by the boundary line.
Boundary-based object recognition:
The boundary-based techniques concentrate only on the object's boundary lines. Boundary-based methods are more popular than region-based methods because the size of the boundary information is significantly smaller than the original 2-D object images [1].
To adopt boundary-based techniques we have to perform edge tracing, which is one of the basic topics in image processing [4]. The aim of the process is to evaluate the image information and reduce it to an adequate contour line by eliminating unnecessary information that would otherwise slow down the recognition process. Edge tracing plays an important role in object recognition and can be explained as follows: during recognition the human visual system first looks at an object and runs an eye over its contour points. After tracing the contour points, geometric information about the object is obtained. This information is transformed into electrical signals and transmitted to the brain, and the recognition process is completed. When this approach is applied to artificial vision systems, it is essential to trace the edges of objects effectively in order to achieve successful recognition [2].
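As an illustration of the edge-tracing step, the following is a minimal sketch in Python, assuming OpenCV 4 and a binary input image; the helper name trace_contour and the toy square image are ours, not part of the paper.

```python
import cv2
import numpy as np

def trace_contour(binary_image):
    """Return the pixel coordinates of the largest outer contour in a binary image."""
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    # Keep only the largest boundary, assumed to belong to the object of interest.
    largest = max(contours, key=cv2.contourArea)
    return largest.reshape(-1, 2)            # (N, 2) array of (x, y) contour pixels

# Example: a filled square on a black background.
img = np.zeros((100, 100), dtype=np.uint8)
cv2.rectangle(img, (20, 30), (70, 80), 255, thickness=-1)
boundary = trace_contour(img)
print(boundary.shape)                         # roughly the square's perimeter length
```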
Many shape descriptors have been developed to describe the boundaries of different patterns, including global features such as moment invariants, Fourier descriptors, autoregressive models and eigenvalues of the Dirichlet Laplacian, as well as local features such as chain codes, shape context, curvature scale space (CSS) representation and wavelet descriptors [2]. The advantage of methods using global features is that they usually require less computational effort and are invariant to size, translation and orientation changes. Boundary-based methods are more popular than region-based methods because the size of the boundary information is significantly smaller than the original 2-D object images. One of the boundary-based techniques, the moment invariant technique, is discussed here.
OBJECT RECOGNITION USING DISCRETE MOMENT INVARIANT
Recognition is a basic human capability; pattern recognition is its automatic counterpart in a computer system. When a computer system recognizes an image, there is a value that remains unchanged regardless of scaling, translation and rotation of the image: the moment invariant. This method was introduced by M. K. Hu in 1962. He gave the definition and properties of moment invariants of continuous functions, proved their invariance to scaling, translation and rotation, and derived seven moment invariant functions for continuous functions [1]. A recognition test was performed on two letters using a computer. Hu's moment invariant functions require computation over all pixels in the target area, and although several algorithms have been investigated, they are fairly time-consuming. The pixels on a target contour are generally far fewer than the pixels in the target area. A contour moment invariant was therefore proposed that only requires computing moments over the contour, and its scaling, translation and rotation invariance was also proved. Clearly this algorithm has a considerable advantage; we call it the contour moment invariant to distinguish it from Hu's moment invariant. The contour moment invariant above is a moment invariant of continuous functions: it has scale, translation and rotation invariance in the continuous case. An image, however, is discrete, and for discrete functions the contour moment invariant retains only translation and rotation invariance, not scale invariance [5]. For this case a new contour moment invariant is presented in this paper which has scale, translation and rotation invariance. The algorithm is applied to image recognition and validated experimentally; it is called the relative contour moment invariant algorithm.
The definition of the contour moment invariant for a discrete function:
Above, the contour moment was discussed for continuous functions, but in practice discrete functions are used, so the contour moment invariant for discrete functions must be studied [5]. The moment of order (p+q) of a discrete function $f(x, y)$, evaluated over the contour pixels, is defined as

$m_{pq} = \sum_x \sum_y x^p y^q f(x, y),$

and the corresponding central moments are

$\mu_{pq} = \sum_x \sum_y (x - \bar{x})^p (y - \bar{y})^q f(x, y), \qquad \bar{x} = m_{10}/m_{00}, \quad \bar{y} = m_{01}/m_{00}.$
From the above, with the normalized central moments $\eta_{pq} = \mu_{pq}/\mu_{00}^{(p+q)/2+1}$, the seven moment invariants are

$\phi_1 = \eta_{20} + \eta_{02}$
$\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$
$\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$
$\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$
$\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$
$\phi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$
$\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$
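The discrete contour moments and the seven invariants above can be computed directly from a list of boundary pixels. The sketch below follows the standard Hu formulas in plain NumPy; it is an illustration under our notation (m, mu, eta, phi), not the authors' implementation.

```python
import numpy as np

def contour_moments(points):
    """Raw, central and normalized central moments of a set of contour pixels (f = 1 on the contour)."""
    x, y = points[:, 0].astype(float), points[:, 1].astype(float)
    m = lambda p, q: np.sum(x**p * y**q)                          # raw moment m_pq
    xbar, ybar = m(1, 0) / m(0, 0), m(0, 1) / m(0, 0)             # centroid
    mu = lambda p, q: np.sum((x - xbar)**p * (y - ybar)**q)       # central moment mu_pq
    eta = lambda p, q: mu(p, q) / mu(0, 0)**((p + q) / 2 + 1)     # normalized moment eta_pq
    return eta

def hu_invariants(points):
    """Hu's seven invariants phi1..phi7 computed from contour pixels."""
    eta = contour_moments(points)
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi1 = e20 + e02
    phi2 = (e20 - e02)**2 + 4 * e11**2
    phi3 = (e30 - 3*e12)**2 + (3*e21 - e03)**2
    phi4 = (e30 + e12)**2 + (e21 + e03)**2
    phi5 = ((e30 - 3*e12)*(e30 + e12)*((e30 + e12)**2 - 3*(e21 + e03)**2)
            + (3*e21 - e03)*(e21 + e03)*(3*(e30 + e12)**2 - (e21 + e03)**2))
    phi6 = ((e20 - e02)*((e30 + e12)**2 - (e21 + e03)**2)
            + 4*e11*(e30 + e12)*(e21 + e03))
    phi7 = ((3*e21 - e03)*(e30 + e12)*((e30 + e12)**2 - 3*(e21 + e03)**2)
            - (e30 - 3*e12)*(e21 + e03)*(3*(e30 + e12)**2 - (e21 + e03)**2))
    return np.array([phi1, phi2, phi3, phi4, phi5, phi6, phi7])

# Toy usage on a small octagonal set of boundary points.
pts = np.array([[10, 0], [7, 7], [0, 10], [-7, 7], [-10, 0], [-7, -7], [0, -10], [7, -7]])
print(hu_invariants(pts))
```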
Here we prove the translation and rotation invariance, and then discuss how the moments change when the image is magnified or reduced in the discrete case.
The translation invariance: Let $\Delta x$ be the translation in the $x$ direction and $\Delta y$ the translation in the $y$ direction; $(x, y)$ is the coordinate before translation and $(x', y')$ the coordinate after translation, so that

$x' = x + \Delta x, \qquad y' = y + \Delta y.$

We can obtain $m'_{pq}$:

$m'_{pq} = \sum_{x'} \sum_{y'} (x')^p (y')^q f'(x', y') = \sum_x \sum_y (x + \Delta x)^p (y + \Delta y)^q f(x, y).$

The relation between $f(x, y)$ and $f'(x', y')$ is

$f'(x', y') = f(x, y),$

and the centroid shifts accordingly, $\bar{x}' = \bar{x} + \Delta x$, $\bar{y}' = \bar{y} + \Delta y$. Now we can obtain the following equation:

$\mu'_{pq} = \sum_{x'} \sum_{y'} (x' - \bar{x}')^p (y' - \bar{y}')^q f'(x', y') = \sum_x \sum_y (x - \bar{x})^p (y - \bar{y})^q f(x, y) = \mu_{pq}.$

Thus $\mu_{pq}$, and hence $\eta_{pq}$, is proved to have translation invariance. In terms of the above relation, the seven moments $\phi_1$–$\phi_7$ have translation invariance.
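A quick numerical check of the translation argument, using an arbitrary toy point set (not data from the paper): the central moments of the shifted points equal the originals.

```python
import numpy as np

def central_moment(points, p, q):
    """mu_pq of a point set with unit weight per point."""
    x, y = points[:, 0].astype(float), points[:, 1].astype(float)
    xbar, ybar = x.mean(), y.mean()
    return np.sum((x - xbar)**p * (y - ybar)**q)

pts = np.array([[3, 1], [7, 2], [9, 6], [5, 9], [2, 5]])
shifted = pts + np.array([40, -13])            # translate by (dx, dy) = (40, -13)

for p, q in [(2, 0), (1, 1), (0, 2), (3, 0)]:
    assert np.isclose(central_moment(pts, p, q), central_moment(shifted, p, q))
print("central moments are unchanged by translation")
```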
The rotation invariance: If $\theta$ is the image's rotation angle, $(x, y)$ the coordinate before rotation and $(x', y')$ the coordinate after rotation, then

$x' = x\cos\theta + y\sin\theta, \qquad y' = -x\sin\theta + y\cos\theta,$

with $f'(x', y') = f(x, y)$ and, as before, $\mu'_{00} = \mu_{00}$. From the above we can obtain the following equation:

$\mu'_{20} = \mu_{20}\cos^2\theta + 2\mu_{11}\sin\theta\cos\theta + \mu_{02}\sin^2\theta.$

Similarly, we can obtain the formula

$\mu'_{02} = \mu_{20}\sin^2\theta - 2\mu_{11}\sin\theta\cos\theta + \mu_{02}\cos^2\theta.$

Because

$\mu'_{20} + \mu'_{02} = \mu_{20} + \mu_{02},$

so

$\phi'_1 = \eta'_{20} + \eta'_{02} = \eta_{20} + \eta_{02} = \phi_1.$

From the above, the moment $\phi_1$ has rotation invariance in the discrete case. Similarly, the other moments also have rotation invariance in the discrete case.
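An analogous check for the rotation argument: rotating a toy point set about its centroid leaves mu20 + mu02, and hence phi1, unchanged. The point set and angle are arbitrary choices of ours.

```python
import numpy as np

def mu(points, p, q):
    """Central moment mu_pq of a point set."""
    x, y = points[:, 0], points[:, 1]
    return np.sum((x - x.mean())**p * (y - y.mean())**q)

pts = np.array([[3., 1.], [7., 2.], [9., 6.], [5., 9.], [2., 5.]])
theta = np.deg2rad(37)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# Rotate the points about their centroid.
rotated = (pts - pts.mean(axis=0)) @ R.T + pts.mean(axis=0)

assert np.isclose(mu(pts, 2, 0) + mu(pts, 0, 2),
                  mu(rotated, 2, 0) + mu(rotated, 0, 2))
print("mu20 + mu02 is unchanged by rotation")
```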
The scale invariance: If $k$ is the image's scale factor, $(x, y)$ the coordinate before scaling and $(x', y')$ the coordinate after scaling, then

$x' = kx, \qquad y' = ky, \qquad f'(x', y') = f(x, y).$

Similarly, we can obtain the formula for the contour pixels after scaling: the number of contour pixels grows roughly in proportion to $k$, so $\mu'_{00} \approx k\,\mu_{00}$, while each coordinate deviation is multiplied by $k$. From these formulas we can obtain $\mu'_{pq}$:

$\mu'_{pq} \approx k^{p+q+1}\,\mu_{pq}.$

Now we can obtain $\eta'_{pq}$:

$\eta'_{pq} = \frac{\mu'_{pq}}{(\mu'_{00})^{(p+q)/2+1}} \approx k^{(p+q)/2}\,\eta_{pq}.$

From the above, we find that the moment does not have scale invariance: it depends not only on $k$ but also on $p+q$ [5]. In theory the seven moments have scale invariance in the continuous case but not in the discrete case. Therefore a new contour moment is provided in this paper, which is called the relative contour moment invariant.
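The loss of scale invariance on discrete contours can also be seen empirically. The sketch below rasterizes the same shape (a disc) at three scales, computes phi1 from the contour pixels only, and shows that it grows with the scale factor; OpenCV is assumed, and the shape and sizes are our choices, not the paper's test images.

```python
import cv2
import numpy as np

def phi1_from_contour(binary_img):
    """phi1 = eta20 + eta02 computed on contour pixels only (discrete contour moments)."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=len).reshape(-1, 2).astype(float)
    x, y = pts[:, 0], pts[:, 1]
    mu00 = len(pts)
    mu20 = np.sum((x - x.mean())**2)
    mu02 = np.sum((y - y.mean())**2)
    return (mu20 + mu02) / mu00**2            # eta20 + eta02 with the area-style exponent

for radius in (30, 60, 120):                  # same shape at three scales
    img = np.zeros((400, 400), dtype=np.uint8)
    cv2.circle(img, (200, 200), radius, 255, thickness=-1)
    print(radius, round(phi1_from_contour(img), 4))
# phi1 roughly doubles each time the radius doubles: no scale invariance on discrete contours.
```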
Relative contour moment invariant
To address the above problem, $\phi_1, \phi_2, \ldots, \phi_7$ are combined again, and we obtain six relative contour moment invariants $\beta_1, \ldots, \beta_6$. The relative contour moment invariants have translation, rotation and scale invariance [9]. Object recognition can then be achieved by comparing these invariant values across images.
$\beta_1, \beta_2, \ldots, \beta_6$ have the translation, rotation and scale invariances. Here we only prove the scale invariance. If $k$ is the image's scale factor and $\beta'_1$ is the first relative contour moment invariant after scaling, the scale factors of the constituent moments cancel, so that $\beta'_1 = \beta_1$. Similarly, $\beta'_2 = \beta_2, \ldots, \beta'_6 = \beta_6$: the relative contour moment invariants have translation, rotation and scale invariance.
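The paper does not reproduce the explicit combinations used to form beta1–beta6. Purely as an illustration of how the scale factors can be made to cancel, the sketch below builds six scale-free ratios from phi1–phi7 using the discrete-contour scaling derived above; these hypothetical ratios should not be taken as the authors' actual definitions.

```python
import numpy as np

def relative_invariants(phi):
    """Six scale-free ratios built from seven Hu-style contour invariants.

    Illustrative only: under the discrete-contour scaling worked out above,
    phi1..phi7 pick up factors k, k^2, k^3, k^3, k^6, k^4, k^6 respectively,
    so dividing by the matching power of phi1 cancels the scale factor.
    The paper's actual beta1..beta6 combinations may differ.
    """
    p1, p2, p3, p4, p5, p6, p7 = phi
    return np.array([p2 / p1**2,
                     p3 / p1**3,
                     p4 / p1**3,
                     p5 / p1**6,
                     p6 / p1**4,
                     p7 / p1**6])

# Check that the scale factor cancels, using arbitrary placeholder values for phi.
phi = np.array([2.0, 3.0, 5.0, 7.0, 11.0, 13.0, 17.0])
k = 1.5
scaled = phi * k ** np.array([1, 2, 3, 3, 6, 4, 6])    # how each phi scales on a discrete contour
print(np.allclose(relative_invariants(phi), relative_invariants(scaled)))   # True
```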
OBJECT RECOGNITION USING SHAPE SPACE
Among the many cues proposed, such as color, texture, motion, context and function, shape is perhaps the most common and dominant. This work concerns shape representation and shape-based object recognition. Indeed, the recognition of many common objects in natural settings, such as cars and people in outdoor scenes, is still beyond the capability of current techniques and systems. The main difficulty, as pointed out by Ullman, lies in the tremendous view variability associated with the images of a given object. For example, depending on the viewing angle, the pictures of a car may look very different. As a result, an algorithm designed based on a single or a few views may not work on a picture of the object taken from a new view. So, the idea here is to represent objects as points on a high-dimensional surface (i.e., a manifold), called the shape space[7]. For example, in a shape space of 2D objects, all possible views of an object caused by translation, scaling and rotation are represented as a single point.
For invariant object recognition, the shape space approach has several advantages. First, it provides a complete, rather than a partial, object representation that is invariant to similarity transformations; it is also relatively insensitive to noise and occlusion. Second, through statistical shape analysis, classical statistical pattern recognition techniques can be extended to the non-Euclidean shape space [8]. Nevertheless, we believe it represents a new approach to invariant object recognition and is worthy of further investigation. In this work, the shape space approach is studied in the context of 2D object recognition; potentially, this approach can also be used for 3D object recognition. Suppose a 2D object is represented as a set of points on the plane, called landmarks [8]. For example, they may be perceptually salient points on the object boundary, such as high-curvature and extreme points. Let the landmarks be represented by a vector

$\mathbf{x} = (x_1, x_2, \ldots, x_n)^T,$

where $n$ is the total number of landmarks and $x_i$ is the position of the $i$th landmark, represented as a complex number. Then $\mathbf{x}$ is a point in $\mathbb{C}^n$, the $n$-dimensional complex space ($\mathbb{C}^n$ can be identified with $\mathbb{R}^{2n}$).
According to Kendall, the shape of $\mathbf{x}$ is what is left when the effects associated with translation, scaling and rotation are filtered away. To remove the effect of translation we let

$x_i \leftarrow x_i - \bar{x}, \qquad i = 1, \ldots, n,$

where

$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$

is the centroid of $x_1, x_2, \ldots, x_n$. Now $\mathbf{x}$ satisfies

$\sum_{i=1}^{n} x_i = 0.$

Hence $\mathbf{x}$ is a point on an $(n-1)$-dimensional complex hyperplane (isometric to $\mathbb{C}^{n-1}$) passing through the origin of $\mathbb{C}^n$, as illustrated in Fig. 1. Similarly, to remove the effect of rotation and scaling, we associate $\mathbf{x}$ with an equivalence class (a set)

$\mathbf{x}^* = \{\alpha \mathbf{x} : \alpha \in \mathbb{C}\},$

where $\mathbb{C}$ is the set of complex numbers. As $\alpha$ varies over $\mathbb{C}$, $\mathbf{x}^*$ covers all possible scalings and rotations of $\mathbf{x}$. Now $\mathbf{x}^*$ is the shape of $\mathbf{x}$.

Notice that $\mathbf{x}^*$, as illustrated in Fig. 1, represents a complex line passing through the origin and lying on the $(n-1)$-dimensional complex hyperplane. Therefore $\mathbf{x}^*$ is a point in a space isometric to $\mathbb{CP}^{n-2}$, the $(n-2)$-dimensional complex projective space. This is a smooth, curved, non-Euclidean space (i.e., a differentiable manifold), which we call the shape space. Its most important geometric property, for the purpose of object recognition, is perhaps the geodesic distance between two points. As shown by Kendall, the geodesic distance between two shapes $\mathbf{x}^*$ and $\mathbf{y}^*$ is

$d(\mathbf{x}^*, \mathbf{y}^*) = \arccos\frac{|\langle \mathbf{x}, \mathbf{y}\rangle|}{\|\mathbf{x}\|\,\|\mathbf{y}\|}.$
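The shape-space construction above translates almost directly into code: landmarks as complex numbers, centring to remove translation, and the arccos formula for the geodesic distance, which is insensitive to multiplication by a complex scalar (rotation and scaling). The helper names below are ours; this is a sketch, not the authors' implementation.

```python
import numpy as np

def preshape(landmarks):
    """Centre a complex landmark vector so that sum(x_i) = 0 (removes translation)."""
    z = np.asarray(landmarks, dtype=complex)
    return z - z.mean()

def shape_distance(z1, z2):
    """Geodesic distance between the shapes of two landmark vectors.

    Scaling and rotation act as multiplication by a complex number alpha,
    which the normalized inner product below is insensitive to.
    """
    a, b = preshape(z1), preshape(z2)
    c = abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

# A triangle and the same triangle translated, scaled and rotated: distance ~ 0.
tri = np.array([0 + 0j, 4 + 0j, 2 + 3j])
moved = tri * (1.7 * np.exp(1j * 0.6)) + (5 - 2j)
print(shape_distance(tri, moved))                              # ~0
print(shape_distance(tri, np.array([0j, 1 + 0j, 0.5 + 5j])))   # a different shape: > 0
```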
Fig. 1 Illustration of geometric aspect of definition of shape
RESULT ANALYSIS
The six relative contour moment invariants for the 10 different classes are given in Table 1.
Table 1. Relative contour moment invariants for the 10 object classes.

Class    | β1     | β2      | β3      | β4      | β5       | β6
Class-1  | 0.1951 | 1.6348  | 0.0184  | 2.3413  | 2.1546   | 1.1203
Class-2  | 0.0507 | 13.913  | 0.4069  | 6.1413  | 5.0957   | -10.115
Class-3  | 0.0001 | -7.5468 | 0.1389  | 4.4436  | 28.1239  | 6.81047
Class-4  | 0.0248 | 81.775  | 0.1044  | 7.3248  | 12.82734 | 21.4518
Class-5  | 0.0321 | -11.91  | 0.8233  | 11.0342 | 9.02333  | -14.2478
Class-6  | 0.0991 | -2.7907 | 0.3954  | 8.93140 | 3.66775  | -1.1112
Class-7  | 0.0361 | -24.139 | 0.07449 | 5.86676 | -0.12407 | -46.32458
Class-8  | 0.2002 | 0.3537  | 0.0058  | 3.31841 | -7.0378  | -0.63679
Class-9  | 0.3382 | 0.7245  | 0.9845  | 13.6685 | 28.12403 | -33.0391
Class-10 | 0.0143 | -265.0  | 0.0646  | 1.9761  | 26.0168  | 0.1188
For the object of class-1, the relative contour moment invariants were calculated for different scale, translation and rotation values, as given in Table 2.
Table 2. Relative contour moment invariants of the class-1 object under different translations (Δx, Δy), scale factors and rotation angles.

Δx | Δy | Scale | Rotation | β1     | β2     | β3     | β4     | β5     | β6
0  | 0  | 1     | 0        | 0.1951 | 1.6348 | 0.0184 | 2.3413 | 0.1546 | 1.1203
30 | 0  | 1     | 0        | 0.1951 | 1.6348 | 0.0184 | 2.3413 | 0.1546 | 1.1203
0  | 50 | 1     | 0        | 0.1951 | 1.6348 | 0.0184 | 2.3413 | 0.1546 | 1.1203
10 | 10 | 1     | 0        | 0.1951 | 1.6348 | 0.0184 | 2.3413 | 0.1546 | 1.1203
0  | 0  | 2     | 0        | 0.1888 | 1.6465 | 0.0173 | 2.3417 | 0.1669 | 1.0074
0  | 0  | 1     | 30       | 0.1988 | 1.5465 | 0.0273 | 2.5117 | 0.2869 | 1.1174
8  | 15 | 0.5   | 35       | 0.2391 | 1.5648 | 0.0987 | 2.2413 | 0.2346 | 1.1763
10 | 20 | 1.4   | 15       | 0.1745 | 1.6950 | 0.0140 | 2.3117 | 0.9023 | 1.2176
5  | 2  | 3     | 40       | 0.1788 | 1.6465 | 0.0073 | 2.3117 | 0.1869 | 1.3074
45 | 15 | 1     | 10       | 0.1951 | 1.6348 | 0.0184 | 2.3413 | 0.1546 | 1.1203
From the experimental results we can see that the relative contour moment invariants keep essentially the same values under these transformations, so the relative contour moment invariants are well suited to pattern recognition.
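The recognition step implied by these tables can be carried out as a simple nearest-prototype match in the invariant space. The sketch below stores a few class vectors from Table 1 and classifies the transformed class-1 object from the last row of Table 2; the Euclidean metric is our choice and is not specified in the paper.

```python
import numpy as np

# A few class prototypes taken from Table 1 (beta1..beta6 per class).
prototypes = {
    "Class-1": [0.1951, 1.6348, 0.0184, 2.3413, 2.1546, 1.1203],
    "Class-2": [0.0507, 13.913, 0.4069, 6.1413, 5.0957, -10.115],
    "Class-3": [0.0001, -7.5468, 0.1389, 4.4436, 28.1239, 6.81047],
}

def classify(test_vector):
    """Assign the class whose prototype is closest in Euclidean distance."""
    test = np.asarray(test_vector, dtype=float)
    return min(prototypes,
               key=lambda name: np.linalg.norm(test - np.asarray(prototypes[name])))

# The transformed class-1 object from the last row of Table 2 still matches Class-1.
print(classify([0.1951, 1.6348, 0.0184, 2.3413, 0.1546, 1.1203]))
```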
CONCLUSION
The discrete contour moment of an image is analysed, and it is proved in theory that the discrete contour moment does not have scale invariance. In view of this, the algorithm of relative contour moment invariants is put forward. The algorithm is based on the original seven contour moments, which are combined into six relative contour moment invariants. The six relative contour moment invariants have translation, rotation and scale invariance, so the accuracy of pattern recognition improves. Although computing the contour moment invariants still requires some computation, the contour pixels are far fewer than the pixels of the whole object area, so the amount of computation is greatly reduced and the speed of pattern recognition improves considerably.
Fig. 2 (a) and (b) Comparison between shape space and moment invariant method.

REFERENCES
[1] D. Zhang and G. Lu, "Review of shape representation and description techniques," Pattern Recognition, vol. 37, no. 1, pp. 1-19, 2004.
[2] S. O. Belkasim, M. Shridhar and M. Ahmadi, "Pattern recognition with moment invariants: a comparative study and new results," Pattern Recognition, vol. 24, 1991.
[3] M. K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. on Information Theory, vol. 8, no. 2, pp. 179-187, 1962.
[4] S. Dalai and D. Rana, "A new approach to object recognition for multi viewed objects," Proc. International Conference on Communication, Information & Computing Technology (ICCICT), pp. 1-4, 2012.
[5] Y. Abu-Mostafa and D. Psaltis, "Recognitive aspects of moment invariants," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 6, pp. 698-706, 1984.
[6] Y. Abu-Mostafa and D. Psaltis, "Image normalization by complex moments," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 7, pp. 46-55, 1985.
[7] Y. Amit, D. Geman and K. Wilder, "Joint induction of shape features and tree classifiers," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 11, pp. 1300-1305, Nov. 1997.
[8] S. Belongie, J. Malik and J. Puzicha, "Shape matching and object recognition using shape contexts," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, April 2002.
[9] J. Boyce and W. Hossack, "Moment invariants for pattern recognition," Pattern Recognition Letters, vol. 1, pp. 451-456, 1983.