Image Classification through Dynamic Hyper Graph Learning

DOI : 10.17577/IJERTV2IS110529


Ms. Govada Sahitya,
Dept. of ECE,
St. Ann's College of Engineering and Technology, Chirala.

Lakshmi Narayana, (Ph.D.),
Associate Professor,
Dept. of ECE, St. Ann's College of Engineering and Technology, Chirala.

    Abstract

Image classification has gained much attention in recent years for a variety of applications ranging from image processing and remote sensing to biomedical analysis. Many approaches have been proposed to categorize an image into one of a set of classes, with particular focus on graph-based learning. Hypergraph learning has been investigated, and open problems remain in how to generate hyperedges and how to handle the set of hyperedges within statistical learning theory; it also suffers from problems of loss minimization. In this paper, regularization of the loss minimization is addressed for existing as well as new images with a dynamic hypergraph learning approach, giving promising performance over conventional methods.

I. Introduction

The objective of image classification is to automatically categorize all pixels in an image into land-cover classes or information themes. This can be done in two ways: supervised methods, which use the statistics of previously labeled images, and unsupervised methods [8], which depend on the pixel intensities of the current image to derive a threshold value. Hypergraph learning offers two main advantages: first, compared to simple graph learning, the hypergraph structure is more reliable; second, in a simple graph a meaningful pairwise similarity measure does not always exist, whereas a hypergraph can capture relations among more than two data points.

A hypergraph is a generalized graph that contains hyperedges [1]. In a simple graph each edge connects exactly two vertices, but in a hypergraph each hyperedge connects an arbitrary number of vertices. Let V = {v1, v2, ..., vn} be the vertex set and E = {e1, e2, ..., em} the set of hyperedges defined on V; every hyperedge e ∈ E is a subset of V, for example e = {v1, v2, v3} ⊆ V. In learning, the weights of the hyperedges are set according to certain rules; for example, the weight of a hyperedge [4] can be calculated as the average or median of the pairwise affinities within the hyperedge. A hypergraph is then constructed whose hyperedges are generated from the images and their nearest neighbors.
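As a concrete illustration (a sketch assuming NumPy arrays and a Gaussian affinity, not code from the paper), the snippet below builds the incidence matrix of a small hypergraph and sets each hyperedge weight to the average pairwise affinity of the vertices it connects, one of the weighting rules mentioned above.

```python
import numpy as np
from itertools import combinations

def incidence_matrix(n_vertices, hyperedges):
    """H[v, e] = 1 if vertex v belongs to hyperedge e, else 0."""
    H = np.zeros((n_vertices, len(hyperedges)))
    for e_idx, edge in enumerate(hyperedges):
        for v in edge:
            H[v, e_idx] = 1.0
    return H

def hyperedge_weights(features, hyperedges, sigma=1.0):
    """Weight of each hyperedge = average pairwise Gaussian affinity
    of the vertices (images) it connects."""
    weights = []
    for edge in hyperedges:
        affinities = [np.exp(-np.linalg.norm(features[a] - features[b]) ** 2 / sigma ** 2)
                      for a, b in combinations(edge, 2)]
        weights.append(np.mean(affinities) if affinities else 0.0)
    return np.array(weights)

# toy example: 5 vertices, 2 hyperedges, 2-D features
X = np.random.rand(5, 2)
E = [(0, 1, 2), (2, 3, 4)]      # each hyperedge joins an arbitrary number of vertices
H = incidence_matrix(5, E)
w = hyperedge_weights(X, E)
```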

Fig 1: Hypergraph construction (hypergraph 1 and hypergraph 2 combined into the resulting hypergraph)

In dynamic hypergraph learning, features are first extracted from the images, and labels are available for several of them. A hypergraph is constructed from a set of hyperedges generated from each sample and its neighbors, and rules are defined on that hypergraph; the images are then classified according to those rules. For example, for two or three images we calculate the median, standard deviation, or average of their features and form rules from these statistics. Pixels that satisfy the same rules are grouped together, and the images are finally classified based on the rules, as illustrated in the sketch below.
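A minimal sketch of the neighbor-based hyperedge generation described above, assuming each image is already represented by a feature vector and that Euclidean distance is used; one hyperedge is produced per sample by linking it with its k nearest neighbors (k and the toy data are illustrative).

```python
import numpy as np

def knn_hyperedges(features, k=3):
    """One hyperedge per sample: the sample plus its k nearest neighbors
    (Euclidean distance in feature space)."""
    n = features.shape[0]
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    hyperedges = []
    for i in range(n):
        neighbors = np.argsort(dists[i])[:k + 1]   # includes the sample itself
        hyperedges.append(tuple(neighbors))
    return hyperedges

# usage: features extracted from the images (random placeholders here)
feats = np.random.rand(10, 8)
edges = knn_hyperedges(feats, k=3)
```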


Transductive learning methods use both labeled and unlabeled samples to improve classification performance over conventional methods. Many such methods have been proposed, for example graph-based learning and hypergraph learning.

Among these, hypergraph learning has achieved promising performance. For example, Agarwal et al. applied hypergraphs to clustering by using clique averaging to transform a hypergraph into a simple graph. Zass and Shashua adopted the hypergraph in image matching by using convex optimization. Tian et al. proposed a semi-supervised learning method called HyperPrior to classify gene-expression data by using biological knowledge. Wong and Lu proposed hypergraph-based 3-D object recognition. Bu et al. developed music recommendation by modeling the relationships among different entities, including music and users, through a hypergraph.

This paper presents an algorithm that simultaneously learns the labels of unlabeled samples and the weights of hyperedges, so that image classification performance can be significantly improved.

II. Existing methods

In the real world there are two classes of images: textured images [5] and non-textured images. Textured images contain repeated patterns but no particular shape or color, such as fabric or rocks, whereas non-textured images have definite shape and color, such as chairs, people, or mountains. Classifiers are defined on the trained images; when a new image matches one of the classifiers, we say the image has the defined texture or non-texture.

Fig 2: Image classes (textured and non-textured examples)

Many classification techniques have been developed, such as decision trees, K-nearest neighbors [6], and support vector machines [7]. Since image labeling is labor intensive and time consuming, image classification frequently suffers from insufficient training data. These problems can be overcome by transductive learning methods, among which graph-based learning achieves promising performance. However, graph-based learning [2] suffers from two problems: first, the similarity estimation between samples; second, it ignores higher-order relations. It is easy to find two close samples according to pairwise similarities, but it is not easy to decide whether three or more samples belong together. Both problems can be resolved by hypergraph learning.

In hypergraph learning, a set of vertices is connected by a hyperedge, and each hyperedge is assigned a weight according to certain rules; these weights help improve classification. Hypergraph learning algorithms include star expansion, clique expansion, clique averaging, Bolla's Laplacian, etc. [3]. Two problems must be addressed in hypergraph learning: first, how to generate the hyperedges, and second, how to handle the large set of hyperedges. To avoid these two problems we use dynamic hypergraph learning.

In dynamic hypergraph learning we generate hyperedges by linking images with their nearest neighbors, and the hypergraph construction is improved by varying the size of the neighborhood. In the proposed dynamic hypergraph learning, a principled approach is defined to regularize the loss minimization based on statistical learning theory. In this paper, the regularized loss minimization is addressed with dynamic learning together with statistical learning theory, as sketched below.
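Although the paper does not write out the objective, transductive hypergraph methods of this kind (e.g. [2]) typically minimize a hypergraph smoothness term plus a squared loss to the known labels, which has the closed-form solution F = λ(L + λI)⁻¹Y. The sketch below shows that standard formulation; the function names and the choice of λ are assumptions, not the paper's exact procedure.

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian L = I - Dv^-1/2 H W De^-1 H^T Dv^-1/2
    (assumes every vertex lies in at least one hyperedge)."""
    dv = H @ w                      # vertex degrees (weighted)
    de = H.sum(axis=0)              # hyperedge degrees (vertices per edge)
    Dv_isqrt = np.diag(1.0 / np.sqrt(dv))
    Theta = Dv_isqrt @ H @ np.diag(w) @ np.diag(1.0 / de) @ H.T @ Dv_isqrt
    return np.eye(H.shape[0]) - Theta

def transductive_labels(H, w, Y, lam=1.0):
    """Minimize tr(F^T L F) + lam * ||F - Y||^2, i.e. F = lam * (L + lam*I)^-1 Y."""
    L = hypergraph_laplacian(H, w)
    return lam * np.linalg.solve(L + lam * np.eye(L.shape[0]), Y)
```

Dynamic or adaptive variants additionally update the hyperedge weights w while solving for the labels F.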

III. Proposed method

In dynamic hypergraph learning, the input image is first converted to grayscale. Different rules are formulated based on the standard deviation of all the pixels of the image. The pixels that pass a rule are classified into one group; likewise, different groups are formed, each corresponding to a different hyperedge. The pixels connected through the hyperedges form a hypergraph. The pixels that are common to all the hypergraphs [9] are listed for the generation of the new hypergraph, and the resulting hypergraph is the output that classifies the image; a rough sketch of this grouping follows.
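A rough sketch of this grouping, under the assumption that the rules are simple thresholds built from the mean and standard deviation of the grayscale image; the specific thresholds are illustrative, not the paper's.

```python
import numpy as np

def std_rule_hyperedges(gray):
    """Group pixel indices into hyperedges using thresholds derived from the
    standard deviation of the whole grayscale image (illustrative rules)."""
    flat = gray.astype(float).ravel()
    mu, sigma = flat.mean(), flat.std()
    rules = [
        lambda p: p >= mu - sigma,               # rule 1: not far below the mean
        lambda p: p <= mu + sigma,               # rule 2: not far above the mean
        lambda p: np.abs(p - mu) <= 2 * sigma,   # rule 3: within two std of the mean
    ]
    # each rule yields one group of pixels, i.e. one hyperedge
    return [set(np.flatnonzero(rule(flat))) for rule in rules]

def resulting_hyperedge(hyperedges):
    """Pixels common to all hyperedges form the new (resulting) hyperedge."""
    return set.intersection(*hyperedges)

# usage with a random stand-in for a grayscale image
img = np.random.randint(0, 256, size=(64, 64))
edges = std_rule_hyperedges(img)
core = resulting_hyperedge(edges)   # pixels that pass every rule
```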

The following figure (3.a) shows the block diagram for rules generation.

Fig (3.a): Rules generation (Input image → Feature extraction → Rules generation → Rule database)

In the above figure, features are extracted from the input image. The extracted features are then fed to the rules generation process, which generates the rules. These newly generated rules are stored in the rule base for further processing of the image.

The following figure shows the block diagram for dynamic hyper graph learning.

Fig (3.b): Dynamic hypergraph learning (Feature extraction → Labeled points → Hypergraph → Dynamic hypergraph learning → Classified image)

In the above figure, features are extracted from the input image, labeled points are assigned to those features, and a hypergraph is constructed from them; finally, the classified image is obtained.

Table 1: Rule Base

Rule      Condition                                            Value
Rule 1    classres > maxclassifier                             Not Matched
Rule 2    classres < minclassifier                             Not Matched
Rule 3    classres >= avgstd && classres <= maxclassifier      Matched
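Read as code, Table 1 amounts to the small decision procedure below; a minimal sketch, assuming classres is the score computed for the query image and maxclassifier, minclassifier, and avgstd are the statistics derived from the training images (names taken from the table).

```python
def apply_rule_base(classres, maxclassifier, minclassifier, avgstd):
    """Evaluate a query score against the rule base of Table 1."""
    if classres > maxclassifier:                # Rule 1
        return "Not Matched"
    if classres < minclassifier:                # Rule 2
        return "Not Matched"
    if avgstd <= classres <= maxclassifier:     # Rule 3
        return "Matched"
    return "Not Matched"                        # scores between minclassifier and avgstd
```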

V. Results and Discussion

The result below shows four different images of a tiger in the forest. The features are extracted from the four images after converting them into grayscale images and are stored for analysis. The image to be classified is also processed to obtain its features. Once the features of the image to be classified are collected, they are compared with all the existing features against a threshold, and some predefined rules are applied to classify it. Thus, if the input image matches any one of the image classifiers, we say a "Match" occurred; otherwise the result is "Not Matched".

Fig 4: Classification result for forest images (classified images at vertices 1-4, feature values vs. classes, and the image to be classified)

The rule base is prepared and used to classify the input image. The rules are constructed based on the analysis of the features extracted from different images of the same class.

    IV. Algorithm


1. Convert RGB to gray.

2. Calculate the standard deviation of each image.

Calculate the maximum, minimum, and threshold values from these standard deviations:

maxclassifier = max_i σ_i,  minclassifier = min_i σ_i,  avgstd = (1/K) Σ_{i=1..K} σ_i,

where σ_i is the standard deviation of the i-th training image and K is the number of training images.

3. Construct the adjacency matrix and use it to find the hypergraph.

4. Generate rules as shown in Table 1.

5. Finally, classify the images (a sketch of the full pipeline is given below).

Fig 5: The edge details of the existing method (weight profiles of hyperedges 1-4 and the hypergraph for the animal images)

The feature vector for the existing method is [472, 420, 393, 353].
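Putting the algorithm together, a minimal end-to-end sketch; the BT.601 grayscale weights, the use of the per-image standard deviation as the classification score, and the inline Rule 3 check from Table 1 are assumptions consistent with the steps above rather than the paper's exact code.

```python
import numpy as np

def rgb_to_gray(img):
    """Step 1: RGB to gray (BT.601 luma weights, a common choice)."""
    return img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114

def classify_image(train_rgb_images, query_rgb_image):
    # Step 2: standard deviation of every training image
    sigmas = np.array([rgb_to_gray(im).std() for im in train_rgb_images])
    # maximum, minimum and threshold derived from the standard deviations
    maxclassifier, minclassifier, avgstd = sigmas.max(), sigmas.min(), sigmas.mean()
    # Steps 3-4 (hypergraph construction and rule generation) are sketched earlier;
    # here we go straight to the final rule check of Table 1 on the query score.
    classres = rgb_to_gray(query_rgb_image).std()
    if classres > maxclassifier or classres < minclassifier:
        return "Not Matched"
    return "Matched" if avgstd <= classres <= maxclassifier else "Not Matched"

# usage with random stand-ins for images
train = [np.random.rand(32, 32, 3) for _ in range(4)]
query = np.random.rand(32, 32, 3)
print(classify_image(train, query))
```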


Fig 6: The edge details for the proposed method (weight profiles of hyperedges 1-4 and the hypergraph for the animal images)

The feature vector for the proposed method is [457, 408, 385, 343]. From this we can see that misclassification is reduced in the proposed method and hence the performance is improved.

Fig 7: The edge weight vs. vertices for the existing and proposed methods (V1-V4)

Fig 9: Moon edges (moon1, moon2, moon3, and moon4)

Fig 8: Classification result for the moon (classified images at vertices 1-4, feature values vs. classes, and the image to be classified)

Here, four different images of the moon, collected at different times, are used. The features are extracted from the four images after converting them into grayscale images and are stored for analysis. The image to be classified is also processed to obtain its features, which are compared with the already existing features against a threshold, and the predefined rules are applied to classify it. Thus, if the input image matches any one of the image classifiers, we say a "Match" occurred; otherwise the result is "Not Matched".

Fig 10: The edge details for the existing method (weight profiles of hyperedges 1-4 and the hypergraph for the moon images)

The feature vector for the existing method is [157, 228, 416, 238].

Fig 11: The edge details for the proposed method (weight profiles of hyperedges 1-4 and the hypergraph for the moon images)

The feature vector for the proposed method is fv = [155, 224, 380, 224]. From this we can see that misclassification is reduced in the proposed method and hence the performance is improved.

Fig 12: The edge weight vs. vertices (existing vs. proposed feature vectors, V1-V4)

VI. Conclusion

The objective of this work is to define classifiers and also to verify new images for analysis using the dynamic hypergraph learning technique. This technique gives promising performance over conventional methods. The proposed method not only investigates a robust hyperedge construction, but also presents simultaneous learning of the labels of unlabeled images and the weights of hyperedges.

VII. References

1. Ze Tian, TaeHyun Hwang, and Rui Kuang, "A hypergraph-based learning algorithm for classifying arrayCGH data with spatial prior," IEEE International Workshop on Genomic Signal Processing and Statistics (GENSIPS 2009), May 2009, pp. 1-4.

2. Jun Yu, Dacheng Tao, and Meng Wang, "Adaptive hypergraph learning and its application in image classification," IEEE Transactions on Image Processing, vol. 21, no. 7, July 2012, pp. 3262-3272.

3. Yuchi Huang, "Hypergraph based visual categorization and segmentation," October 2010.

4. Li Pu and Boi Faltings, "Hypergraph learning with hyperedge expansion," Artificial Intelligence Laboratory, Ecole Polytechnique Federale de Lausanne.

5. Robert M. Haralick and K. Shanmugam, "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, November 1973, pp. 610-621.

6. J. A. Hartigan and M. A. Wong, "Algorithm AS 136: A k-means clustering algorithm," Applied Statistics, vol. 28, pp. 100-108, 1979.

7. Christopher J. C. Burges, "A tutorial on support vector machines for pattern recognition," Kluwer Academic Publishers, Boston, pp. 1-43.

8. Yuchi Huang, "Hypergraph based visual categorization and segmentation," New Brunswick, New Jersey, October 2010.

9. Y. Huang, Q. Liu, S. Zhang, and D. Metaxas, "Image retrieval via probabilistic hypergraph ranking," in Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, San Francisco, CA, 2010, pp. 3376-3383.
