Object Identification from an Image Using GBCM Feature Extraction Method and Classification Using SVM

DOI : 10.17577/IJERTV2IS90381


Amit Thakur #, Prof. Avinash Dhole *

# Department of Computer Science & Engineering, Raipur Institute of Technology, Mandir Hasaud, Raipur, Chhattisgarh, INDIA

* Professor and Head, Department of Computer Science & Engineering, Raipur Institute of Technology, Mandir Hasaud, Raipur, Chhattisgarh, INDIA

Abstract:-

An image processing system treats images as two-dimensional signals and applies a number of image processing methods to them. The volume of digital images being generated is constantly increasing, so automatic systems are required to recognize the objects in these images. Such systems collect a number of features and specifications of an image, and the different features of an object are then used to identify the object within the image.

Image processing is among the rapidly growing technologies today, with applications in various aspects of business, and it also forms a core research area within the engineering and computer science disciplines. Its most common and effective approach is to retrieve textural features using various methods, but most of these methods do not yield sufficiently accurate features from the image. There is therefore a need for an effective and efficient method of feature extraction from images.

An image may also be considered to contain sub-images, typically referred to as regions of interest (ROIs), or simply regions. The foremost requirement for image processing is that the pictures be available in digitized form, that is, as arrays of finite-length binary words. For conversion, the given image is sampled on a discrete grid and every sample or pixel is quantized using a finite number of bits. The digitized image is then processed by a computer. To display a digital image, it is first converted back into an analog signal, which is scanned onto a display.

Keywords- object, grid-based color moments (GBCM), features, feature extraction, textural features, image processing, digital form, object identification.

  1. INTRODUCTION

Image processing is a way to modify or interpret existing pictures, such as photographs. Two principal applications of image processing are:

1. Improving image quality

2. Machine perception of visual information, as employed in robotics.

Working of image processing: to use image-processing methods, we first digitize a photograph or other image into an image file. Digital methods can then be applied to rearrange image parts, to enhance color separations, or to improve the quality of shading. One example of the application of image-processing methods is enhancing the quality of a picture. These techniques are used extensively in art applications that involve the retouching and rearranging of sections of pictures and other artwork. Similar methods are used to analyze satellite photographs of the Earth and photographs of galaxies.

2. Methods of Image Processing

        There are two methods available in Image Processing.

2.1 Analog Image Processing

Analog image processing refers to the alteration of an image through electrical means; the most common example is the television image. The television signal is a voltage level that varies in amplitude to represent brightness across the image. By electrically varying the signal, the appearance of the displayed image is altered. The brightness and contrast controls on a TV set adjust the amplitude and reference level of the video signal, leading to brightening, darkening, and alteration of the brightness range of the displayed image.

2.2 Digital Image Processing

In this case, digital computers are used to process the image. The image is converted to digital form using a scanner or analog-to-digital converter [6] and is then processed. Digital image processing is defined as subjecting numerical representations of objects to a series of operations in order to obtain a desired result. It starts with one image and produces a modified version of it; it is therefore a process that transforms one image into another.
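A minimal sketch of this idea in MATLAB (Image Processing Toolbox assumed; the file name 'sample.jpg' and the chosen operation are hypothetical illustrations, not details from the paper): one digitized image goes in, and a series of numerical operations produces a modified image.

```matlab
I = imread('sample.jpg');   % digitized image: an array of finite-length binary words
G = rgb2gray(I);            % numerical representation to be operated on
J = imadjust(G);            % example operation: stretch the gray-level contrast
imshow(J);                  % the display converts the result back to an analog signal
```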

    3. Machine Learning

Machine learning, a branch of computer science, concerns the development and study of systems that can learn from data. For example, a machine learning system could be trained on email messages to learn to distinguish between spam and non-spam messages. After learning, it can then be used to classify new email messages into spam and non-spam folders. There is a wide variety of machine learning tasks and successful applications. Optical character recognition, in which printed characters are recognized automatically based on previous examples, is a classic engineering example of machine learning. Machine learning can therefore be defined as a "field of study that gives computers the ability to learn without being explicitly programmed".

      3.1 Algorithm types for Machine Learning

Machine learning algorithms can be organized based on the desired outcome of the algorithm or on the type of input available during training of the machine.

• Supervised learning generates a function that maps inputs to desired outputs (also referred to as labels, because they are usually provided by human experts labeling the training examples). For example, in a classification problem, the learner approximates a function mapping a vector into classes by looking at input-output samples of the function (see the sketch after this list).

• Unsupervised learning models a set of inputs, as in clustering; see also data mining and knowledge discovery. Here, labels are not known during training.

      • Semi-supervised learning combines both labeled and unlabeled examples to generate an appropriate function or classifier. Transduction, or transductive inference, tries to predict new outputs on specific and fixed (test) cases from observed, specific (training) cases.

      • Reinforcement learning learns how to act given an observation of the world. Every action has some impact in the environment, and the environment provides feedback in

        the form of rewards that guides the learning algorithm.

      • Learning to learn learns its own inductive bias based on previous experience.
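The sketch below illustrates the supervised case in MATLAB (Statistics and Machine Learning Toolbox assumed); the data are randomly generated for illustration and are not from the paper.

```matlab
% Supervised learning: learn a function from labeled input-output examples.
X = [randn(20,2) + 1; randn(20,2) - 1];   % 40 training vectors with 2 features each
Y = [ones(20,1); -ones(20,1)];            % labels supplied by a "human expert"
mdl  = fitcsvm(X, Y);                     % learn a mapping from inputs to labels
yhat = predict(mdl, [0.8 1.2]);           % apply the learned function to a new input
```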

    4. Literature Review

The existing research shows that object identification in images raises new challenges and poses significant problems due to similarities and color moments in the images. Object recognition is gaining a great deal of importance nowadays. However, the area still lacks image processing analysis techniques and methods that could be used to improve the probability of identifying objects in images.

Hui Yu, Mingjing Li, Hong-Jiang Zhang and Jufu Feng [1] presented a paper on Color Texture Moments for Content-Based Image Retrieval. They calculated the first and second moments of color maps as a representation of the natural color image pixel distribution, resulting in a 48-dimensional feature vector. The feature is named color texture moments (CTM), which can also be regarded as a certain extension of color moments.
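As a rough illustration of the underlying idea of color moments (this is not the authors' exact CTM pipeline, which operates on transformed color maps rather than raw RGB channels), the first two moments of each channel can be computed as follows; the file name is hypothetical.

```matlab
I = im2double(imread('sample.jpg'));   % hypothetical input image
moments = zeros(1, 6);
for c = 1:3
    ch = I(:,:,c);
    moments(2*c-1) = mean(ch(:));      % first moment (mean) of channel c
    moments(2*c)   = std(ch(:));       % second moment (standard deviation) of channel c
end
```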

Noah Keen [3] presented a paper on color moments. He proposed a methodology to differentiate images based on their color features. Once the color moments have been calculated, they provide a measure of color similarity between images. These similarity values can then be compared to the values of images indexed in a database.

    5. Related Work

S. R. Kodituwakku and S. Selvarajah [9] presented a paper comparing color features for image retrieval. They discuss content-based image retrieval (CBIR) systems, which are used for automatic indexing, searching, retrieving and browsing of image databases. Color is one of the important features used in CBIR systems. A feature is a characteristic that can capture a certain visual property of an image, either globally for the entire image or locally for regions or objects. Color, texture and shape are commonly used features in CBIR systems. A key function in any CBIR system is feature extraction: mapping the image pixels into the feature space. Extracted features are used to represent images for searching, indexing and browsing images in an image database. Use of a feature space is more efficient in terms of storage and computation. Most CBIR systems represent the feature space as a feature vector. Once the features are represented as a vector, they can be used to determine the similarity between images, and CBIR systems use different techniques to measure that similarity.
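One common way to measure similarity between two such feature vectors is a simple distance in feature space; the sketch below uses the Euclidean distance, which is only one of the many measures a CBIR system may choose, and the feature values are placeholders.

```matlab
f1 = [0.42 0.18 0.55 0.11];   % hypothetical feature vector of image 1
f2 = [0.40 0.22 0.49 0.15];   % hypothetical feature vector of image 2
d  = norm(f1 - f2);           % Euclidean distance: smaller means more similar
```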

J. F. Dale Addison, Stefan Wermter and Garen Z. Arevian [6] presented a paper on a comparison of feature extraction and selection techniques. They applied several dimensionality reduction techniques to data modeling using neural network architectures for classification on a number of data sets, and considered a number of means of improving the classification accuracy of neural network models by reducing the dimensionality of the data.

In the present work we have focused on a number of feature extraction techniques applied to the training data sets; more techniques can be involved for more accurate results. There are many possibilities for modifying images: a number of filters can be applied to an image so that it can be supplied in the desired form or format. Image processing is a very large field and provides extensive avenues for research. The intention of the classification process is to divide the pixels of an image into particular, predefined themes; generally, multispectral images are used in the classification process.

    6. Problem Statement

Object identification systems for images have long been in increasing demand in various significant applications. Many dataset resources with rich sets of features have been systematically studied and employed in many systems. In spite of their widespread applications, these dataset resources and features suffer from the following main disadvantages:

      • Failure to match in low resolution images

• Failure for black & white and grayscale images

• The need for a rich dataset with a large number of features to obtain accurate results

For these reasons, innovative methods for recognizing objects in an image have become an urgent need for surveillance applications and have gained immense attention among computer vision researchers in recent years. In this modern era, more and more features and properties of an object are being included to make datasets richer, as required for accurate results, and this has turned out to be a popular research direction.

    7. Methodology

In the present work, features such as mean, median, skewness, autocorrelation, contrast, energy, entropy, homogeneity, sum variance, sum average, difference entropy, maximum probability, dissimilarity and cluster prominence are used as the basic feature vector of an object, which helps to recognize an object in an image.
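As a rough sketch of how a few of these statistics can be obtained in MATLAB (Image Processing and Statistics Toolboxes assumed; graycoprops only covers contrast, correlation, energy and homogeneity, so statistics such as sum variance or cluster prominence would need custom code, and the file name is hypothetical):

```matlab
I = imread('sample.jpg');                    % hypothetical input image
G = rgb2gray(I);
m  = mean(double(G(:)));                     % mean
md = median(double(G(:)));                   % median
sk = skewness(double(G(:)));                 % skewness (Statistics Toolbox)
glcm  = graycomatrix(G, 'Symmetric', true);  % gray-level co-occurrence matrix
stats = graycoprops(glcm, {'Contrast','Correlation','Energy','Homogeneity'});
ent   = entropy(G);                          % image entropy
featureVector = [m md sk stats.Contrast stats.Correlation ...
                 stats.Energy stats.Homogeneity ent];
```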

      Figure 1: Various stages of object recognition process

Figure 1 shows the various stages of the object recognition process. First we acquire the desired image, which is to be tested against the data sets created. Then we apply the method for extracting the features of the image; here we use the GBCM() method for feature extraction. The image is then classified with the help of a Support Vector Machine (SVM), and thus the object is recognized. In this project we have proposed a new method for solving the object recognition problem.
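A minimal sketch of grid-based color moment extraction, assuming the common interpretation of GBCM (the image is partitioned into a fixed grid and the mean, standard deviation and skewness of each color channel are computed per cell); the 3x3 grid size, the resize step and the file name are assumptions for illustration, not details taken from the paper.

```matlab
I = im2double(imread('sample.jpg'));          % hypothetical input image
I = imresize(I, [240 240]);                   % make the grid divide evenly (assumption)
nGrid = 3;  cellH = size(I,1)/nGrid;  cellW = size(I,2)/nGrid;
features = [];
for r = 1:nGrid
    for c = 1:nGrid
        block = I((r-1)*cellH+1 : r*cellH, (c-1)*cellW+1 : c*cellW, :);
        for ch = 1:3
            v = block(:,:,ch);  v = v(:);
            features = [features, mean(v), std(v), skewness(v)]; %#ok<AGROW>
        end
    end
end
% features is a 1 x 81 vector: 3 x 3 cells, 3 channels, 3 moments per channel.
```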

    8. Classification using SVM (Support Vector Machine)

The datasets are provided to the SVM classifier as input, and consequently the classified objects are produced as output. Image classification is an active area in the field of machine learning, in which algorithms map sets of input attributes or variables, a feature space X, to a set of labeled classes Y. These algorithms are known as classifiers. Basically, what a classifier does is assign a pre-defined class label to a sample.

Figure 2 shows a simple architecture of a classification system.

      Figure 2: Block Diagram of SVM

The SVM [9, 12, 19], introduced by Vladimir Vapnik, is an algorithm that has shown better performance in many domains than other standard machine learning techniques. The SVM is popular in image classification because it tries to find the optimal separating hyperplane between classes based on the training cases. How an SVM classifies data can be illustrated by a simple situation in which there are two linearly separable classes in d-dimensional space.
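For that linearly separable case, the standard hard-margin formulation (a textbook sketch, not reproduced from this paper) finds the hyperplane w·x + b = 0 that maximizes the margin between the two classes:

```latex
% Hard-margin linear SVM for training pairs (x_i, y_i) with y_i \in \{-1, +1\}:
\min_{w,\,b} \; \tfrac{1}{2}\lVert w \rVert^2
\quad \text{subject to} \quad
y_i \left( w \cdot x_i + b \right) \ge 1, \qquad i = 1, \dots, N.
% A new sample x is then classified by \operatorname{sign}(w \cdot x + b).
```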

    9. Implementation and Explanations

Object recognition is a technology that can be used to identify an object with the help of extracted features. In the present work, an object in an image can be classified into different classes depending upon the features extracted, and a machine learning methodology is used to recognize the object. Here a JPEG image is taken as input. Features based on color moments are then extracted with the help of the grid-based color moments method. The system is then trained with the help of SVM training; this training has been done on a one-vs-one basis. After that, with the help of SVM classification, the objects are classified into eight different classes.
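A minimal sketch of this training and classification step in MATLAB (Statistics and Machine Learning Toolbox assumed; fitcecoc with one-vs-one coding stands in for the SVM training described above, and the variable names trainFeatures, trainLabels and testFeature are hypothetical):

```matlab
% trainFeatures: N x d matrix of GBCM feature vectors; trainLabels: N x 1 class names.
t   = templateSVM('KernelFunction', 'linear');
mdl = fitcecoc(trainFeatures, trainLabels, 'Learners', t, 'Coding', 'onevsone');
predictedClass = predict(mdl, testFeature);   % testFeature: 1 x d vector for a new image
```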

      Figure 3: Image classification using the Support Vector Machine.

    10. Result and discussion

Twenty-five features have been extracted. Some of the important features have been graphed, such as:

• Autocorrelation

• Cluster shade

• Contrast

• Correlation

• Dissimilarity

• Energy

• Entropy

• Variance

Figure 4: Autocorrelation Feature Graph

      Figure 5: Cluster Shade Feature graph

      Figure 6: Contrast Feature Graph

Figure 7: Correlation Feature Graph

These graphs are plotted for 5 images, selected on a random basis. They have been plotted with the help of the plot() function in MATLAB. The X axis denotes the image number and the Y axis shows the value of the particular feature for each image. These 8 features were plotted arbitrarily; no particular criterion was used to select them.
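A minimal sketch of how such a graph can be produced with MATLAB's plot() function; the feature values below are placeholders, not the measured data from the paper.

```matlab
imageIndex = 1:5;                          % five randomly selected test images
autocorr   = [1.8 2.1 1.6 2.4 1.9];        % placeholder autocorrelation values
contrastV  = [0.35 0.42 0.28 0.50 0.31];   % placeholder contrast values
plot(imageIndex, autocorr, '-o', imageIndex, contrastV, '-s');
xlabel('Image number');  ylabel('Feature value');
legend('Autocorrelation', 'Contrast');
```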

S. No.   Class Name   Accuracy (in %)
1        Aeroplane    80
2        Bike         80
3        Bus          80
4        Car          60
5        Cat          80
6        Dog          80
7        Horse        60
8        Person       100

Here we have taken 5 sample images from each class, in *.jpeg/*.jpg format. We have input each image to the system and classified it against the trained data, and the outputs have been observed accordingly. We then manually assigned the value 1 to each correct output and 0 to each incorrect output.
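As a sketch of how the per-class accuracy follows from these manually assigned 0/1 outcomes (the outcome values shown are placeholders):

```matlab
outcomes = [1 1 0 1 1];                                  % 1 = correct, 0 = incorrect
classAccuracy = 100 * sum(outcomes) / numel(outcomes);   % 80 percent in this example
```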

Figure 8: Accuracy in percentage of different classes

Here the outcome for each test image has been recorded as a value of either 0 or 1, so that the contribution of each image to the result is clearly visible. Note that the images have been taken on a random basis, so the output ratio may vary for different input images. For the image samples taken here for testing purposes, the system produced more than 80% accurate results on average.

    11. Conclusion and future work

The work presented in this project has been carried out before, but the basic difference lies in the number of features extracted. That is the key issue addressed in this work in order to increase the accuracy and efficiency of the system in identifying or classifying objects in images. It thus sets the stage for further development in the field of object recognition. The complete work has been carried out in two phases: the first phase is the training phase and the second phase is the testing phase. The work can be extended by increasing the number of features in the datasets, so that it can serve as a good and reliable way of identifying objects.

References

1. Swain, M. and Ballard, D., Color Indexing, International Journal of Computer Vision, vol. 7, no. 1, pp. 11-32, 1991.

2. Mas Rina Mustaffa, Fatimah Ahmad, Rahmita Wirza O. K. Rahmat and Ramlan Mahmod, Content-Based Image Retrieval Based on Color-Spatial Features, Malaysian Journal of Computer Science, vol. 21, no. 1, 2008.

3. Jau-Ling Shih and Ling-Hwei Chen, Color Image Retrieval Based on Primitives of Color Moments, Hsinchu, Taiwan, R.O.C., May 2011.

4. David G. Lowe, Object Recognition from Local Scale-Invariant Features, Proc. of the International Conference on Computer Vision, Corfu, September 1999.

5. J. F. Dale Addison, Stefan Wermter and Garen Z. Arevian, A Comparison of Feature Extraction and Selection Techniques, International Journal of Computer Applications (0975-8887), vol. 9, no. 12, pp. 36-40, November 2010.

6. Noah Keen, Color Moments, February 10, 2005.

7. Christopher J. C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121-167, Kluwer Academic Publishers, Boston, 1998.

8. V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, 2nd edition, 2000.

9. S. V. N. Vishwanathan and M. Narasimha Murty, Geometric SVM: A Fast and Intuitive SVM Algorithm, Technical Report IISC-CSA-2001-14, Dept. of CSA, Indian Institute of Science, Bangalore, India, November 2001. Submitted to ICPR 2002.

10. J. Pradeep, E. Srinivasan and S. Himavathi, Diagonal Based Feature Extraction for Handwritten Character Recognition System Using Neural Network, IEEE, 2011.

ABOUT THE AUTHORS

Amit Thakur received the B.E. degree in Computer Science & Engineering from Pt. RSU, Raipur (C.G.), India, in 2006. He is currently pursuing the M.Tech. degree in Computer Science & Engineering from CSVTU, Bhilai (C.G.), India. He is currently working as an Assistant Professor in the Department of Computer Science & Engineering at BIT, Raipur (C.G.), India. His research areas include feature extraction, pattern recognition, and image processing.

Prof. Avinash Dhole is a Professor of Computer Science & Engineering and Head of the Computer Science & Engineering Department at Raipur Institute of Technology, Raipur (C.G.), India. He obtained his M.Tech. degree in Computer Science & Engineering from RCET, Bhilai, India, in 2005. He has published over 15 papers in various reputed national and international journals, conferences, and seminars. He also serves as faculty in Chhattisgarh Swami Vivekanand Technical University, Bhilai, India (a state government university). His areas of research include operating systems, editors and IDEs, information system design and development, software engineering, modelling and simulation, and operations research.
