- Open Access
- Authors : Nagarjun T N , Komal Rathod , Chandana V , Deepak V Kashyap
- Paper ID : IJERTV8IS030094
- Volume & Issue : Volume 08, Issue 03 (March – 2019)
- Published (First Online): 22-03-2019
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Object Recognition using Shape Features
Nagarjun T N, Komal Rathod, Chandana V, Deepak V Kashyap
Department of Computer Science and Engineering, BNM Institute of Technology,
Bangalore
Abstract: The shape of an object has always been a key attribute through which humans distinguish and identify objects. Object shape recognition deals with creating an automated, computer-based approach to correctly identify the type of object in an image or video. It is an important pre-processing step for many applications, ranging from recognizing a tiny cell under a microscope to outlier detection in an intelligent surveillance system. Although object shape recognition has seen considerable progress in recent years and many systems have been proposed, there is still significant scope for improvement: variation in object instances through pose and appearance, and environmental factors such as the degree of occlusion and lighting, tend to cause failures in existing systems. To combat these shortcomings, a system has been developed that detects the contour of the object, which serves as a major cue in the recognition process, and the objects whose contours are found are then labeled using a Support Vector Machine (SVM).
INTRODUCTION
Images have played an important role in human life, since vision is one of our most important senses. Images are everywhere, and a huge number of them are generated from different sources for various reasons. Huge strides in technological innovation in recent years have made computers and software so advanced that machines are entering areas once thought to belong exclusively to humans, one such field being image identification and recognition. In this day and age, the influence and impact of automated image cognition and recognition on modern society is widespread. It has become a critical component in research and has many important applications. The field borrows its power from the many recent advances in artificial intelligence and machine learning, among others. With image processing today, we go beyond the two-dimensional picture and look deeper at what is actually in the image. The computer does not merely see the image and the various components present inside it; it also processes, analyses and recognizes the different features and objects and performs useful operations on them. This is real computer vision.
An image is an artifact that depicts visual perception: a picture that has been created or copied and stored in electronic form. An image can be two-dimensional, such as a photograph or a screen display, or three-dimensional, such as a hologram or a statue. Digital images are electronic snapshots taken of a scene or scanned from documents such as photographs, manuscripts, printed texts and artwork. The digital image is sampled and mapped as a grid of dots or picture elements (pixels). Each pixel is assigned a tonal value (black, white, shades of grey or color), which is represented in binary code (zeros and ones). The binary digits ("bits") for each pixel are stored in sequence by a computer and are often reduced to a mathematical representation (compressed). The bits are then interpreted and read by the computer to produce an analog version for display or printing. The digital image is processed through a series of steps that include importing the image via image acquisition tools, analyzing and manipulating the image, and finally producing output, which can be an altered image or a report based on the image analysis.
This is followed by image identification, one of the key factors in scene understanding. It is still a challenge today to accurately determine an object against a background in which similarly shaped objects are present in large numbers. Hence individual categories of images are identified using the feature set, and the system is trained accordingly. Image identification in object models means that every object instance has a unique, unchanging identity. Recognition of the object is the final step; it employs concepts such as deep learning, neural networks and other supervised, unsupervised or semi-supervised machine learning algorithms to correctly recognize the objects.
EXISTING SYSTEM
Many object recognition models have been developed, but most of them surround an object with a rectangular bounding box, which introduces unwanted features and indirectly reduces the precision of object matching. The basic steps involved in image processing to identify and label the objects present in an image are:
- Edge detection: common techniques include the Sobel, Canny, Prewitt and Roberts operators, and fuzzy logic.
- Feature selection: wrapper, filter and embedded methods.
- Feature extraction: there are two types. Local features include Speeded Up Robust Features (SURF), KAZE, FREAK, Binary Robust Invariant Scalable Keypoints (BRISK), Scale Invariant Feature Transform (SIFT), Principal Component Analysis (PCA), Features from Accelerated Segment Test (FAST), Harris, Shi & Tomasi, blob, Hessian-Laplace and many more. Global features include shape matrices, invariant moments (Hu, Zernike), Histogram of Oriented Gradients (HOG), Co-HOG and many more.
- Object recognition: Artificial Neural Network (ANN), Convolutional Neural Network (CNN), Support Vector Machine (SVM), k-Nearest Neighbor (kNN) and others.
PROPOSED SYSTEM
Edge detection
Edge detection includes a variety of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. These discontinuities are abrupt changes in pixel intensity that characterize the boundaries of objects in a scene. Edge detection is a fundamental tool in image processing, machine vision and computer vision, particularly in the areas of feature detection and feature extraction. Applying an edge detection algorithm to an image can significantly reduce the amount of data that needs to be processed and may therefore filter out information regarded as less relevant, while preserving the important structural properties of the image. The edge detection step extracts the edges of the object from the image, and features are identified based on these edges. Many different edge detection methods exist today; a classical approach involves the use of operators such as the Canny, Sobel, Prewitt and Roberts operators. The two techniques used in the proposed system are the Canny and Sobel edge detection algorithms, combined in order to achieve better accuracy.
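As an illustrative sketch of this step (the paper's implementation is in MATLAB; OpenCV in Python is used here, the image path is a placeholder and the threshold values are illustrative):

```python
import cv2
import numpy as np

# Load a dataset image (placeholder path) and convert it to greyscale.
image = cv2.imread("dataset/simple/triangle.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Canny edge detection with illustrative hysteresis thresholds.
canny_edges = cv2.Canny(gray, 100, 200)

# Sobel edge detection: horizontal and vertical gradients are combined
# into a gradient magnitude map, then thresholded to a binary edge map.
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.sqrt(gx ** 2 + gy ** 2)
sobel_edges = np.uint8(magnitude > 0.25 * magnitude.max()) * 255

# Keep an edge if either operator detects it.
combined_edges = cv2.bitwise_or(canny_edges, sobel_edges)
```

Combining the two edge maps reflects the idea of using both detectors for better accuracy; the paper does not specify how the two outputs are merged, so the bitwise OR here is an assumption.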
Feature extraction
Features are information extracted from images in the form of numerical values, which are generally difficult for humans to interpret directly. The extracted features usually have a much lower dimension than the original image, which reduces the overhead of processing it. Two different types of features can be extracted: local features and global features. Geometric feature extraction can help solve recognition problems without much effort, and geometric cues serve as an important tool for identifying good features. The geometric feature cues may include length, breadth, height, diagonal length, base and radius; all of these geometric parameters are extracted. Along with these, SIFT features are extracted. The Scale Invariant Feature Transform (SIFT) is used to detect and describe local features in images. SIFT keypoints of objects are first extracted from a set of reference images and stored in a database. An object is recognized in a new image by individually comparing each feature from the new image to this database and finding matching features based on the Euclidean distance of their feature vectors. From the full set of matches, subsets of keypoints that agree on the object and its location, scale and orientation in the new image are identified, in order to filter out good matches.
In SIFT, the gradient magnitude and orientation at a pixel (x, y) of the Gaussian-smoothed image L are computed as

$m(x, y) = \sqrt{\big(L(x+1, y) - L(x-1, y)\big)^2 + \big(L(x, y+1) - L(x, y-1)\big)^2}$

$\theta(x, y) = \tan^{-1}\!\left(\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}\right)$

These formulas determine the gradient magnitude and orientation from which the keypoint orientations and SIFT descriptors are built.
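A minimal sketch of SIFT extraction and Euclidean-distance matching, assuming OpenCV's SIFT implementation and hypothetical image paths (Lowe's ratio test is used here as a standard way to keep good matches; the paper does not specify its filtering procedure):

```python
import cv2

# Hypothetical reference and query images (greyscale).
reference = cv2.imread("dataset/reference/apple.png", cv2.IMREAD_GRAYSCALE)
query = cv2.imread("dataset/test/scene.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute their 128-dimensional descriptors.
sift = cv2.SIFT_create()
kp_ref, desc_ref = sift.detectAndCompute(reference, None)
kp_query, desc_query = sift.detectAndCompute(query, None)

# Match descriptors by Euclidean (L2) distance; keep a match only when it is
# clearly closer than the second-best candidate (Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(desc_query, desc_ref, k=2)
good_matches = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]

print(f"{len(good_matches)} good SIFT matches found")
```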
SVM Implementation
Recognition can be done in several different ways; one of the most efficient algorithms is the Support Vector Machine (SVM), a supervised machine learning algorithm used here for the recognition of images. It is particularly effective on high-dimensional feature vectors.
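A minimal sketch of the classifier, assuming scikit-learn rather than the paper's MATLAB toolbox and using placeholder feature vectors (the 132-dimensional size and the linear kernel are assumptions, corresponding to four geometric features plus a 128-dimensional SIFT summary):

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: one row per image, one label per row.
X_train = np.random.rand(40, 132)
y_train = np.array(["apple"] * 20 + ["leaf"] * 20)

# Train a support vector classifier on the labelled feature vectors.
classifier = SVC(kernel="linear")
classifier.fit(X_train, y_train)

# Predict the label of an unseen image's feature vector.
X_test = np.random.rand(1, 132)
print(classifier.predict(X_test))
```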
Labelling of Images
After recognition, accurate identification is completed by labelling. Labelling objects in images plays a crucial role in many visual learning and recognition applications such as image retrieval, object detection and recognition. Manually creating object labels is time consuming and becomes impractical for a large image dataset. The proposed model uses a supervised learning algorithm for labelling objects in images; the final outcome is to identify and subsequently label the objects in the scene.
Fig. 1 Proposed System Data Flow Diagram
DATA SET
A dataset is a collection of data and is an integral part of the project, used to train the model and evaluate its efficacy. In this project 80% of the dataset is used for training and 20% for testing. A dataset consisting of about 250 different images was developed, divided into two categories: simple and complex. The simple set consists of two-dimensional black-and-white objects such as triangles, circles, etc. The complex set, which is divided into two groups (for example, apple and leaf, teddy bear and bells), consists of real, coloured images collected from various sources.
Fig 2. Image Dataset
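A minimal sketch of the 80/20 split described above, assuming the images are stored in per-class folders (a hypothetical layout) and using scikit-learn's splitter:

```python
from pathlib import Path
from sklearn.model_selection import train_test_split

# Collect (path, label) pairs from hypothetical per-class folders.
paths, labels = [], []
for class_dir in sorted(Path("dataset").iterdir()):
    if not class_dir.is_dir():
        continue
    for image_path in class_dir.glob("*.png"):
        paths.append(str(image_path))
        labels.append(class_dir.name)

# Hold out 20% of the images for testing, stratified by class label.
train_paths, test_paths, train_labels, test_labels = train_test_split(
    paths, labels, test_size=0.2, stratify=labels, random_state=0)
```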
IMPLEMENTATION
Each image from the dataset is subjected to several pre-processing steps to convert it into a binary form that is easy to process. The binary image is fed as input to the edge detectors, namely Canny and Sobel, for precise edge detection. The image is then passed to feature extraction, where first the geometric features, namely area, major axis length, minor axis length and perimeter, are extracted. For local feature extraction the Scale Invariant Feature Transform is employed. The feature vector obtained from the previous steps is provided as input to the Support Vector Machine (SVM). Once the images have been recognized, they are labeled accordingly.
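As an illustrative sketch of this feature pipeline (the paper's implementation is in MATLAB; OpenCV and NumPy are used here, and the aggregation of SIFT descriptors into a fixed-length vector by averaging is an assumption, since the paper does not specify it):

```python
import cv2
import numpy as np

def extract_feature_vector(image_path):
    """Build a fixed-length vector of geometric and SIFT-based features."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Pre-processing: binarize the image, then detect edges with Canny.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 100, 200)

    # Geometric features from the largest contour: area, perimeter and the
    # major/minor axis lengths of the fitted ellipse.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(largest)
    perimeter = cv2.arcLength(largest, True)
    _, axes, _ = cv2.fitEllipse(largest)
    major_axis, minor_axis = max(axes), min(axes)
    geometric = np.array([area, major_axis, minor_axis, perimeter])

    # Local features: mean of the 128-dimensional SIFT descriptors,
    # giving a fixed-length summary regardless of the keypoint count.
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:
        descriptors = np.zeros((1, 128))
    return np.concatenate([geometric, descriptors.mean(axis=0)])
```

The resulting vector combines the four geometric measurements with a 128-dimensional SIFT summary; this 132-dimensional layout is the assumption behind the placeholder sizes in the other sketches.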
Training
For training, two classes of image groups are created, for instance apple and leaf. This set of images is fed as input to the system, which constructs a feature vector that is a combination of the SIFT and geometric features extracted. The SVM in MATLAB then takes this feature vector and the specified class as input and returns an SVMModel as output.
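A minimal sketch of this training step, assuming scikit-learn in place of MATLAB's SVM, a hypothetical folder layout, and a stub standing in for the feature extraction sketched in the Implementation section:

```python
from pathlib import Path
import numpy as np
from sklearn.svm import SVC

def extract_feature_vector(image_path):
    # Stub for the geometric + SIFT extraction sketched earlier; it returns
    # a random 132-dimensional vector only so that this example is runnable.
    return np.random.rand(132)

# Build the labelled training set from two hypothetical class folders.
X, y = [], []
for class_name in ("apple", "leaf"):
    for image_path in Path(f"dataset/train/{class_name}").glob("*.png"):
        X.append(extract_feature_vector(image_path))
        y.append(class_name)

# Train the SVM; svm_model plays the role of MATLAB's SVMModel output.
svm_model = SVC(kernel="linear").fit(np.array(X), np.array(y))
```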
Testing
Testing is essential for verification and validation. It is carried out during the implementation phase to verify that the software behaves as intended by its designer, and again after the implementation phase is complete, when the later testing phase confirms that the system meets its requirements.
Fig 3. Outputs Observed
RESULTS
Result analysis is necessary to determine how effective the system is. The system is checked to see if accuracy is maintained when the training set of images is reduced to 75% and 70% while simultaneously the test set is increased to 25% and 30% respectively.
Fig 4. Result Analysis: percentage of detection (accuracy) for the 80-20, 75-25 and 70-30 training and testing set percentages
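A minimal sketch of this evaluation, assuming scikit-learn and placeholder feature vectors and labels (the paper's own evaluation code, written in MATLAB, is not given):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder data: roughly 250 feature vectors with two class labels.
X = np.random.rand(250, 132)
y = np.random.choice(["apple", "leaf"], size=250)

# Repeat the experiment for the 80-20, 75-25 and 70-30 splits.
for test_fraction in (0.20, 0.25, 0.30):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_fraction, random_state=0)
    model = SVC(kernel="linear").fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"train {1 - test_fraction:.0%} / test {test_fraction:.0%}: "
          f"accuracy {accuracy:.2f}")
```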
CONCLUSION
In this paper a system has been developed that successfully identifies the outline or contour of an object and correctly labels it. A host of techniques have been used for edge detection, feature extraction and object recognition, which together have proven to give the correct outline and labels for the objects in an uncluttered, clear image. The combination of the Canny and Sobel edge detection algorithms proves to be very effective in correctly identifying the edges. The same goes for the scale invariant feature transform, which provides an effective way of extracting the local features that serve as input to the support vector machine used to recognize the object. Along with the SIFT features, the extraction and use of geometric features only makes the system more robust. The system is tested over a varying range of training and testing set percentages and is shown to maintain good accuracy. It is tested on both simple and complex shaped images, and it successfully detects the contour and labels the objects in each case.
FUTURE ENHANCEMENT
The system that has been developed can be enhanced by incorporating cluttered scenes and by considering camouflaged images or images with occlusion. In a cluttered environment there are many objects located randomly in the image, and each has to be identified individually. In a camouflaged image the background merges with the foreground, making it hard to identify the contour of the object precisely and consequently toughening the task of recognizing it. The system can also be further enhanced to work on occluded images, where one of the objects in the image overlaps another; here the biggest challenge is reconstruction of the occluded part.