- Open Access
- Authors : Shital S. Jadhav, Dr. Sonal P. Patil
- Paper ID : IJERTV11IS100009
- Volume & Issue : Volume 11, Issue 10 (October 2022)
- Published (First Online): 11-10-2022
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Content Based Image Retrieval Using Multiple Features
Ms. Shital S. Jadhav, Assistant Professor
Computer Science and Engineering
G H Raisoni Institute of Engineering & Business Management, Jalgaon, India
Dr. Sonal P. Patil, Assistant Professor
Computer Science and Engineering
G H Raisoni Institute of Engineering & Business Management, Jalgaon, India
Abstract: Since the last decade, Content-Based Image Retrieval (CBIR) has been a hot research topic. Computational complexity and retrieval accuracy are the main problems that CBIR systems must address. To address these problems, this paper proposes a new content-based image retrieval method that uses color, texture, and edge direction features. Color features are fundamental characteristics of image content and are among the most widely used low-level features. Texture provides measures of properties such as smoothness, coarseness, and regularity. The edge of the image is another important feature that represents the content of the image. Using color, texture, and edge direction features together to describe the image and retrieve with them is more accurate than using any one of them alone.
Keywords: Content-Based Image Retrieval (CBIR), Color Moment, Texture, Local Binary Pattern (LBP), Edge Histogram
INTRODUCTION
Use of the World Wide Web (WWW) and the internet is increasing exponentially, and with it the amount of digital image data accessible to users. Huge numbers of images are added to databases every minute, and so grows the need for effective and efficient image retrieval systems. There are many features used in content-based image retrieval, but four of them are considered the main features: color, texture, shape, and spatial properties. Spatial properties, however, are implicitly taken into account, so the main features to investigate are color, texture, and shape [1].
Content Based Image Retrieval (CBIR) is the retrieval of images based on their visual features such as color, texture, and shape. Content-based image retrieval systems have become a reliable tool for many image database applications. There are several advantages of image retrieval techniques compared to other simple retrieval approaches such as text-based retrieval techniques. CBIR provides a solution for many types of image information management systems such as medical imagery, criminology, and satellite imagery. In this computer age, virtually all spheres of human life including commerce, government, academics, hospitals, crime prevention, surveillance, engineering, architecture, journalism, fashion and graphic design, and historical research use images for efficient services. A large collection of images is referred to as an image database. An image database is a system where image data are integrated and stored. Image data include the raw images and information extracted from images by automated or computer-assisted image analysis [2].
A typical CBIR system uses the contents of an image to represent and access it. CBIR systems extract features (color, texture, and shape) from images in the database based on the values of the image pixels. These features are smaller than the image itself and are stored in a database called the feature database. Thus the feature database contains an abstraction (compact form) of the images in the image database; each image is represented by a compact representation of its contents (color, texture, shape, and spatial information) in the form of a fixed-length, real-valued, multicomponent feature vector or signature. This is called offline feature extraction. The main advantage of a CBIR system is that it uses image features instead of the image itself, so CBIR is cheaper, faster, and more efficient than searching over the raw images [2].
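As a rough illustration of this offline step, the sketch below builds such a feature database. It assumes a hypothetical `extract_features` function that stands in for the color, texture, and edge extractors described in later sections; the names and structure are illustrative, not part of the original system.

```python
import numpy as np

def extract_features(image):
    """Placeholder for the color/texture/edge extractors described later.
    It must return a fixed-length, real-valued signature for one image."""
    raise NotImplementedError

def build_feature_database(images):
    """Offline step: compute and store one compact signature per database
    image, so that later queries compare signatures instead of raw pixels."""
    return {name: np.asarray(extract_features(img), dtype=float)
            for name, img in images.items()}
```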
A key component of the CBIR system is feature extraction. A feature is a characteristic that can capture a certain visual property of the image. One of the key issues with any kind of image processing is the need to extract useful information from the raw data (such as recognizing the presence of particular shapes or textures) before any kind of reasoning about the image's contents is possible [2].
All CBIR systems view the query image and the target images as a collection of features. These features, or image signatures, characterize the content of the image. The advantages of using image features instead of the original image pixels appear in image representation and in comparison for retrieval. Matching on image features effectively compresses the image and uses only its most important content. This also helps bridge the gap between the semantic meaning of the image and its pixel representation [2].
Early studies on CBIR used a single visual content such as color, texture, or shape to describe the image. The drawback of this approach is that one feature is not enough to describe the image, since the image contains various visual characteristics [2]. This paper proposes to extract color, texture, and edge direction features from the image.
Retrieval Based On Color
Several methods for retrieving images on the basis of color similarity are in use. Each image added to the database is analyzed and a color histogram is computed, which shows the proportion of pixels of each color within the image. This color histogram is then stored in the database for each image. At search time, the user can either specify the desired proportion of each color or submit a reference image from which a color histogram is calculated. The matching process then retrieves those images whose color histograms match that of the query most closely [3].
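A minimal sketch of this histogram-based matching is given below, assuming OpenCV and an 8x8x8-bin histogram; the bin counts and the intersection measure are illustrative choices, not those of the cited work.

```python
import cv2
import numpy as np

def color_histogram(image_bgr, bins=(8, 8, 8)):
    """Normalized 3-D color histogram: the proportion of pixels that fall
    into each color bin, as described above."""
    hist = cv2.calcHist([image_bgr], [0, 1, 2], None, list(bins),
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def rank_by_histogram(query_hist, db_hists):
    """Rank database images by histogram intersection with the query
    (higher intersection = closer match)."""
    scores = {name: cv2.compareHist(np.float32(query_hist), np.float32(h),
                                    cv2.HISTCMP_INTERSECT)
              for name, h in db_hists.items()}
    return sorted(scores, key=scores.get, reverse=True)
```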
Retrieval Based On Texture
The ability to match on texture similarity can often be useful in distinguishing between areas of images with similar color. A variety of techniques have been used for measuring texture similarity. Essentially, these calculate the relative brightness of selected pairs of pixels from each image. From these it is possible to calculate measures of image texture such as the degree of contrast, coarseness, directionality and regularity, or periodicity, directionality and randomness. Other methods of texture analysis for retrieval include the use of Gabor filters and fractals. Texture queries can be formulated in a similar manner to color queries, by selecting examples of desired textures from a palette or by supplying an example query image [3].
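As one hedged example of the Gabor-filter approach mentioned above, the sketch below computes the mean and standard deviation of filter responses at four orientations. The kernel parameters are illustrative assumptions and would need tuning for a real system.

```python
import cv2
import numpy as np

def gabor_texture_descriptor(gray, thetas_deg=(0, 45, 90, 135),
                             ksize=31, sigma=4.0, lambd=10.0, gamma=0.5):
    """Mean and standard deviation of Gabor filter responses at several
    orientations: a simple directionality-sensitive texture signature."""
    feats = []
    for theta in thetas_deg:
        kernel = cv2.getGaborKernel((ksize, ksize), sigma,
                                    np.deg2rad(theta), lambd, gamma)
        response = cv2.filter2D(np.float32(gray), cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.std()])
    return np.array(feats)
```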
Retrieval Based On Edge Direction Feature
The edge of the image is another important feature that represents the content of the image. The human visual system is sensitive to edge features. In MPEG-7, there is a descriptor for edge distribution in the image. The edge histogram descriptor proposed for MPEG-7 consists only of the local edge distribution in the image. That is, since it is important to keep the size of the histogram as small as possible for efficient storage of the metadata, the normative edge histogram for MPEG-7 is designed to contain only the local edge distribution [4].
REVIEW OF RELATED WORKS
Content based image retrieval for general-purpose image databases is a highly challenging problem because of the large size of the database, the difficulty of understanding images, both by people and computers, the difficulty of formulating a query, and the issue of evaluating results properly. A number of general-purpose image search engines have been developed.
The concept of content-based image retrieval was first used by Kato to describe his experiments on retrieving images from a database using color and shape features. Since then, the term CBIR has been used widely for the process of retrieving images from a large collection based on features (colour, shape, and texture) that form the signature of the image.
CBIR WITH MULTIPLE FEATURES
CBIR is the mainstay of current image retrieval systems. Content Based Image Retrieval (CBIR) is an emerging and developing trend in digital image processing. CBIR is used to search for and retrieve images matching a query image from large databases. In CBIR, each image stored in the database has its features extracted and compared to the features of the query image. It is the retrieval of images based on visual features such as color, texture, and shape. Some CBIR methods use color and texture features, some use color and edge direction features, and some add a shape feature as well. There are different methods for each feature: for color, some use the color moment method and some the color histogram method; for texture, the ranklet transform and Local Binary Pattern methods are used; for the edge direction feature, the Edge Histogram Descriptor (EHD), which applies a set of directional edge detectors, is the most widely used.
A. Color Feature
Color features are fundamental characteristics of the content of images [2]. Color is one of the most widely used low-level features. Compared with shape and texture features, the color feature shows better stability and is less sensitive to rotation and scaling of the image.
Color not only adds beauty to objects but also carries information, which is used as a powerful tool in content-based image retrieval [1]. Color is the sensation caused by light as it interacts with our eyes and brain. Human eyes are sensitive to colors, and color features enable humans to distinguish between objects in images. Colors are used in image processing because they provide powerful descriptors that can be used to identify and extract objects from a scene. Color features sometimes provide powerful information about images, and they are very useful for image retrieval. Many methods can be used to describe the color feature: the color histogram, color correlation, color moments, color structure descriptor (CSD), and scalable color descriptor (SCD). This paper uses the color moment method because it has the lowest feature vector dimension and low computational complexity [10].
To extract color features from the content of an image, we need to select a color space and use its properties in the extraction. Commonly, colors are defined in a three-dimensional color space. For digital images, the RGB color space is the most prevalent choice. The main drawback of the RGB color space is that it is perceptually non-uniform and device dependent. The HSV color space is an intuitive system, which describes a specific color by its hue, saturation, and brightness values. First the input image is converted to the HSV color space, and then the color moments are calculated. This color system is very useful in interactive color selection and manipulation [10]. The first-order (mean), second-order (standard deviation), and third-order (skewness) color moments have been proved to be efficient and effective in representing the color distributions of images.
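A minimal sketch of this color moment extraction follows, assuming OpenCV for the BGR-to-HSV conversion; it yields a 9-dimensional vector (three moments per HSV channel).

```python
import cv2
import numpy as np

def color_moments(image_bgr):
    """First three color moments (mean, standard deviation, skewness)
    of each HSV channel, giving a 9-dimensional color feature vector."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    feats = []
    for channel in cv2.split(hsv):
        mean = channel.mean()
        std = channel.std()
        third = ((channel - mean) ** 3).mean()            # third central moment
        skew = np.sign(third) * np.abs(third) ** (1 / 3)  # signed cube root
        feats.extend([mean, std, skew])
    return np.array(feats)  # shape (9,)
```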
B. Texture Feature
Texture provides measures of properties such as smoothness, coarseness, and regularity. Furthermore, texture can be thought of as repeated patterns of pixels [4]. Texture is an innate property of all surfaces that describes visual patterns, each having homogeneity. It contains important information about the structural arrangement of a surface, such as clouds, leaves, bricks, or fabric. It also describes the relationship of the surface to the surrounding environment. In short, it is a feature that describes the distinctive physical composition of a surface [5]. To extract the texture feature, the Local Binary Pattern (LBP) is found to be a powerful descriptor.
The local binary pattern operator is an image operator which transforms an image into an array or image of integer labels describing small-scale appearance of the image. These labels or their statistics, most commonly the histogram, are then used for further image analysis. It is an operator for image description that is based on the signs of differences of neighboring pixels. Despite being simple, it is very descriptive, which is attested by the wide variety of different tasks it has been successfully applied to. The LBP histogram has proven to be a widely applicable image feature for, e.g. texture classification, face analysis, video background subtraction, etc. A possible drawback of the LBP operator is that the thresholding operation in comparing the neighboring pixels could make it sensitive to noise. Practical experiments with images of good quality have not supported this argument but under difficult conditions or with images taken with noisy special cameras, noise might present a problem to the traditional LBP operator[11].
Local Binary Pattern (LBP) is a feature extraction technique that gives satisfactory results in various computer vision applications. The LBP operator forms a label for each image pixel by thresholding the 3 × 3 neighborhood of the pixel with the center value and treating the result as a binary number. With eight neighbors there are 256 possible labels (values 0 to 255), and the histogram of these 256 labels is used as the texture descriptor. The operator can be extended to neighborhoods of different shapes [5]: a circular neighborhood with bilinear interpolation allows any radius (R) and number of sampling points (P) around a center pixel.
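The following is a plain-NumPy sketch of the basic 3 × 3 LBP histogram described above (not an optimized or extended circular-neighborhood implementation):

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 3x3 LBP: compare each pixel's 8 neighbours with the centre,
    pack the comparison bits into an 8-bit label (0-255), and return the
    normalized 256-bin histogram used as the texture descriptor."""
    g = np.asarray(gray, dtype=np.int32)
    centre = g[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    labels = np.zeros_like(centre)
    h, w = g.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        labels |= (neighbour >= centre).astype(np.int32) << bit
    hist, _ = np.histogram(labels, bins=256, range=(0, 256))
    return hist / hist.sum()
```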
C. Edge Direction Feature
The edge of the image is another important feature that represents the content of the image. The human visual system is sensitive to edge features [4]. One way of representing such an important edge feature is to use a histogram. An edge histogram in the image space represents the frequency and the directionality of the brightness changes in the image. It is a unique feature for images, which cannot be duplicated by a color histogram or by homogeneous texture features. The edge extraction scheme should be based on the image-block, rather than the pixel, as the basic unit for edge extraction. That is, to extract directional edge features, small square image-blocks are defined in each sub-image [11]. Specifically, the image space is divided into non-overlapping square image-blocks and the edge information is extracted from them. Regardless of the image size, the image space is divided into a fixed number of image-blocks; fixing the number of image-blocks copes with the different sizes (resolutions) of the images [5].
A simple way to extract an edge feature in an image-block is to apply digital filters in the spatial domain. To this end, the image-block is first divided into four sub-blocks, which are assigned labels 0 to 3. To represent this feature, MPEG-7 defines a descriptor for edge distribution in the image [11]. The Edge Histogram Descriptor (EHD) is the MPEG-7 standard descriptor that represents the edge character of an image. Edges in the image are categorized into five types: vertical, horizontal, 45-degree diagonal, 135-degree diagonal, and non-directional edges (shown in Fig. 1).
Each image-block is classified into one of the five edge categories or as a non-edge block. The method first divides the image space into 4×4 sub-images; each sub-image is further divided into equally sized image-blocks, and each image-block is treated as a 2×2 super-pixel block to which the corresponding oriented edge detectors are applied to compute the edge strengths. The edge detector with the maximum edge strength is then identified. If this edge strength is above a given threshold, the corresponding edge orientation is associated with the image-block; if the maximum edge strength is below the threshold, the block is not classified as an edge block. Fig. 2 shows the five edge detector types [4].
Fig. 1. Five types of edges
Fig. 2. Five types of edge detectors
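To make the block classification concrete, here is a sketch of the EHD computation. The 2×2 filter coefficients are illustrative values matching the five edge types (the exact normative MPEG-7 values are given in [8]), and the block size and threshold are assumptions of this sketch.

```python
import numpy as np

# 2x2 filter coefficients for the five edge types (vertical, horizontal,
# 45-degree, 135-degree, non-directional) described above.
EDGE_FILTERS = {
    "vertical":        np.array([[1.0, -1.0], [1.0, -1.0]]),
    "horizontal":      np.array([[1.0, 1.0], [-1.0, -1.0]]),
    "diag_45":         np.array([[np.sqrt(2), 0.0], [0.0, -np.sqrt(2)]]),
    "diag_135":        np.array([[0.0, np.sqrt(2)], [-np.sqrt(2), 0.0]]),
    "non_directional": np.array([[2.0, -2.0], [-2.0, 2.0]]),
}

def block_edge_type(block, threshold=11.0):
    """Classify one image-block: average its four sub-blocks into a 2x2
    super-pixel image, apply the five filters, and keep the strongest
    response if it exceeds the threshold; otherwise it is a non-edge block."""
    h, w = block.shape
    trimmed = block[:h // 2 * 2, :w // 2 * 2].astype(float)
    superpix = trimmed.reshape(2, h // 2, 2, w // 2).mean(axis=(1, 3))
    strengths = {name: abs((superpix * f).sum())
                 for name, f in EDGE_FILTERS.items()}
    best = max(strengths, key=strengths.get)
    return best if strengths[best] >= threshold else "no_edge"

def edge_histogram_descriptor(gray, block_size=8, threshold=11.0):
    """Local EHD sketch: divide the image into 4x4 sub-images, classify the
    image-blocks inside each, and build a 4x4x5 = 80-bin edge histogram."""
    names = list(EDGE_FILTERS)
    h, w = gray.shape
    hist = np.zeros((4, 4, len(names)))
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            block = gray[y:y + block_size, x:x + block_size]
            etype = block_edge_type(block, threshold)
            if etype == "no_edge":
                continue
            hist[min(4 * y // h, 3), min(4 * x // w, 3),
                 names.index(etype)] += 1
    return hist.flatten()  # raw counts; could be normalized per sub-image
```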
This paper proposes a multi-feature model for the Content Based Image Retrieval system by combining the color moment, texture (LBP), and edge histogram descriptor features. Using multiple features is advantageous over using a single feature: retrieval with the combined features gives more accurate results than retrieval with any single feature.
Fig. 3. Comparison of sky and ocean
As shown in Fig. 3, the ocean and the sky have similar colors, but the sky is smooth and has no edge direction, while the ocean has waves, is not smooth, and does have edge direction. Retrieving images using only the color feature will therefore sometimes fail to return the expected results, so the edge direction feature is used along with the color feature for more accurate image retrieval [4].
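Reusing the earlier sketches (color_moments, lbp_histogram, edge_histogram_descriptor), a combined signature could look like the following; the per-feature normalization and equal default weights are assumptions of this sketch, not values prescribed by the paper.

```python
import numpy as np

def combined_signature(image_bgr, gray, w_color=1.0, w_texture=1.0, w_edge=1.0):
    """Concatenate the color moment, LBP and edge histogram vectors, each
    scaled to unit length so that no single feature dominates the distance."""
    def unit(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    color = color_moments(image_bgr)          # 9-D   (color section)
    texture = lbp_histogram(gray)             # 256-D (texture section)
    edge = edge_histogram_descriptor(gray)    # 80-D  (edge section)
    return np.concatenate([w_color * unit(color),
                           w_texture * unit(texture),
                           w_edge * unit(edge)])
```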
SIMILARITY MEASURE
One fundamental step in a CBIR system is the similarity measure. The similarity between two images is determined by the distance between them, which is calculated from the feature vectors extracted from the images. The retrieval result is therefore not a single image; many images similar to the input image are retrieved [4]. Different similarity measures have been proposed based on empirical estimates of the distribution of features, so the kind of features extracted from the image and their arrangement in a vector determine the kind of similarity measure to be used. The choice of similarity measure can affect retrieval performance significantly.
One of the most popular similarity measures is the Euclidean distance, which measures the similarity between two images represented by N-dimensional feature vectors.
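For N-dimensional feature vectors f and g, the Euclidean distance is d(f, g) = sqrt(sum over i of (f_i - g_i)^2). A small sketch of the distance computation and ranking follows; the function names are illustrative.

```python
import numpy as np

def euclidean_distance(f, g):
    """d(f, g) = sqrt(sum_i (f_i - g_i)^2) for N-dimensional feature vectors."""
    f, g = np.asarray(f, dtype=float), np.asarray(g, dtype=float)
    return float(np.sqrt(np.sum((f - g) ** 2)))

def retrieve(feature_db, query_signature, top_k=10):
    """Return the top_k database images whose signatures are closest
    to the query signature under the Euclidean distance."""
    ranked = sorted(feature_db,
                    key=lambda name: euclidean_distance(feature_db[name],
                                                        query_signature))
    return ranked[:top_k]
```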
CONCLUSION AND FUTURE WORK
Although CBIR has been a very active research area since the 1990s, many challenges remain because of the complexity of image data. Much research has been done to develop algorithms that solve these problems and achieve accuracy in retrieving images and distinguishing between them. Many proposed algorithms extract features from images and use those features for similarity matching.
In the future, new features can be added for better retrieval efficiency. A relevance feedback technique will also be added to the CBIR system for better retrieval efficiency and effectiveness.
REFERENCES
[1] P. V. N. Reddy, K. Satya Prasad, "Color and Texture Features for Content Based Image Retrieval," Int. J. Comp. Tech. Appl., Vol. 2 (4), pp. 1016-1020, 2011.
[2] Ahmed J. Afifi, Wesam M. Ashour, "Content-Based Image Retrieval Using Invariant Color and Texture Features," IEEE, 978-1-4673-2181-5/12, 2012.
[3] K. Arthi, J. Vijayaraghavan, "Content Based Image Retrieval Algorithm Using Colour Models," International Journal of Advanced Research in Computer and Communication Engineering, Vol. 2, Issue 3, March 2013.
[4] Jianlin Zhang, Wensheng Zou, "Content-Based Image Retrieval Using Color and Edge Direction Features," IEEE, 978-1-4244-5848-6/10, 2010.
[5] Shital S. Jadhav, Sonal Patil, Hiralal Solunkhe, "Comprehensive Review of Content Based Image Retrieval," IJCA, Volume 183, September 2021.
[6] Timo Ahonen, Matti Pietikäinen, "Soft Histograms for Local Binary Patterns," Machine Vision Group, Infotech Oulu.
[7] Promila, V. Laxmi, "Palmprint Matching Using LBP," 2012 International Conference on Computing Sciences, 978-0-7695-4817-3/12.
[8] Chee Sun Won, Dong Kwon Park, Soo-Jun Park, "Efficient Use of MPEG-7 Edge Histogram Descriptor," ETRI Journal, Volume 24, February 2002.
[9] Rajshree S. Dubey, Rajnish Choubey, Joy Bhattacharjee, "Multi Feature Content Based Image Retrieval," (IJCSE) International Journal on Computer Science and Engineering, Vol. 02, No. 06, pp. 2145-2149, 2010.
[10] Shital S. Jadhav, Swati Patil, "Relevance Feedback in Content Based Image Retrieval," International Journal of Engineering Research & Technology (IJERT), Vol. 3, Issue 2, February 2014, ISSN: 2278-0181.
[11] Shital S. Jadhav, Swati Patil, "Content Based Image Retrieval Using Color and Texture Feature with Efficient Relevance Feedback," IJARCSSE, Volume 4, Issue 10, October 2014, ISSN: 2277-128X.