- Open Access
- Total Downloads : 241
- Authors : Sreedevi S.
- Paper ID : IJERTV3IS031148
- Volume & Issue : Volume 03, Issue 03 (March 2014)
- Published (First Online): 27-03-2014
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Image Retrieval with Interactive Query Description and Database Revision
Sreedevi S.
Emvigo Technologies, Trivandrum, Kerala
Abstract: Users desire systems that understand human intentions and behave accordingly. Content-based image retrieval (CBIR) systems are used to retrieve the most similar images from a large collection. Efficient image retrieval is challenging when the user is very specific about the image content and the database contains a wide variety of images. An interactive image retrieval system that intelligently gives satisfactory results is proposed here. Novel methods of feature levels and Database Revision (DR) are used for fast and precise retrieval. To obtain fast results from large databases, retrieval based on the various content features is classified into a number of levels, and the database is revised accordingly by discarding images that do not qualify within a cutoff. With Query Description (QD), the user can continually describe the query image by selecting relevant images from the results; further retrieval proceeds on this basis and the database is rewritten accordingly. This allows the user to control the direction of search and obtain satisfactory results.
Keywords: Content-based image retrieval (CBIR), feature extraction, relevance feedback.
INTRODUCTION
The development of digital imaging has led to images being generated at an ever-increasing rate. With the arrival of the World Wide Web, images of any type can be obtained easily, but efficient systems capable of highly precise retrieval of images are not available. The user's desire for a highly specific image challenges the retrieval process. The conventional method of image retrieval uses keywords or tags associated with the images, so the images must be annotated for such systems to work properly. Image indexing is a difficult task because a single image can be described in multiple ways according to the user's subjectivity, and words cannot fully describe images. Resources are also wasted in assigning keywords to images and storing them. Due to these drawbacks, researchers developed a new method of retrieval that does not require image tags; instead, the content of the image is extracted for the search. Content-based image retrieval (CBIR) systems retrieve images more effectively and efficiently than conventional keyword search. Such a system helps users retrieve relevant images based on visual properties such as color and texture, and on pictorial entities such as the shape of an object in the picture [1].
A CBIR system extracts the image features as numerical values. The system can work with any one of the image features, but each feature represents a different aspect of the image, so the combination of all three features (color, texture and shape) gives a better representation of the image. The similarity scores of the individual features are combined to find the matching images.
In every image retrieval system some irrelevant results appear among the retrieved images. This is called the semantic gap, and it occurs because some images are similar in features while their content is different. To eliminate the semantic gap, human intervention is essential. Thus CBIR systems are combined with relevance feedback and various other algorithms to find the user's need. One noteworthy algorithm is the Interactive Genetic Algorithm (IGA), in which the Genetic Algorithm is combined with relevance feedback [4].
The proposed system implements relevance feedback along with database revision (DR) and query description (QD), where the user can continuously select images that add descriptions to the original query image. The following sections explain the proposed system. Section II describes feature extraction for CBIR; it utilizes three different image features, divided into two levels. Section III presents the DR method and Section IV describes the QD method. Section V explains the proposed system, Section VI lists the steps in the working of the system and explains two cases, and the results in Section VII verify the significance of the proposed method.
LEVELS OF FEATURE EXTRACTION
Content-based image retrieval is based on extracting the features of the image and searching the database for images with similar features. Features such as color, texture and shape are analyzed for retrieval. To obtain fast image retrieval, feature extraction is classified into groups, and the database is modified to include only images showing a satisfactory level of similarity. Two levels of feature extraction are employed here, and a database of any size is reduced to 100 images.
Level 1: Color and Texture Search
Color is the most distinguishing feature of an image. HSV color space images are used for color feature extraction because this color space represents images in terms of hue, saturation and intensity components, which can be separated easily. The HSV color space allows separating the chromatic and achromatic components of the image, which is not possible in other color spaces such as RGB. The features extracted from these components can be used in any proportion for image retrieval.

Two types of color features are extracted from the H, S and V planes of the image: global color descriptors and local color descriptors [4]. The global color descriptors are the mean and the standard deviation of the image. The mean of an image represents the principal color of the image, while the standard deviation depicts the variation of colors. The mean ($\mu_c$) and the standard deviation ($\sigma_c$) of a color image are defined as follows:

$$\mu_c = \frac{1}{N}\sum_{i=1}^{N} p_{c,i} \qquad (1)$$

$$\sigma_c = \left(\frac{1}{N}\sum_{i=1}^{N}\left(p_{c,i}-\mu_c\right)^2\right)^{1/2} \qquad (2)$$

where $c \in \{H, S, V\}$, $N$ is the number of pixels, and $p_{c,i}$ indicates the $i$-th pixel of color plane $c$.

The global color descriptors do not completely represent the color feature of an image, so local color descriptors are also employed. The binary bitmap of the image is used as the local color descriptor. The binary bitmap is based on block truncation coding [3], in which the local mean of the image is analyzed; thus color information about every pixel of the image is captured. The image is divided into blocks of 4×4 pixels and the local mean of each block is calculated. The local mean is then compared with the global mean of the image: if the local mean of a block is greater than the global mean, the pixels in the block are replaced with 1s, and otherwise with 0s. Binary bitmap images corresponding to the three planes of the image are created separately:

$$b_{c,j} = \begin{cases}1, & \text{if } \bar{m}_{c,j} \ge \mu_c \\ 0, & \text{otherwise}\end{cases} \qquad (3)$$

where $\bar{m}_{c,j}$ is the mean of the $j$-th pixel block and $\mu_c$ is the global mean of color plane $c$.

Texture represents the recurring pixel patterns in the image content. We use a gray-level co-occurrence matrix (GLCM) to represent the relative occurrence of pixels and their relationship. The GLCM is an 8-level matrix counting the number of times two pixel values $i$ and $j$ occur in the image separated by a distance $d$ at an angle $\theta$ [8]. Texture features are the meaningful statistics extracted from the GLCM; the texture features used in the proposed system are energy, entropy, contrast and homogeneity. Contrast reflects the clarity of the image and the depth of texture shadow: a large contrast means deeper texture. Entropy measures the randomness of the image texture: when the GLCM values are all equal, entropy is minimum, and when the values are very uneven, entropy is large [6]. Denoting the normalized GLCM entry by $P(i,j)$ and the number of gray levels by $L$:

$$\text{Energy} = \sum_{i,j=0}^{L-1} P(i,j)^2 \qquad (4)$$

$$\text{Entropy} = -\sum_{i,j=0}^{L-1} P(i,j)\,\log P(i,j) \qquad (5)$$

$$\text{Contrast} = \sum_{i,j=0}^{L-1} (i-j)^2\, P(i,j) \qquad (6)$$

$$\text{Homogeneity} = \sum_{i,j=0}^{L-1} \frac{P(i,j)}{1+(i-j)^2} \qquad (7)$$

The database is searched for similar images based on the color and texture descriptors separately, and the results are combined to rearrange the database according to the order of similarity. Here, the database is rewritten with the top 200 images; thus only 200 images from the database qualify to enter the level-2 search process.
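As an illustration, the level-1 descriptors of Eqs. (1)-(7) can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: the 8-level quantization step and the (dx, dy) GLCM offset are assumed conventions, and the function names are ours.

```python
import numpy as np

def global_color_descriptors(plane):
    """Eq. (1)-(2): mean and standard deviation of one color plane."""
    return float(plane.mean()), float(plane.std())

def binary_bitmap(plane, block=4):
    """Eq. (3): BTC-style bitmap -- 1 where a 4x4 block's local mean
    is at least the plane's global mean, else 0."""
    h, w = plane.shape
    mu = plane.mean()
    rows, cols = h // block, w // block
    bmp = np.zeros((rows, cols), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            blk = plane[r * block:(r + 1) * block, c * block:(c + 1) * block]
            bmp[r, c] = blk.mean() >= mu
    return bmp

def glcm_features(gray, levels=8, dx=1, dy=0):
    """Eq. (4)-(7): energy, entropy, contrast and homogeneity from an
    8-level GLCM with pixel offset (dx, dy)."""
    q = (gray.astype(np.int64) * levels) // 256      # quantize to `levels` gray levels
    glcm = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1    # count co-occurrences
    P = glcm / glcm.sum()                            # normalize to probabilities
    i, j = np.indices(P.shape)
    nz = P[P > 0]
    energy = float((P ** 2).sum())                       # (4)
    entropy = float(-(nz * np.log(nz)).sum())            # (5)
    contrast = float(((i - j) ** 2 * P).sum())           # (6)
    homogeneity = float((P / (1 + (i - j) ** 2)).sum())  # (7)
    return energy, entropy, contrast, homogeneity
```

In a full system these functions would run once per H, S and V plane (color) and once on the gray-level image (texture), and the resulting vectors would be compared against the stored descriptors of every database image.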
Level 2: Shape Search
Shape search determines the shape of the object depicted in the image and searches for images containing the same object. It uses the edge features of the object, which are extracted to form an Edge Histogram Descriptor (EHD). The EHD represents the local edges in the image and their frequency of occurrence. The vertical, horizontal, diagonal and anti-diagonal edges of the image are extracted using corresponding filters. The filtered images are divided into 4×4 sub-images, each sub-image is again divided into 4×4-pixel non-overlapping blocks, and the contribution of each edge feature in these blocks is calculated. For each block the edge with the highest contribution is determined, and a histogram representing the relative frequency of occurrence of the four edge types in the corresponding sub-image, normalized by the total number of blocks in the sub-image, is plotted. This normalized and quantized representation gives the EHD.
The shape search is carried out only for the 200 images that qualified in level 1, which considerably increases the speed of retrieval without any compromise on the results. The database is again revised to the top 100 images qualifying in the shape search, arranged in order of similarity.
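The EHD computation described above can be sketched as follows. This is an illustrative approximation, not the authors' code: the 2×2 filter masks are the common EHD edge masks, the mean-pooling of each 4×4 block down to a 2×2 patch is our assumption, and the quantization step is replaced by plain per-sub-image normalization.

```python
import numpy as np

# 2x2 edge filters for the four edge types used by the EHD
FILTERS = {
    "vertical":      np.array([[1.0, -1.0], [1.0, -1.0]]),
    "horizontal":    np.array([[1.0, 1.0], [-1.0, -1.0]]),
    "diagonal":      np.array([[np.sqrt(2), 0.0], [0.0, -np.sqrt(2)]]),
    "anti-diagonal": np.array([[0.0, np.sqrt(2)], [-np.sqrt(2), 0.0]]),
}

def edge_histogram_descriptor(gray, grid=4, block=4):
    """Sketch of an EHD: split the image into grid x grid sub-images,
    find the dominant edge type of each block x block block, and build one
    normalized 4-bin histogram per sub-image (16 x 4 = 64 values)."""
    h, w = gray.shape
    sh, sw = h // grid, w // grid
    hist = np.zeros((grid * grid, len(FILTERS)))
    names = list(FILTERS)
    for s in range(grid * grid):
        y0, x0 = (s // grid) * sh, (s % grid) * sw
        sub = gray[y0:y0 + sh, x0:x0 + sw].astype(float)
        nblocks = 0
        for by in range(0, sh - block + 1, block):
            for bx in range(0, sw - block + 1, block):
                blk = sub[by:by + block, bx:bx + block]
                # mean-pool the block to 2x2 so the 2x2 filters apply directly
                pooled = blk.reshape(2, block // 2, 2, block // 2).mean(axis=(1, 3))
                strengths = {k: abs((f * pooled).sum()) for k, f in FILTERS.items()}
                best = max(strengths, key=strengths.get)
                hist[s, names.index(best)] += 1
                nblocks += 1
        if nblocks:
            hist[s] /= nblocks      # normalize by blocks per sub-image
    return hist.ravel()
```

Two EHD vectors can then be compared with a simple L1 or Euclidean distance to rank the 200 level-1 survivors.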
DATABASE REVISION
A database of images is associated with every image retrieval system. After the two levels of feature search, the database contains k images with a high similarity level. In the proposed system, the database is revised to 100 images after the second level of feature search; if the user is not satisfied with the result, the search process continues with DR. DR is an interactive process that changes the database in which the system searches for images by seeking the user's opinion about each image. DR consists of the following stages.
- The top k images from the previous search are displayed to the user, k being any reasonable number for the user to view and analyze. Here, 20 images are presented. Initially, the images are the results of the level-2 search.
- The user interacts with the system and indicates satisfaction with each displayed image by tick-marking satisfactory images in the associated check box.
- The images in the database are re-ranked based on the user satisfaction entered for each image. Relevant images selected by the user remain in the database, while the displayed images not marked by the user are discarded.
- The top 20 images of the revised database are then displayed. The images previously selected by the user have the top ranks among the displayed images.
- If the user is not satisfied with the result, the DR process continues.
The process of DR continues until all the displayed images are relevant and satisfactory to the user. The semantic gap is part of any image retrieval system because no system can understand the object or scene depicted in an image like a human. A user-interactive system is essential for removing the semantic gap occurring in the retrieval process. DR eliminates the semantic gap and modifies the database by considering the user's opinion. DR also prevents the repeated appearance of irrelevant images by deleting them from the database.
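The DR stages above amount to a simple manipulation of a ranked list. The sketch below assumes the database is a ranked list of image identifiers; the function names and the tie handling are ours, not the paper's.

```python
def revise_database(ranked_db, displayed, selected):
    """One DR pass: keep the user-selected images at the top in their
    existing relative order, drop the displayed-but-unselected images,
    and keep the rest of the database in its current ranking."""
    displayed, selected = set(displayed), set(selected)
    kept = [img for img in ranked_db if img in selected]   # move up
    rest = [img for img in ranked_db if img not in displayed]
    return kept + rest

def dr_loop_step(ranked_db, k=20, selected=()):
    """Show the top-k images, then revise with the user's tick marks."""
    displayed = ranked_db[:k]
    return revise_database(ranked_db, displayed, selected)
```

Repeating `dr_loop_step` with fresh user selections reproduces the loop described above: marked images stay and rise, unmarked displayed images never reappear.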
QUERY DESCRIPTION
In the proposed system, a new method is added along with DR to obtain more precise results. Here, the user interactively controls the search process and results. The results of each search are sorted, and the displayed images are offered to the user for selection. The images selected by the user act as descriptors of the user's original query image; in other words, the search process does not depend on a single query image but on all the images selected by the user as relevant results.
QD Similarity Search
The query is distributed to the user-selected images by employing a third similarity search with each of these images as query, over the same revised database. This similarity search is based on extracting the color histogram of the RGB color image. Before extracting the histogram, the image is cropped so that the histogram covers only the central portion of the image. This is done on the assumption that the object of the image is usually located near the center; the color histogram thus captures the localized color feature of the object depicted in the image while discarding the surroundings.
A color histogram represents the distribution of color intensities in the image [7]. For each of the user-selected images, the image retrieval consists of the following stages.
- The query image is a user-selected image from the group of images displayed to the user as a result of a previous QD retrieval or a level-2 similarity search.
- The image is cropped to emphasize the central portion, which depicts the main content of the image.
- The histogram of the cropped color image is calculated.
- The color histograms of the database images are calculated; here the database is the revised database.
- The Euclidean distance between the query histogram and each database histogram is calculated.
- The distances are sorted in ascending order, and the top n images form the retrieval set for the query image.
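The stages above can be sketched as follows, assuming images are NumPy RGB arrays. The crop fraction and the 8-bin-per-channel histogram are assumed parameters (the paper does not fix them), and the function names are ours.

```python
import numpy as np

def center_crop(img, frac=0.6):
    """Keep only the central portion of the image; `frac` is an assumed
    crop ratio, since the object is expected near the center."""
    h, w = img.shape[:2]
    dh, dw = int(h * (1 - frac) / 2), int(w * (1 - frac) / 2)
    return img[dh:h - dh, dw:w - dw]

def rgb_histogram(img, bins=8):
    """Concatenated per-channel histogram of the center-cropped RGB
    image, normalized to sum to 1."""
    crop = center_crop(img)
    hists = [np.histogram(crop[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def qd_retrieve(query_img, database, n=10):
    """Rank database images by Euclidean distance between histograms
    and return the top-n names: the retrieval set of one query."""
    qh = rgb_histogram(query_img)
    ranked = sorted(database.items(),
                    key=lambda kv: np.linalg.norm(qh - rgb_histogram(kv[1])))
    return [name for name, _ in ranked[:n]]
```

Running `qd_retrieve` once per user-selected image yields the m retrieval sets that the ranking step below combines.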
Ranking Retrieved Images
For each of the distributed query images $q_i$, $1 \le i \le m$, a set of retrieval results within a cutoff number is formed. Let $R_i = \{I_1, I_2, \ldots, I_n\}$ represent the set of images retrieved in response to the query $q_i$, where $n$ is the number of images in the retrieval set and $m$ is the number of distributed queries. The size of each retrieval set is limited to 10 images.

The number of times an image appears in the retrieval sets $R_1, R_2, \ldots, R_m$ is determined, and the images are rearranged accordingly. An image appearing in all the sets attains a higher position, while images with fewer appearances get lower positions. The query-distribution images, followed by the images of the retrieval sets in their order of ranking, are written to the database; this forms the new database revised by QD. The top k images from this database are then displayed to the user. If the user is not satisfied with the results, or not all of the k images are relevant, the process repeats until the user is satisfied.
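The appearance-based ranking can be sketched with a counter. Breaking ties by first appearance is our assumption; the paper only specifies ranking by number of appearances.

```python
from collections import Counter

def rank_by_appearances(retrieval_sets):
    """Rank images by how many of the m retrieval sets they appear in,
    most frequent first (ties broken by earliest appearance)."""
    counts = Counter()
    first_seen = {}
    for s_idx, s in enumerate(retrieval_sets):
        for pos, img in enumerate(s):
            counts[img] += 1
            first_seen.setdefault(img, (s_idx, pos))
    return sorted(counts, key=lambda im: (-counts[im], first_seen[im]))
```

The ranked list, prefixed by the user-marked query images themselves, becomes the new QD-revised database.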
PROPOSED SYSTEM
Existing image retrieval techniques use different algorithms for efficient retrieval and to add intelligence to the system. But it is challenging to enable a system to understand the object shown in an image and find other images depicting the same object: how can a system tell whether an image shows a horse or an elephant? Thus we add semantic-based relevance feedback so that the user can interact with the system, thereby increasing retrieval performance.
In the proposed system QD is implemented along with DR to improve retrieval precision. The concept of QD is based on feature extraction and similarity matching of the images marked by the user as relevant among the results; thus QD is also based on relevance feedback. With the aforementioned concepts, we design an image retrieval system aimed at fast, precise retrieval that minimizes the semantic gap as far as possible. The different stages of operation of our system are:
- Querying: The user provides a sample image as the query for the system.
- Level-1 similarity computation: The system computes the similarity between the query image and all the database images according to the aforementioned level-1 visual features.
- First database revision: The system retrieves and rearranges the database as a sequence of p images ranked in decreasing order of similarity, where p is a cutoff on the number of images qualifying for the next level of search.
- Level-2 similarity computation: The system computes the similarity between the query image and the revised database images according to the level-2 visual feature.
- Second database revision: The system retrieves and rewrites the database according to the level-2 similarity search results, as a sequence of q images ranked in decreasing order of similarity, where q is a second cutoff on the number of images.
- Retrieval: The system displays the top k images from the revised database.
- Relevance feedback: For each displayed image, the system provides an interactive mechanism that allows the user to evaluate and mark relevant images.
- QD similarity search: The system then performs a similarity search with each of the images marked as relevant by the user and retrieves a set of images corresponding to each query. The retrieved images are then ranked by their number of appearances in the retrieval sets.
- QD database revision: The system then updates the database with the user-marked images and their retrieved images in the order of their ranks.
- Retrieval: The top k images from the revised database are again displayed to the user for evaluation. If the user is not satisfied, the process continues with relevance feedback.

WORKING OF THE SYSTEM

The implementation of the proposed system is carried out with a database of 1000 images belonging to 10 different categories. The images in this database show a wide variety with respect to color, shape and content. Each category contains 100 images, which are considered relevant images for queries within that category. The experiment is done by selecting two images randomly from each category, and the average of the results is plotted in the graphs. In order to verify the validity and significance of query description, the experiment is implemented as two cases: case 1, database revision alone, and case 2, database revision with query description.

A. Case 1: Database Revision Alone

In this case, QD is turned off and the system is analyzed with database revision alone. The retrieval process now consists of two stages: the feature levels of similarity search and the interactive database revision. In the first stage, the original image database is modified to a database of 100 images in the order of similarity. In the second stage, the database revision method is evaluated with QD disabled. Since the similarity searching of user-marked images is disabled, database revision is limited to discarding the images that are not selected by the user. The relevant images selected by the user remain in the database; their positions move upwards when the irrelevant images are deleted, and the top k images are again selected from this revised database for display.

The size of the database after the level-2 feature similarity search is fixed at 100 images. This modified database is the initial image set for DR. The top k images are displayed to the user. Let the user mark r images, 1 ≤ r ≤ k, as relevant. The relevant images may be selected by the user in any order according to the user's discretion and requirement. Then the (k−r) irrelevant images are omitted from the database. The marked r images are re-ranked when images between them are deleted, but their order among themselves remains the same. Another set of (k−r) images from the remaining (100−k) images then fills the k-image set to be displayed to the user. The user again selects the relevant images from this group, and the process continues until the user is satisfied or there are not enough images remaining in the database to display. The performance of DR depends on the efficiency of the similarity search.

B. Case 2: Database Revision with Query Description

In this case, the proposed method is fully implemented and generates the combined result of feature-level similarity searching, DR and QD. Here too there are two stages of operation; the first stage of feature-level similarity search is the same in both cases. In the second stage, the system follows the aforementioned steps of operation of the proposed system. Here, DR includes re-ranking images according to the QD similarity search and retrieval. Unlike the previous case, the database modification is not limited to discarding some images and slightly re-ranking the remainder; the database undergoes large modifications. In addition to the user's omissions, the system itself omits a major portion of the images after analyzing the user's selections. In this case, the number of images in the database after each modification cannot be predicted, but it is observed from our system that the database size is much smaller than with QD deactivated. This is because the cutoff for every retrieval set is fixed at 10 images and the rest of the images are omitted. The database size can be increased by raising this cutoff, but precision will suffer. Another limitation is that the search process may not repeat many times; however, the high precision produces satisfactory results before many repeated searches are needed.

EXPERIMENTAL RESULTS

The parameter commonly used to evaluate image retrieval systems is the precision percentage [4], which represents the percentage of relevant results obtained in each retrieval step:

$$\text{Precision} = \frac{N_A(q)}{N_R(q)} \times 100\% \qquad (8)$$

Here $N_A(q)$ denotes the number of relevant images similar to the query and $N_R(q)$ indicates the number of images retrieved by the system in response to the query [4]. In our system, 20 images are displayed to the user during each retrieval step, so $N_R(q)$ is 20. $N_A(q)$ is determined by the user manually.
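Eq. (8) is straightforward to compute per retrieval round; a one-line helper makes the experiment reproducible:

```python
def precision(num_relevant, num_retrieved=20):
    """Eq. (8): precision percentage for one retrieval round.
    `num_relevant` is N_A(q), the user's count of relevant results;
    `num_retrieved` is N_R(q), fixed at 20 in this system."""
    return 100.0 * num_relevant / num_retrieved
```

For instance, the bus example below, with 18 relevant images out of 20 displayed, gives a precision of 90%.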
Consider the working of the system with an example, where the query input is the image of a bus. After the two levels of similarity search, the system displays 20 output images, of which 18 are relevant. The user selects the relevant images by tick-marking the check box associated with each image, and the QD search is then carried out. In this search, the 18 relevant images produce their own sets of retrieved results by QD similarity search. After ranking these retrieved images, the two topmost new images appear in the display in place of the irrelevant results. It is seen from the result that the QD search produces 100% precise results. Fig. 2 shows the display of returned images after the level-2 similarity search and the retrieved results after applying the QD process. The results are arranged in order of similarity to the query image from left to right and then from top to bottom.
The performance of the system is evaluated by the average retrieval precision of two images selected randomly from each image category in the database. The average precision for all 10 categories is plotted for the two cases: DR alone (Fig. 3) and DR along with QD (Fig. 4).
Comparing the two cases, the precision percentage of each image class shows an improvement in Fig. 4. Also, 100% precision is obtained with fewer searches. These gains are clearly due to the effect of the QD method of retrieval.
Even with databases of high image variability, the experiments on all the classes proved the significance of the proposed method. The rate of improvement is found to differ across image classes. In the retrieval process without QD, two classes, Buildings and Africans, showed the least precision. In the evaluation with QD, the class Africans showed great improvement, while Buildings improved comparatively less; however, the latter achieved 100% precision faster. Unlike the other classes, the retrieval for the class Africans finished with 90% precision. One reason for this may be the wide variety of images in this class and their similarity to some images in other classes such as Food items, Elephants, etc. A second reason may be that the level-2 similarity search in this case does not include a sufficient number of relevant results. The QD similarity search, however, efficiently selects only the relevant results from the revised database, which is why it shows high precision: 75% for search 1, 80% for search 2 and 90% for search 4. But it finishes when there are no other relevant results. Fig. 5 shows the final results obtained for two different classes of images from the QD-activated proposed system.
CONCLUSION
This paper has presented a CBIR system based on continuous interactive query modification. The system is made user-friendly by revising the database continuously and deleting irrelevant images. The feature extraction and similarity search of the CBIR system are modified from conventional approaches into a new technique of feature levels associated with database revision. The QD method increases the efficiency of the system, and the significance of the proposed method is validated by evaluating the results. Experimental results of the proposed approach show a significant improvement in retrieval performance.
REFERENCES
[1] N. Jhanwar, S. Chaudhuri, G. Seetharaman and B. Zavidovique, "Content based image retrieval using motif cooccurrence matrix," Image Vis. Comput., vol. 22, no. 14, pp. 1211-1220, Dec. 2004.
[2] H.-W. Yoo, H.-S. Park and D.-S. Jang, "Expert system for color image retrieval," Expert Syst. Appl., vol. 28, no. 2, pp. 347-357, Feb. 2005.
[3] E. J. Delp and O. R. Mitchell, "Image coding using block truncation coding," IEEE Trans. Commun., vol. COM-27, no. 9, pp. 1335-1342, Sep. 1979.
[4] C.-C. Lai and Y.-C. Chen, "A user-oriented image retrieval system based on interactive genetic algorithm," IEEE Trans. Instrum. Meas., vol. 60, no. 10, pp. 3318-3325, Oct. 2011.
[5] S.-B. Cho and J.-Y. Lee, "A human-oriented image retrieval system using interactive genetic algorithm," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 32, no. 3, pp. 452-458, May 2002.
[6] V. Srikanth, C. Srujana, P. Nataraju, S. Nagaraju and Ch. Vijayalakshmi, "Image gathering using both color and texture features," IJECT, vol. 2, SP-1, pp. 55-57, Dec. 2011.
[7] S. Pattanaik and D. G. Bhalke, "Beginners to content based image retrieval," IJSRET, vol. 1, issue 2, pp. 40-44, May 2012.
[8] S. Sreedevi and S. Sebastian, "Content based image retrieval based on database revision," in Proc. Int. Conf. Machine Vision and Image Processing (MVIP), 2012, pp. 29-32.
[9] S. Sreedevi and S. Sebastian, "Fast image retrieval with feature levels," in Proc. Annu. Int. Conf. Emerging Research Areas and Int. Conf. Microelectronics, Communications and Renewable Energy (AICERA/ICMiCR), 2013.