Content Based Image Retrieval with Semantic Features using Object Ontology

DOI : 10.17577/IJERTV1IS4114


Anuja Khodaskar

Research Scholar

College of Engineering & Technology, Amravati, India

Dr. S.A. Ladke

Principal

Sipna's College of Engineering and Technology, Amravati, India

Abstract

Content-based image retrieval is a very important problem in the field of image processing and analysis. An important requirement for constructing effective content-based image retrieval (CBIR) systems is accurate characterization of visual information. Traditional image retrieval methods have some limitations. In order to improve the retrieval accuracy of content-based image retrieval systems, research focus has shifted from designing sophisticated low-level feature extraction algorithms to reducing the semantic gap between the visual features and the richness of human semantics. This paper presents a technique for efficient CBIR with high-level semantic features using object ontology.

  1. Introduction

    Advances in data storage and image acquisition technologies have enabled the creation of large image datasets. In this scenario, it is necessary to develop appropriate information systems to efficiently manage these collections. The commonest approach uses so-called Content-Based Image Retrieval (CBIR) systems. Basically, these systems try to retrieve images similar to a user-defined specification or pattern (e.g., a shape sketch or example image). Their goal is to support image retrieval based on content properties (e.g., shape, color, texture), usually encoded into feature vectors. One of the main advantages of the CBIR approach is the possibility of an automatic retrieval process, instead of the traditional keyword-based approach, which usually requires laborious and time-consuming prior annotation of database images. CBIR technology has been used in several applications, such as fingerprint identification, biodiversity information systems, digital libraries, crime prevention, medicine, and historical research. During the past decade, remarkable progress has been made in both theoretical research and system development. However, many challenging research problems remain that continue to attract researchers from multiple disciplines. Few techniques are available to deal with the semantic gap between images and their textual descriptions.

    1. The semantic gap

      The fundamental difference between content-based and text-based retrieval systems is that human interaction is an indispensable part of the latter. Humans tend to use high-level features (concepts), such as keywords and text descriptors, to interpret images and measure their similarity, while the features automatically extracted using computer vision techniques are mostly low-level (colour, texture, shape, spatial layout, etc.). In general, there is no direct link between the high-level concepts and the low-level features. Though many sophisticated algorithms have been designed to describe colour, shape, and texture features, these algorithms cannot adequately model image semantics and have many limitations when dealing with broad-content image databases. Extensive experiments on CBIR systems show that low-level contents often fail to describe the high-level semantic concepts in users' minds. Therefore, the performance of CBIR is still far from users' expectations. There are three levels of queries in CBIR. Level 1: Retrieval by primitive features such as colour, texture, shape, or the spatial location of image elements. A typical query is query by example, "find pictures like this".

      Level 2: Retrieval of objects of given type identified by derived features, with some degree of logical inference. For example, find a picture of a flower.

      Level 3: Retrieval by abstract attributes, involving a significant amount of high-level reasoning about the purpose of the objects or scenes depicted. This includes retrieval of named events, of pictures with emotional or religious significance, etc. An example query is "find pictures of a joyful crowd". Levels 2 and 3 together are referred to as semantic image retrieval, and the gap between Levels 1 and 2 as the semantic gap. More specifically, the discrepancy between the limited descriptive power of low-level image features and the richness of user semantics is referred to as the semantic gap. Users in Level 1 retrieval are usually required to submit an example image or sketch as the query. But what if the user does not have an example image at hand? Semantic image retrieval is more convenient for users, as it supports query by keywords or by texture. Therefore, to support query by high-level concepts, a CBIR system should provide full support in bridging the semantic gap between numerical image features and the richness of human semantics [1].

    2. High-level semantic-based image retrieval

      Low-level image features can be related to high-level semantic features to reduce the semantic gap. There are five categories of techniques to accomplish this: (1) using object ontology to define high-level concepts; (2) using machine learning tools to associate low-level features with query concepts; (3) introducing relevance feedback (RF) into the retrieval loop for continuous learning of users' intention; (4) generating semantic templates (ST) to support high-level image retrieval; (5) making use of both the visual content of images and the textual information obtained from the Web for Web image retrieval [1].

      1.3 Object-ontology

      In some cases, semantics can be easily derived from our daily language. For example, sky can be described as an upper, uniform, blue region. In systems using such simple semantics, different intervals are first defined for the low-level image features, with each interval corresponding to an intermediate-level descriptor, for example, "light green", "medium green", "dark green". These descriptors form a simple vocabulary, the so-called object ontology, which provides a qualitative definition of high-level query concepts. Database images can be classified into different categories by mapping such descriptors to high-level semantics (keywords) based on our knowledge; for example, sky can be defined as a region that is light blue (colour), uniform (texture), and upper (spatial location) [1].
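      The interval-to-descriptor mapping described above can be sketched as follows. This is an illustrative Python sketch under assumed thresholds, not the paper's implementation; the function names and the one-third position bands are hypothetical choices.

```python
# Sketch: quantize low-level feature values into the intermediate-level
# descriptors of a simple object ontology (thresholds are assumptions).

def green_descriptor(intensity):
    """Quantize a green-channel intensity (0-255) into an
    intermediate-level colour descriptor such as 'light green'."""
    if intensity < 85:
        return "dark green"
    elif intensity < 170:
        return "medium green"
    return "light green"

def position_descriptor(region_center_y, image_height):
    """Map a region's vertical centre to a spatial descriptor."""
    ratio = region_center_y / image_height
    if ratio < 1 / 3:
        return "upper"
    elif ratio < 2 / 3:
        return "middle"
    return "lower"

# A high-level concept is then a conjunction of descriptors, e.g. the
# paper's example: sky = light blue (colour) + uniform (texture) + upper.
def matches_sky(colour, texture, position):
    return colour == "light blue" and texture == "uniform" and position == "upper"

print(position_descriptor(40, 240))                   # a region near the top: 'upper'
print(matches_sky("light blue", "uniform", "upper"))  # True
```

      Each descriptor partitions one low-level feature axis, so a query concept reduces to a small set of interval tests rather than a distance computation in feature space.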

      Fig 1 : Object Ontology (color features, position, shape, size)

  2. Related work

    Many researchers tend to use natural scenery images as a test bed for semantic extraction, as such images are easier to analyse than others. The reasons are two-fold. Firstly, the types of objects are limited: the main scenery object types include sky, tree, building, mountain, grass, water, and snow. Secondly, compared with other features of image regions, shape features are less important in analysing scenery images than in other images. Thus we can avoid the weakness in extracting high-level semantics from shape features due to segmentation inaccuracy.

    Research in content-based image retrieval (CBIR) has in the past focused on image processing, low-level feature extraction, etc. Extensive experiments on CBIR systems demonstrate that low-level image features cannot always describe the high-level semantic concepts in the user's mind. It is believed that CBIR systems should provide maximum support in bridging the semantic gap between low-level visual features and the richness of human semantics. In Ref. [1], the authors present a rigorous and comprehensive survey of recent work towards narrowing the semantic gap, and identify five categories of techniques to bridge it.

  3. Proposed work

    We propose an algorithm for the implementation of Content Based Image Retrieval System with Semantic Features using Object Ontology.

      1. Algorithm

        Algorithm:

        Let
        N  : total number of database images, I = {img1, img2, …, imgN}
        NF : number of semantic features, F = {SF1, SF2, …, SFNF}

        1. for i = 1 to N
               for j = 1 to NF
                   [SemFeature] = FeatureExtract(img(i), j);
                   f(j) = SemFeature;
               end
               Let f be a mapping from I to F, i.e. f : I → F
               Assign semantic feature F(j) to image I(i)
           end
        2. Get the query for image retrieval.
        3. Extract all the semantic features of this query.
        4. Apply the similarity measurements to find the exact match.
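        The index-then-retrieve loop above can be sketched in Python. This is an assumed illustration, not the authors' code: `feature_extract` is a stand-in for the paper's FeatureExtract routine, and counting matching descriptors is one simple choice of similarity measurement.

```python
# Sketch of the proposed algorithm: index every database image by its
# semantic features, then rank images against the query's features.

def feature_extract(image, j):
    # Placeholder: return the j-th semantic descriptor of the image.
    return image["descriptors"][j]

def build_index(images, nf):
    """Step 1: map every database image to its NF semantic features."""
    index = {}
    for img in images:
        index[img["name"]] = [feature_extract(img, j) for j in range(nf)]
    return index

def retrieve(index, query_features):
    """Steps 2-4: score database images by the number of matching
    semantic features (a simple similarity measurement) and return
    the best match."""
    def similarity(feats):
        return sum(f == q for f, q in zip(feats, query_features))
    return max(index, key=lambda name: similarity(index[name]))

images = [
    {"name": "Flower1", "descriptors": ["red", "middle", "small"]},
    {"name": "Elephant", "descriptors": ["gray", "middle", "large"]},
]
index = build_index(images, nf=3)
print(retrieve(index, ["red", "middle", "small"]))  # Flower1
```

        Because matching happens over a handful of symbolic descriptors rather than raw feature vectors, the comparison per image is cheap and the result is directly explainable in ontology terms.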

  4. Implementation

    The experiment is carried out by applying this algorithm on some test/query images.

  5. Result

Fig 2 : Query Images

The result of the experiment for CBIR using Object Ontology is summarized in Table 1.

function [SemFeature]=FeatureExtract(img,j)

% This subroutine extracts semantic features

% based on object ontology

if j == 1
    label = 'Colour';
else
    if j == 2

Table 1 : Features with Object Ontology


Image      Average Color   Dominant Color   Position   Size     Shape
Flower1    red             red              middle     small    little oblong
Flower2    yellow          yellow           middle     small    little oblong
Elephant   gray            gray             middle     large    very oblong
Car        yellow          yellow           middle     medium   little oblong
Ocean      sky blue        blue             left       large    very oblong
Frog       green           green            middle     small    little oblong
Map        blue            blue             middle     medium   little oblong
Bus        red             red              middle     medium   very oblong
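A Python analogue of the truncated MATLAB FeatureExtract routine above could look like the following. This is an assumption for illustration, not the authors' code: the descriptor slots and the Elephant values are taken from Table 1, while the dispatch table and function names are hypothetical.

```python
# Sketch: FeatureExtract-style dispatch, where j selects which
# semantic feature (ontology slot) of the image is returned.

FEATURE_LABELS = {1: "Colour", 2: "Position", 3: "Size", 4: "Shape"}

def feature_extract(img_descriptors, j):
    """Return the j-th semantic feature of an image, where
    img_descriptors holds one value per ontology slot."""
    label = FEATURE_LABELS[j]
    return label, img_descriptors[label]

# Descriptors for one row of Table 1 (Elephant).
elephant = {"Colour": "gray", "Position": "middle",
            "Size": "large", "Shape": "very oblong"}
print(feature_extract(elephant, 1))  # ('Colour', 'gray')
print(feature_extract(elephant, 4))  # ('Shape', 'very oblong')
```

In this form, each row of Table 1 is simply the sequence feature_extract produces for j = 1 … NF on one image.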

  1. Conclusion

    In this paper, we have presented an object-ontology technique for extracting semantic features for content-based image retrieval. Because the object ontology supplies additional semantic information to the search algorithm, the search for relevant images is better constrained; the semantic features with object ontology can therefore increase the precision and recall values.

  2. References

  1. Ying Liu, Dengsheng Zhang, Guojun Lu, Wei-Ying Ma, A survey of content-based image retrieval with high-level semantics, Pattern Recognition 40 (2007) 262–282.

  2. J. Eakins, M. Graham, Content-based image retrieval, Technical Report, University of Northumbria at Newcastle, 1999.

  3. I.K. Sethi, I.L. Coman, Mining association rules between low-level image features and high-level concepts, Proceedings of the SPIE Data Mining and Knowledge Discovery, vol. III, 2001, pp. 279–290.

  4. S.K. Chang, S.H. Liu, Picture indexing and abstraction techniques for pictorial databases, IEEE Trans. Pattern Anal. Mach. Intell. 6 (4) (1984) 475–483.

  5. C. Faloutsos, R. Barber, M. Flickner, J. Hafner, W. Niblack, D. Petkovic, W. Equitz, Efficient and effective querying by image content, J. Intell. Inf. Syst. 3 (3–4) (1994) 231–262.

  6. A. Pentland, R.W. Picard, S. Sclaroff, Photobook: content-based manipulation for image databases, Int. J. Comput. Vision 18 (3) (1996) 233–254.

  7. A. Gupta, R. Jain, Visual information retrieval, Commun. ACM 40 (5) (1997) 70–79.

  8. J.R. Smith, S.F. Chang, VisualSEEk: a fully automated content-based query system, Proceedings of the Fourth ACM International Conference on Multimedia, 1996, pp. 87–98.

  9. W.Y. Ma, B. Manjunath, NeTra: a toolbox for navigating large image databases, Proceedings of the IEEE International Conference on Image Processing, 1997, pp. 568–571.

  10. J.Z. Wang, J. Li, G. Wiederhold, SIMPLIcity: semantics-sensitive integrated matching for picture libraries, IEEE Trans. Pattern Anal. Mach. Intell. 23 (9) (2001) 947–963.

  11. F. Long, H.J. Zhang, D.D. Feng, Fundamentals of content-based image retrieval, in: D. Feng (Ed.), Multimedia Information Retrieval and Management, Springer, Berlin, 2003.

  12. Y. Rui, T.S. Huang, S.-F. Chang, Image retrieval: current techniques, promising directions, and open issues, J. Visual Commun. Image Representation 10 (4) (1999) 39–62.

  13. A. Mojsilovic, B. Rogowitz, Capturing image semantics with low-level descriptors, Proceedings of the ICIP, September 2001, pp. 18–21.

  14. X.S. Zhou, T.S. Huang, CBIR: from low-level features to high-level semantics, Proceedings of the SPIE, Image and Video Communication and Processing, San Jose, CA, vol. 3974, January 2000, pp. 426–431.

  15. Y. Chen, J.Z. Wang, R. Krovetz, An unsupervised learning approach to content-based image retrieval, IEEE Proceedings of the International Symposium on Signal Processing and its Applications, July 2003, pp. 197–200.

  16. A.W.M. Smeulders, M. Worring, A. Gupta, R. Jain, Content-based image retrieval at the end of the early years, IEEE Trans. Pattern Anal. Mach. Intell. 22 (12) (2000) 1349–1380.

International Journal of Engineering Research & Technology (IJERT)

ISSN: 2278-0181

Vol. 1 Issue 4, June – 2012
