Image Classification Using Multiresolution Color, Texture and Shape Hybrid Features

DOI : 10.17577/IJERTV2IS3315


Kanchan Saxena (M.Tech, LNCT (RGPV), Bhopal)

Mr. Vineet Richaria (HOD, Department of Computer Science, LNCT (RGPV), Bhopal), Mr. Vijay Trivedi (Assistant Professor, Department of Computer Science, LNCT, Bhopal)

Abstract

Image retrieval is an active research area in image processing and is useful in many image processing applications. In content-based image retrieval, searching is based on a comparison with the query image and operates on the whole image. Color, texture and shape are the most important features in an image retrieval system, and combining them yields better retrieval performance. In this paper we present a content-based image retrieval (CBIR) framework that combines color, texture and shape features. First, the color space is transformed from the RGB model to the HSV model, and the color feature vector is extracted using the dominant color descriptor (DCD). Second, the texture features are extracted using BDIP (block difference of inverse probabilities) and BVLC (block variation of local correlation coefficients). Third, the Sobel operator is applied to extract the shape features. Finally, the color, texture and shape features are combined to form the fused feature vector of the entire image. Experiments on commonly used image datasets show that the proposed scheme achieves very good performance in terms of precision and recall compared with other methods.

Keywords: image retrieval, feature extraction, Sobel operator.

  1. Introduction

Image retrieval is an important function supported by virtually all digital imaging systems, and the most fundamental operation of content-based image retrieval is the extraction of visual features. The use of and demand for digital images in medicine, engineering, the sciences and digital photography are ever increasing [1,2]. Color descriptors are a first choice in content-based image retrieval because, with a proper representation, they can remain partially reliable even under changes in lighting, view angle and scale. As a global feature, the color histogram is most commonly used in image retrieval; it represents the probability distribution of the intensities of the three color channels, and color composition is typically characterized by color histograms. In 1991 Swain and Ballard [3] proposed a method, called color indexing, which identifies objects using color histogram indexing. The color histogram is the most commonly used color representation, but it does not include any spatial information. The color correlogram, on the other hand, describes the probability of finding color pairs at a fixed pixel distance and thus provides spatial information [5]. The color histogram is obtained by counting the number of times each color occurs in the image array. The histogram is invariant to translation and rotation of the image plane, and changes only slowly under changes in the angle of view [6]. A color histogram H for a given image is defined as a vector:

H = [H[0], H[1], ..., H[i], ..., H[N]] ..........(1)

where i represents a color bin in the color histogram, H[i] represents the number of pixels of color i in the image, and N is the number of bins in the color histogram. To compare the histograms of images of different sizes, the color histogram should be normalized. The normalized color histogram is given as:

H'[i] = H[i] / p ..........(2)

where p is the total number of pixels in the image.
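Equations (1) and (2) amount to binning pixel intensities and dividing by the pixel count. A minimal sketch in Python (the NumPy usage and the bin counts are illustrative choices, not taken from the paper):

```python
import numpy as np

def normalized_color_histogram(channel, bins=16):
    """Quantize one color channel into `bins` levels and count pixels.
    `channel` is a 2-D array of 8-bit intensities; the default bin
    count of 16 is an illustrative choice."""
    counts, _ = np.histogram(channel, bins=bins, range=(0, 256))
    # Divide by the total pixel count p so histograms of differently
    # sized images are comparable (Eq. 2).
    return counts / channel.size

# Tiny 2x2 "image": two dark pixels, two bright pixels.
img = np.array([[0, 10], [250, 255]], dtype=np.uint8)
h = normalized_color_histogram(img, bins=4)
print(h)  # the darkest and brightest bins each hold half the pixels
```

Because each entry is divided by p, the histogram entries sum to 1 regardless of image size.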

An alternative to the RGB color space is the Hue-Saturation-Value (HSV) color space. Instead of looking at each value of red, green and blue individually, a metric is defined which creates a different continuum of colors in terms of the different hues each color possesses. The hues are then differentiated based on the amount of saturation they have, that is, in terms of how little white they have mixed in, as well as on the magnitude, or value, of the hue. In the value range, large numbers denote bright colorations and low numbers denote dim colorations [9]. The color autocorrelogram is a subset of the color correlogram which captures the spatial correlation between identical colors only; since it provides significant computational benefits over the color correlogram, it is more suitable for image retrieval. DCD is one of the MPEG-7 color descriptors [7]. DCD describes the salient color distributions in an image or a region of interest, and provides an effective, compact and intuitive representation of the colors present in an image.
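The RGB-to-HSV transformation described above is available in the Python standard library; a small sketch (the sample color is arbitrary):

```python
import colorsys

# colorsys works on floats in [0, 1], so scale the 8-bit RGB values.
r, g, b = 255, 128, 0  # a saturated orange
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
print(round(h * 360), s, v)  # hue near 30 degrees, full saturation and value
```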

Textual image retrieval depends on attaching a textual description, caption or metadata to every image stored digitally; traditional database queries are then used to retrieve images containing the query keywords in their metadata [8]. Directional features are extracted to capture image texture information [10]. In this paper we use BDIP (block difference of inverse probabilities) and BVLC (block variation of local correlation coefficients). BVLC is the maximal difference between local correlations according to orientations, normalized by the local variance, and is known to measure texture smoothness well [9]. The excellent performance of BDIP and BVLC comes from the fact that both are bounded and well normalized to reduce the effect of illumination [11]. Depending on the application, some image retrieval tasks require the shape representation to be invariant to translation, rotation and scaling, while others do not [12]. Shape features are considered very important for describing and differentiating the objects in an image. Shape features can be extracted from an image by two kinds of methods: contour-based and region-based. Contour-based methods are normally used to extract the boundary features of an object shape; such methods completely ignore the important features inside the boundaries. Region-based image retrieval methods first apply segmentation to divide an image into different regions/segments, by setting threshold values according to the desired results. Alternatively, the boundary of an image can be obtained by applying any edge detection method to the image [13].

In this paper an image retrieval method using combined features is presented. For color feature extraction we apply the dominant color descriptor (DCD); for texture feature extraction we use BDIP (block difference of inverse probabilities) and BVLC (block variation of local correlation coefficients); and the Sobel operator is used for shape feature extraction. The rest of the paper is organized as follows: Section 2 describes the conventional features adopted in the proposed method, Section 3 explains the proposed retrieval method, Section 4 discusses the experimental results, and Section 5 concludes the paper.

  2. Features

In this section we explain the conventional features used in the proposed retrieval method: the dominant color descriptor as the color feature, BDIP and BVLC as the texture features, and the Sobel masks as the shape feature.

A) Dominant color descriptor (DCD)

Color is an important visual attribute for both human vision and computer processing. The Dominant Color Descriptor allows specification of a small number of dominant color values as well as their statistical properties, like distribution and variance. Its purpose is to provide an effective, compact and intuitive representation of the colors present in a region or image [14]. The descriptor consists of the Color Index (c_i), Percentage (p_i), Color Variance (v_i) and Spatial Coherency (s); the last two parameters are optional. The DCD is then defined by:

F = {(c_i, p_i, v_i), s}, i = 1, ..., N

where N is the number of dominant colors. There is one overall Spatial Coherency (SC) value for the whole image and several groups of (c_i, p_i, v_i) for the corresponding dominant colors. The descriptor can be used to compute the visual difference between images based on color. The distance algorithm uses an estimate of the mean-square error, based on the assumption that the sub-distributions described by the dominant colors and variances are Gaussian. Consider two descriptors:

F1 = {(c_{1i}, p_{1i}, v_{1i}), s_1}, i = 1, ..., N_1
F2 = {(c_{2i}, p_{2i}, v_{2i}), s_2}, i = 1, ..., N_2

where p_i \in [0, 31], c_i = rgb2luv(c_i), v_i = 60.0 when the quantized variance is 0 and 90.0 when it is 1, and p_i = (p_i + 0.5) / 31.999. The similarity between two dominant colors x_i and y_j is then:

f_{x_i y_j} = \frac{1}{2\pi \sqrt{(v_{x_i}^{(l)} + v_{y_j}^{(l)})(v_{x_i}^{(u)} + v_{y_j}^{(u)})(v_{x_i}^{(v)} + v_{y_j}^{(v)})}} \exp\left(-\frac{1}{2}\left[\frac{(c_{x_i}^{(l)} - c_{y_j}^{(l)})^2}{v_{x_i}^{(l)} + v_{y_j}^{(l)}} + \frac{(c_{x_i}^{(u)} - c_{y_j}^{(u)})^2}{v_{x_i}^{(u)} + v_{y_j}^{(u)}} + \frac{(c_{x_i}^{(v)} - c_{y_j}^{(v)})^2}{v_{x_i}^{(v)} + v_{y_j}^{(v)}}\right]\right) ..........(3)

and the overall distance is calculated by weighting the color distance D_c with the spatial coherency difference: D = 0.3 |s_1 - s_2| D_c + 0.7 D_c.
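The Gaussian-overlap similarity between a pair of dominant colors can be sketched as follows. This is an illustrative implementation under the stated Gaussian assumption; the function name is ours, and the MPEG-7 quantization constants are deliberately omitted:

```python
import math

def dominant_color_similarity(c1, v1, c2, v2):
    """Gaussian-overlap similarity between two dominant colors in Luv
    space. c1, c2 are (l, u, v) mean colors; v1, v2 are per-channel
    variances. Sketch only: quantization steps are not modeled."""
    norm, expo = 1.0, 0.0
    for m1, m2, a, b in zip(c1, c2, v1, v2):
        norm *= (a + b)                   # product of summed variances
        expo += (m1 - m2) ** 2 / (a + b)  # variance-weighted distance
    return math.exp(-0.5 * expo) / (2 * math.pi * math.sqrt(norm))

# Identical dominant colors score higher than distant ones.
s_same = dominant_color_similarity((50, 10, 10), (60, 90, 90),
                                   (50, 10, 10), (60, 90, 90))
s_far = dominant_color_similarity((50, 10, 10), (60, 90, 90),
                                  (90, -40, 40), (60, 90, 90))
print(s_same > s_far)  # True
```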

B) BDIP (block difference of inverse probabilities)

Texture is not a simple feature and is still not properly defined, so no definitive texture features have been agreed upon; however, different methods have been tried to gain information about the texture of an image from different aspects [5]. In this work we use the BDIP and BVLC texture features. BDIP and BVLC are known to be good texture features that are bounded and well normalized to reduce the effect of illumination and to capture the intrinsic properties of textures effectively [18]. BDIP is defined as:

BDIP = M^2 - \frac{\sum_{(i,j) \in B} I(i,j)}{\max_{(i,j) \in B} I(i,j)} ..........(4)

where B denotes a block of size M x M and I(i,j) denotes the value at pixel (i,j) in the image I. The larger the variation of intensities in a block, the higher the value of BDIP.

C) BVLC (block variation of local correlation coefficients)

BVLC represents the variation of block-based local correlation coefficients according to four orientations and is known to measure texture smoothness well. Each local correlation coefficient is defined as a local covariance normalized by the local variance; the coefficient for orientation (k,l) is defined as follows [17]:

\rho(k,l) = \frac{1}{\sigma_{0,0}\,\sigma_{k,l}} \left[\frac{1}{M^2} \sum_{(i,j) \in B} I(i,j)\, I(i+k, j+l) - \mu_{0,0}\,\mu_{k,l}\right] ..........(5)

where \mu_{0,0} and \sigma_{0,0} represent the local mean value and standard deviation of the block of size M x M, and (k,l) ranges over the four orientations (-90°, 0°, 45°, -45°), so that \mu_{k,l} and \sigma_{k,l} represent the mean value and standard deviation of the correspondingly shifted block. BVLC is the difference between the maximum and minimum of \rho(k,l) over the four orientations; the larger the BVLC value, the rougher the content of the block.

D) Sobel operator

Shape information is captured in terms of the edge image of the gray-scale equivalent of every image in the database [7]. We use the Sobel operator for shape feature extraction, applying the two masks:

Row mask:          Column mask:
-1 -2 -1           -1  0  1
 0  0  0           -2  0  2
 1  2  1           -1  0  1

Applying a mask to a 3 x 3 sub-field of a picture with pixels

P0 P1 P2
P3 P4 P5
P6 P7 P8

the row mask yields the vertical difference (P6 - P0) + 2(P7 - P1) + (P8 - P2), and the column mask analogously yields the horizontal difference (P2 - P0) + 2(P5 - P3) + (P8 - P6). The final step of the convolution equation, dividing by the weight of the mask, must be ignored here: the weight of a mask determines the grey level of the image after convolution, and the weight of the Sobel mask is

W = (-1) + (-2) + (-1) + 0 + 0 + 0 + 1 + 2 + 1 = 0,

so the resulting image would lose its lightness and become dark.

EDGE DETECTION WITH SOBEL OPERATOR: denoting the responses of the two masks by Gx and Gy, the edge strength and orientation at each pixel are

G = \sqrt{G_x^2 + G_y^2} ..........(6)
\theta = \tan^{-1}(G_y / G_x) ..........(7)

Equivalently, writing S_1 and S_2 for the row-mask and column-mask responses:

EDGE MAGNITUDE = \sqrt{S_1^2 + S_2^2} ..........(8)
EDGE DIRECTION = \tan^{-1}(S_1 / S_2) ..........(9)
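The block features of Eqs. (4) and (5) can be sketched directly from their definitions. This is a simplified illustration: the block size, the epsilon guard against zero variance, and the unit-pixel shift set standing in for the four orientations are our own choices, not the paper's:

```python
import numpy as np

def bdip(block):
    """BDIP of an MxM block: M^2 - sum(I) / max(I)  (Eq. 4)."""
    block = block.astype(float)
    return block.size - block.sum() / block.max()

def bvlc(image, i, j, m=4):
    """BVLC at block (i, j): max minus min of the local correlation
    coefficients (Eq. 5) over four shifts approximating the four
    orientations."""
    base = image[i:i + m, j:j + m].astype(float)
    mu0, sd0 = base.mean(), base.std()
    rhos = []
    for dk, dl in [(-1, 0), (0, 1), (1, 1), (-1, 1)]:
        shifted = image[i + dk:i + dk + m, j + dl:j + dl + m].astype(float)
        cov = (base * shifted).mean() - mu0 * shifted.mean()
        rhos.append(cov / (sd0 * shifted.std() + 1e-12))
    return max(rhos) - min(rhos)

rng = np.random.default_rng(0)
img = rng.integers(1, 256, size=(16, 16))
print(bdip(img[0:4, 0:4]), bvlc(img, 4, 4))  # both are non-negative
```

Since the block sum never exceeds M^2 times the block maximum, BDIP is bounded below by zero, and BVLC is non-negative by construction as a max-minus-min.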

3. PROPOSED IMAGE RETRIEVAL METHOD

A. Structure of the proposed method

Fig. 1 shows the block diagram of the proposed retrieval method. When an RGB query image whose components are IR, IG and IB enters the retrieval system, it is first transformed into an HSV color image whose components are IH, IS and IV. The proposed retrieval system then combines the color feature vector fC, extracted with the dominant color descriptor; the texture feature vector fT, extracted with the BDIP and BVLC texture features; and the shape feature vector fS, extracted with the Sobel operator, to generate the query feature vector fq. It calculates the similarity between the query feature vector fq and each target feature vector ft, and according to the similarity ranks it finally retrieves a given number of target images from the image DB.

Fig. 1. Block diagram of the proposed work

4. PROPOSED ALGORITHM

STEP 1: Input the query image.

STEP 2: Transform the RGB query image, whose components are IR, IG and IB, into the HSV color space, whose components are IH, IS and IV.

STEP 3: Apply BDIP (block difference of inverse probabilities) and BVLC (block variation of local correlation coefficients) for texture feature extraction fT, the dominant color descriptor (DCD) for color feature extraction fC, and the Sobel operator for shape feature extraction fS.

STEP 4: Combine fC, fT and fS into the combined feature vector.

STEP 5: Finally, according to the similarity ranks, retrieve a given number of target images from the image database.
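Steps 4 and 5 reduce to concatenating the feature vectors and ranking database entries by Euclidean distance. A minimal sketch with hypothetical 4-dimensional fused vectors (the feature extraction itself is assumed to have run already):

```python
import numpy as np

def retrieve(query_vec, db_vecs, top_k=3):
    """Rank database feature vectors by Euclidean distance to the
    query (Steps 4-5) and return the indices of the top_k nearest."""
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(dists)[:top_k]

# Hypothetical fused vectors [fc | ft | fs].
fq = np.array([0.2, 0.5, 0.1, 0.7])
db = np.array([[0.2, 0.5, 0.1, 0.7],   # identical to the query
               [0.9, 0.1, 0.8, 0.0],   # far away
               [0.3, 0.4, 0.2, 0.6]])  # close
print(retrieve(fq, db, top_k=2))  # indices of the two nearest images
```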

  5. EXPERIMENTAL RESULT AND ANALYSIS

To validate the effectiveness of the proposed framework, this section presents the details of our performance evaluation, including the test image database, the evaluation metrics and the results of the performance comparison with existing methods.

    1. Image Database

The image database was downloaded from http://wang.ist.psu.edu/docs/related/. This image database consists of 319 images. These images are grouped into 10 clusters, each of which contains 9 images. The cluster names of these images are: Elephants, Flower, Horse, Building, Buses, Mountains, Dinosaurs, Beach, Food, Africa. The images in the same row in Fig. 2 belong to the same cluster; the class names are listed in Table 1.

Fig. 2. Some examples of the image dataset

Table 1. Classes of the image dataset (class number and semantic name)

      1. Elephants

      2. Flower

      3. Horse

      4. Buildings

      5. Dinosaurs

      6. Buses

      7. Mountains

      8. Africa

      9. Food

      10. Duck

      11. Frog

      12. Box

    2. Evaluation Methods:

For retrieval efficiency we consider two parameters, namely recall and precision. We calculated the recall and precision values for the output obtained after applying the dominant color descriptor for color feature extraction, BDIP and BVLC for texture feature extraction, and the Sobel operator for shape feature extraction. For the similarity measurement we applied the Euclidean metric. In our experiments, the precision and recall are calculated as:

PRECISION = (number of relevant images retrieved) / (total number of images retrieved)

RECALL = (number of relevant images retrieved) / (total number of relevant images in the database)
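These two measures can be computed directly from the sets of retrieved and relevant image IDs; a small sketch with made-up numbers:

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved images that are relevant.
    Recall: fraction of all relevant images that were retrieved."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# Made-up numbers: 10 images retrieved, 9 relevant images exist,
# and 6 of the retrieved images are relevant.
p, r = precision_recall(retrieved=range(10), relevant=range(4, 13))
print(p, r)  # precision 0.6, recall 2/3
```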

    3. Retrieval Results:

We implemented the proposed method on the image database described in Section 5.1. Moreover, tests on the same database using the methods described in Sections 2(A), 2(B) and 2(C) were conducted for comparison. The experimental results in terms of average precision and recall using the proposed method and the other methods are shown in Table 2 and Table 3, respectively.

      Result snapshot of food.

      Result snapshot of elephant

      Result snapshot of africa

      Result snapshot of horse

Table 2. Precision of the proposed work

| Semantic name | The method [12] | The method [5] | Previously proposed method | The proposed method |
|---|---|---|---|---|
| Africa | 0.534 | 0.587 | 0.69 | 0.999 |
| Food | 0.467 | 0.445 | 0.698 | 0.999 |
| Horse | 0.724 | 0.693 | 0.813 | 0.824 |
| Mountain | 0.534 | 0.448 | 0.645 | 0.656 |
| Elephant | 0.726 | 0.65 | 0.708 | 0.732 |
| Duck | – | – | – | 0.999 |
| Frog | – | – | – | 0.999 |
| Dinosaur | 0.908 | 0.745 | 0.932 | 0.621 |
| Box | – | – | – | 0.999 |
| Flower | 0.745 | 0.832 | 0.884 | 0.894 |

Table 3. Recall of the proposed work

| Semantic name | The method [12] | The method [5] | Previously proposed method | The proposed method |
|---|---|---|---|---|
| Africa | 0.119 | 0.122 | 0.147 | 0.257 |
| Food | 0.135 | 0.137 | 0.153 | 0.333 |
| Horse | 0.121 | 0.139 | 0.121 | 0.824 |
| Mountain | 0.143 | 0.183 | 0.19 | 0.656 |
| Elephant | 0.129 | 0.145 | 0.163 | 0.732 |
| Duck | – | – | – | 0.999 |
| Frog | – | – | – | 0.999 |
| Dinosaur | 0.099 | 0.123 | 0.112 | 0.621 |
| Box | – | – | – | 0.999 |
| Flower | 0.109 | 0.115 | 0.127 | 0.894 |

From both Table 2 and Table 3 we can see that, for most classes of images, the proposed method achieves better precision and recall than the compared methods. In summary, the overall performance of our approach is better than that of the other methods.

  6. CONCLUSION:

In this paper, we proposed a framework for image retrieval using combined features, i.e., the dominant color descriptor as the color feature, BDIP and BVLC as the texture features, and the Sobel operator for the shape feature. Experimental results on the test image dataset show that our proposed method outperforms the compared methods in terms of precision and recall. In the future, larger benchmark image datasets will be used to further evaluate the effectiveness and efficiency of our proposed method.

7. REFERENCES

1. Jiayin Kang, Wenjuan Zhang, "A Framework for Image Retrieval with Hybrid Features", 24th Chinese Control and Decision Conference (CCDC), 2012.
2. M. E. ElAlami, "Unsupervised image retrieval framework based on rule base system", Expert Systems with Applications, Vol. 38, pp. 3539-3549, 2011.
3. Z. Y. He, X. G. You, Y. Yuan, "Texture image retrieval based on non-tensor product wavelet filter banks", Signal Processing, Vol. 89, pp. 1501-1510, 2009.
4. R. Min, H. D. Cheng, "Effective image retrieval using dominant color descriptor and fuzzy support vector machine", Pattern Recognition, Vol. 42, pp. 147-157, 2009.
5. Young Deok Chun, Nam Chul Kim, Ick Hoon Jang, "Content-Based Image Retrieval Using Multiresolution Color and Texture Features", IEEE Transactions on Multimedia, Vol. 10, No. 6, October 2008.
6. Saroj Shambharkar, Shubhangi C. Tirpude, "Content Based Image Retrieval Using Texture and Color Extraction and Binary Tree Structure", International Journal of Computer Technology and Electronics Engineering (IJCTEE), National Conference on Emerging Trends in Computer Science & Information Technology (NCETCSIT-2011).
7. M. Babu Rao, B. Prabhakara Rao, A. Govardhan, "Content Based Image Retrieval Using Dominant Color, Texture and Shape", International Journal of Engineering Science and Technology (IJEST), ISSN 0975-5462, Vol. 3, No. 4, April 2011.
8. Tamer Mehyar, Jalal Omer Atoum, "An Enhancement on Content-Based Image Retrieval using Color and Texture Features", Journal of Emerging Trends in Computing and Information Sciences, ISSN 2079-8407, Vol. 3, No. 4, April 2012.
9. Narendra Gali, B. Venkateshwar Rao, Abdul Subhani Shaik, "Color and Texture Features for Image Indexing and Retrieval", International Journal of Electronics Communication and Computer Engineering, Vol. 3, Issue 1, NCRTCST, ISSN 2249-071X, 2012.
10. Michele Saad, "Low-Level Color and Texture Feature Extraction for Content-Based Image Retrieval", EE 381K: Multi-Dimensional Digital Signal Processing, May 09, 2008.
11. Hyun Joo So, Mi Hye Kim, Nam Chul Kim, "Texture Classification Using Wavelet-Domain BDIP and BVLC Features", 17th European Signal Processing Conference (EUSIPCO 2009), Glasgow, Scotland, August 24-28, 2009.
12. M. E. ElAlami, "A novel image retrieval model based on the most relevant features", Knowledge-Based Systems, Vol. 24, pp. 23-32, 2011.
13. Kashif Iqbal, Michael O. Odetayo, Anne James, "Content-based image retrieval approach for biometric security using colour, texture and shape features controlled by fuzzy heuristics", Journal of Computer and System Sciences, Vol. 78, pp. 1258-1277, 2012.
14. Wang Surong, Chia Liang-Tien, Deepu Rajan, "Efficient Image Retrieval Using MPEG-7 Descriptors", Nanyang Technological University, Singapore.
15. Yong Rui, Thomas S. Huang, Shih-Fu Chang, "Image Retrieval: Current Techniques, Promising Directions and Open Issues", 2006.
16. Jens-Rainer Ohm, Heon Jun Kim, Dean S. Messing, "The MPEG-7 Color Descriptors".
17. Peter Stanchev, David Green Jr., Boyan Dimitrov, "MPEG-7: The Multimedia Content Description Interface", International Journal "Information Theories & Applications", Vol. 11.
18. Yu-Len Huang, Kao-Lun Wang, Dar-Ren Chen, "Diagnosis of breast tumors with ultrasonic texture analysis using support vector machines", Neural Computing & Applications, 2006.
19. J. Yue, Z. B. Li, L. Liu, et al., "Content-based image retrieval using color and texture fused features", Mathematical and Computer Modelling, Vol. 54, pp. 1121-1127, 2011.

International Journal of Engineering Research & Technology (IJERT)

ISSN: 2278-0181

Vol. 2 Issue 3, March – 2013

