CBIR Based Brain Tumor Detection

DOI : 10.17577/IJERTV2IS3157


Ms. Sneha S. Lad , Dr. Mrs. S.R. Chougule

BVCOEK, Kolhapur, Maharashtra, India.

Abstract

Content-based image retrieval (CBIR) makes use of image features, such as color and texture, to index images with minimal human intervention. Content-based image retrieval can be used to locate medical images in large databases. This paper introduces a content-based approach to medical image retrieval. Fundamentals of the key components of content-based image retrieval systems are introduced first to give an overview of this area. A case study, which describes the methodology of a CBIR system for retrieving images from a digital mammogram database, is then presented. This paper is intended to disseminate knowledge of the CBIR approach to the applications of medical image management and to attract greater interest from various research communities to rapidly advance research in this field.

1. Introduction to Image Retrieval Systems

An image retrieval system is an effective technique for obtaining the exact image from a collection. Many retrieval systems have been developed, but the problem of retrieving images on the basis of their pixel content remains largely unsolved. A number of querying techniques, such as query by example, semantic retrieval, browsing for example images, navigating customized/hierarchical categories, querying by image region (rather than the entire image), querying by multiple example images, querying by visual sketch, querying by direct specification of image features, and multimodal queries (e.g. combining touch, voice, etc.), can be used to retrieve the exact image. Content comparison can be carried out using image distance measures based on color, shape, and texture.

1.1 Introduction to Content Based Image Retrieval

"Content-based" means that the search analyzes the actual contents of the image rather than metadata such as keywords, tags, or descriptions associated with the image. The term 'content' in this context might refer to colors, shapes, textures, or any other information that can be derived from the image itself. CBIR is desirable because most web-based image search engines rely purely on metadata, and this produces a lot of garbage in the results. Also, having humans manually enter keywords for images in a large database is inefficient, expensive, and may not capture every keyword that describes the image. A system that can filter images based on their content would therefore provide better indexing and return more accurate results.

      Block diagram of Proposed System

2. Content Comparison Techniques

2.1 Color Retrieval

Computing distance measures based on color similarity is achieved by computing a color histogram for each image that identifies the proportion of pixels within an image holding specific values (that humans express as colors). Current research is attempting to segment color proportion by region and by spatial relationship among several color regions.
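As an illustration, the sketch below (a minimal NumPy example, not code from the paper) computes a normalized gray-level histogram for two images and compares them with a correlation coefficient, the same kind of measure used later for matching the query image against the database.

import numpy as np

def gray_histogram(image, bins=256):
    """Normalized histogram of pixel intensities (proportion of pixels per bin)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def correlation(a, b):
    """Correlation between two feature vectors (1.0 means identical shape)."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Hypothetical 8-bit grayscale images; real MR slices would be loaded from files.
query = np.random.randint(0, 256, (128, 128))
candidate = np.random.randint(0, 256, (128, 128))

score = correlation(gray_histogram(query), gray_histogram(candidate))
print("histogram similarity:", score)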

2.2 Texture Retrieval

Texture measures look for visual patterns in images and how they are spatially defined. Textures are represented by texels, which are then placed into a number of sets depending on how many textures are detected in the image. These sets not only define the texture, but also where in the image the texture is located.

2.3 Shape Retrieval

Shape does not refer to the shape of an image but to the shape of a particular region that is being sought out. Shapes are often determined by first applying segmentation or edge detection to an image. Other methods, such as [Tushabe and Wilkinson 2008], use shape filters to identify given shapes of an image. In some cases accurate shape detection requires human intervention, because methods like segmentation are very difficult to completely automate.
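For instance, a very simple shape descriptor can be obtained by thresholding the image and measuring the resulting region. The following NumPy/SciPy sketch is an assumed illustration, not the paper's method: it extracts the area, centroid, and extent of the largest foreground region as crude shape features.

import numpy as np
from scipy import ndimage  # used here for connected-component labelling

def shape_features(image, threshold=128):
    """Return (area, centroid_row, centroid_col, extent) of the largest bright region."""
    mask = image > threshold                      # simple global threshold segmentation
    labels, count = ndimage.label(mask)           # connected components
    if count == 0:
        return (0, 0.0, 0.0, 0.0)
    sizes = ndimage.sum(mask, labels, range(1, count + 1))
    biggest = int(np.argmax(sizes)) + 1
    region = labels == biggest
    rows, cols = np.nonzero(region)
    area = int(region.sum())
    bbox_area = (np.ptp(rows) + 1) * (np.ptp(cols) + 1)
    return (area, rows.mean(), cols.mean(), area / bbox_area)

# Hypothetical image with a bright square "tumor-like" region.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 25:45] = 200
print(shape_features(img))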

Query Techniques

Different implementations of CBIR make use of different types of user queries. Query by example is a query technique that involves providing the CBIR system with an example image that it will then base its search upon. The underlying search algorithms may vary depending on the application, but result images should all share common elements with the provided example.
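A query-by-example search reduces to ranking database images by the distance between their feature vectors and the query's feature vector. The sketch below is a minimal illustration under that assumption; extract_features is a placeholder for whichever descriptors (color, texture, shape) the system actually uses.

import numpy as np

def extract_features(image):
    """Placeholder feature extractor: a normalized intensity histogram."""
    hist, _ = np.histogram(image, bins=64, range=(0, 256))
    return hist / max(hist.sum(), 1)

def query_by_example(query_image, database_images, top_k=5):
    """Rank database images by Euclidean distance to the query in feature space."""
    q = extract_features(query_image)
    distances = [np.linalg.norm(q - extract_features(img)) for img in database_images]
    order = np.argsort(distances)
    return [(int(i), float(distances[i])) for i in order[:top_k]]

# Hypothetical database of random grayscale images.
db = [np.random.randint(0, 256, (128, 128)) for _ in range(20)]
print(query_by_example(db[0], db))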

    1. Semantic retrieval

      The ideal CBIR system from a user perspective would involve what is referred to as semantic retrieval, where the user makes a request like "find pictures of dogs" or even "find pictures of Abraham Lincoln". This type of open-ended task is very difficult for computers to perform – pictures of chihuahuas and Great Danes look very different, and Lincoln may not always be facing the camera or in the same pose. Current CBIR systems therefore generally make use of lower-level features like texture, color, and shape, although some systems take advantage of very common higher-level features like faces (see facial recognition system).

    2. Other query methods

Other query methods include browsing for example images, navigating customized/hierarchical categories, querying by image region (rather than the entire image), querying by multiple example images, querying by visual sketch, querying by direct specification of image features, and multimodal queries (e.g. combining touch, voice, etc.). CBIR systems can also make use of relevance feedback, where the user progressively refines the search results by marking images in the results as "relevant", "not relevant", or "neutral" to the search query, then repeating the search with the new information; a simple way to implement such feedback is sketched below.
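One common way to use relevance feedback is to move the query's feature vector toward the images the user marked relevant and away from those marked not relevant (a Rocchio-style update). This is an assumed illustration, not a technique described in the paper.

import numpy as np

def refine_query(query_vec, relevant, not_relevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style update of a query feature vector from user feedback.

    relevant / not_relevant are lists of feature vectors the user has marked.
    """
    updated = alpha * query_vec
    if relevant:
        updated = updated + beta * np.mean(relevant, axis=0)
    if not_relevant:
        updated = updated - gamma * np.mean(not_relevant, axis=0)
    return updated

# Toy 4-dimensional feature vectors.
q = np.array([0.2, 0.4, 0.1, 0.3])
rel = [np.array([0.3, 0.5, 0.1, 0.1])]
nrel = [np.array([0.9, 0.0, 0.0, 0.1])]
print(refine_query(q, rel, nrel))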

The classification techniques that can be used in the proposed system are:

• Supervised techniques

1. Artificial Neural Networks (ANN)

2. Support Vector Machine (SVM)

3. k-Nearest Neighbors (k-NN)

• Unsupervised techniques

a. Self-Organizing Map (SOM)

We use an ANN as the supervised machine learning technique to classify magnetic resonance (MR) images into three categories: normal, MS, and tumoral.

  1. Feature Extraction

    A. Feature extraction block

A statistical method of examining texture that considers the spatial relationship of pixels is the gray-level co-occurrence matrix (GLCM), also known as the gray-level spatial dependence matrix. The GLCM functions characterize the texture of an image by calculating how often pairs of pixels with specific values and in a specified spatial relationship occur in an image, creating a GLCM, and then extracting statistical measures from this matrix. The feature extraction techniques used are listed below; a small sketch combining them follows the list.

1. GLCM (Gray Level Co-occurrence Matrix)

2. DWT (Discrete Wavelet Transform)

3. Direct variance, etc.
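The sketch below (an assumed illustration using scikit-image >= 0.19 and PyWavelets, not code from the paper) builds a small feature vector from the three listed sources: GLCM statistics, energies of the first-level DWT sub-bands, and the direct intensity variance.

import numpy as np
import pywt                                        # PyWavelets, for the 2-D DWT
from skimage.feature import graycomatrix, graycoprops

def extract_feature_vector(image):
    """Concatenate GLCM statistics, DWT sub-band energies, and intensity variance."""
    img = image.astype(np.uint8)

    # GLCM for horizontally adjacent pixels (distance 1, angle 0).
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, prop)[0, 0]
                  for prop in ("contrast", "energy", "homogeneity", "correlation")]

    # One-level 2-D Haar DWT; use the mean energy of each sub-band as a feature.
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    dwt_feats = [float(np.mean(band ** 2)) for band in (cA, cH, cV, cD)]

    # Direct variance of the raw intensities.
    var_feat = [float(img.var())]

    return np.array(glcm_feats + dwt_feats + var_feat)

# Hypothetical grayscale MR slice.
slice_img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
print(extract_feature_vector(slice_img))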

  2. Feature Reduction

One of the most common forms of dimensionality reduction is principal component analysis (PCA). PCA is appropriate when we have obtained measures on a number of observed variables and wish to develop a smaller number of artificial variables (called principal components) that will account for most of the variance in the observed variables. The principal components may then be used as predictor or criterion variables in subsequent analyses. PCA is also one of the most successful techniques used in image recognition and compression. Given a set of data, PCA finds the linear lower-dimensional representation of the data such that the variance of the reconstructed data is preserved. Using feature reduction based on principal component analysis, reduced feature vectors are calculated from the GLCM features.
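A minimal NumPy sketch of PCA-based feature reduction is shown below: the feature matrix is centered, a singular value decomposition provides the principal directions, and each feature vector is projected onto the first few components. The data here are random placeholders for the GLCM-derived features.

import numpy as np

def pca_reduce(X, n_components=3):
    """Project rows of X (samples x features) onto the top principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean                                  # center the data
    # Rows of Vt are the principal directions, ordered by explained variance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    explained = (S ** 2) / (len(X) - 1)            # variance along each component
    return Xc @ components.T, components, explained[:n_components]

# Hypothetical matrix of 50 images x 9 GLCM/DWT/variance features.
features = np.random.rand(50, 9)
reduced, components, variance = pca_reduce(features, n_components=3)
print(reduced.shape)          # (50, 3)
print(variance)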

  3. MODEL LEARNING

1. Feed Forward Artificial Neural Network (FFANN) Based Classifier

The 500 data points extracted from each subject were used as inputs to the neural network. The output node produced either a 0 or a 1, for control or patient data respectively. Since the nodes in the input layer could take values from a large range, a transfer function was used to transform the data before sending it to the hidden layer, and the result was transformed with another transfer function before being sent to the output layer. In this case, a tan-sigmoid transfer function was used between the input and hidden layers, and a log-sigmoid function was used between the hidden layer and the output layer.

A three-layer neural network was created with 500 nodes in the first (input) layer, 1 to 50 nodes in the hidden layer, and 1 node in the output layer. We varied the number of nodes in the hidden layer in a simulation in order to determine the optimal number of hidden nodes and to avoid overfitting or underfitting the data. Due to hardware limitations, ten nodes in the hidden layer were selected for the final simulation. Figure 2 shows the design of the feed forward neural network used in this research.

Fig. 2. Feed forward neural network
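The sketch below is a NumPy illustration of the architecture described above (500 inputs, 10 tanh hidden nodes, 1 logistic output node), trained with plain gradient descent on placeholder data; it is not the authors' original code.

import numpy as np

rng = np.random.default_rng(0)

# Architecture from the text: 500 inputs, 10 hidden nodes (tan-sigmoid),
# 1 output node (log-sigmoid) giving 0 for control and 1 for patient.
n_in, n_hidden, n_out = 500, 10, 1
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_out))
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)          # tan-sigmoid hidden layer
    y = sigmoid(h @ W2 + b2)          # log-sigmoid output layer
    return h, y

# Placeholder training data: 40 subjects x 500 data points, binary labels.
X = rng.normal(size=(40, n_in))
t = rng.integers(0, 2, size=(40, 1)).astype(float)

lr = 0.05
for epoch in range(500):              # simple batch gradient descent
    h, y = forward(X)
    err = y - t                        # cross-entropy gradient w.r.t. output pre-activation
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)   # backpropagate through the tanh layer
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
print("training accuracy:", float(((pred > 0.5) == t).mean()))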

4. Morphological Image Processing

Binary images may contain numerous imperfections. In particular, the binary regions produced by simple thresholding are distorted by noise and texture. Morphological image processing pursues the goal of removing these imperfections by accounting for the form and structure of the image. These techniques can also be extended to greyscale images.
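As an assumed illustration (not the paper's implementation), the sketch below cleans a thresholded binary mask with morphological opening and closing using SciPy, removing small noise specks and filling small gaps before the tumor region is measured.

import numpy as np
from scipy import ndimage

def clean_mask(image, threshold=128, min_size=20):
    """Threshold an image, then clean the binary mask with opening and closing."""
    mask = image > threshold
    structure = np.ones((3, 3), dtype=bool)             # 3x3 structuring element
    opened = ndimage.binary_opening(mask, structure)     # removes small bright specks
    closed = ndimage.binary_closing(opened, structure)   # fills small dark holes
    # Drop any remaining connected components smaller than min_size pixels.
    labels, count = ndimage.label(closed)
    sizes = ndimage.sum(closed, labels, range(1, count + 1))
    keep = np.zeros(count + 1, dtype=bool)
    keep[1:] = sizes >= min_size
    return keep[labels]

# Hypothetical noisy image with one large bright region.
img = np.random.randint(0, 100, (64, 64))
img[20:45, 20:45] = 220
print(clean_mask(img).sum(), "pixels retained in the cleaned mask")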

Histogram

The Histogram block computes the frequency distribution of the elements in a vector input, of the elements in each channel of a frame-based matrix input, or of the elements in a sample based N-D array.

User-defined function: H = imHistogram();

Using this histogram we find the 2-D correlation coefficient of the query image and each image available in the database. The correlation coefficient is computed as

r = Σ_m Σ_n (A_mn − Ā)(B_mn − B̄) / sqrt( [Σ_m Σ_n (A_mn − Ā)²] · [Σ_m Σ_n (B_mn − B̄)²] )

where Ā = mean2(A) and B̄ = mean2(B) are the mean intensities of images A and B.

Co-occurrence Matrix

Fig. 3.1. Example of Gray Level Co-occurrence Matrix

In the output GLCM, element (1, 1) contains the value 1 because there is only one instance in the input image where two horizontally adjacent pixels have the values 1 and 1, respectively. glcm(1,2) contains the value 2 because there are two instances where two horizontally adjacent pixels have the values 1 and 2. Element (1, 3) in the GLCM has the value 0 because there are no instances of two horizontally adjacent pixels with the values 1 and 3. graycomatrix continues processing the input image, scanning the image for other pixel pairs (i, j) and recording the sums in the corresponding elements of the GLCM.
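The co-occurrence counting described above can be reproduced with a few lines of NumPy. The 4 x 5 input matrix below is an assumed example chosen so that the counts match the description (glcm(1,1) = 1, glcm(1,2) = 2, glcm(1,3) = 0); it is not necessarily the exact image shown in Fig. 3.1.

import numpy as np

def glcm_horizontal(image, levels=8):
    """Count co-occurrences of gray levels (i, j) for horizontally adjacent pixels."""
    glcm = np.zeros((levels, levels), dtype=int)
    for left, right in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        glcm[left - 1, right - 1] += 1     # gray levels are 1-based in the text
    return glcm

# Assumed example image with gray levels 1..8.
I = np.array([[1, 1, 5, 6, 8],
              [2, 3, 5, 7, 1],
              [4, 5, 7, 1, 2],
              [8, 5, 1, 2, 5]])

glcm = glcm_horizontal(I)
print(glcm[0, 0])   # 1  -> element (1, 1)
print(glcm[0, 1])   # 2  -> element (1, 2)
print(glcm[0, 2])   # 0  -> element (1, 3)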

Flowchart:

Simulation Results:

    1. CBIR system based on entropy

7.1) Normal, tumoral and multiple sclerosis images

    2. Tumor Region

    3. Thresholded Image

Conclusion:

In this study, we are developing a medical decision support system that identifies normal images and two specific abnormalities. The medical decision making system has been designed using gray level co-occurrence matrices (GLCM), principal component analysis (PCA), and the support vector machine (SVM) as a supervised learning method, which will help us obtain very promising results in classifying normal images, images with tumor, and images with multiple sclerosis. The benefit of the system is to assist the physician in making the final decision without hesitation. The system can also be utilized for detecting tumors in the whole body, i.e. not only in the brain but also in other organs. This system represents an innovative idea for implementing an efficient system with a powerful algorithm. Just as texture retrieval was used here as one of the CBIR methods to implement the system, other feature retrieval techniques can also be considered for a comparative study. Another major research direction is designing a preprocessing step to assimilate various databases, or different types of images within a database, to make the algorithm more practical.

1. Reference Books:

1. Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing.

2. Reference Papers:

1. Content Based Image Retrieval Approach to Tumor Detection in Human Brain Using Magnetic Resonance Image, 1st International Conference on Recent Trends in Engineering & Technology, Mar. 2012.

2. Fletcher-Heath, L. M., Hall, L. O., Goldgof, D. B., and Murtagh, F. R., 2001. Automatic segmentation of non-enhancing brain tumors in magnetic resonance images, Artificial Intelligence in Medicine 21, pp. 43-63.

3. Smeulders, A. W. M., Worring, M., Santini, S., Gupta, A., and Jain, R., 2000. Content-Based Image Retrieval at the End of the Early Years, IEEE Transactions on Pattern Analysis and Machine Intelligence 22, pp. 1349-1380.

4. El-Dahshan, E. A., Salem, A. B. M., and Younis, T. H., 2009. A hybrid technique for automatic MRI brain images classification, Studia Universitatis Babes-Bolyai, Informatica, Volume LIV.

5. Haralick, R. M., 1979. Statistical and structural approaches to texture, Proceedings of the IEEE, vol. 67, pp. 786-804.
