- Open Access
- Total Downloads : 348
- Authors : Nithya P. V., Mr. Manikandan S.
- Paper ID : IJERTV2IS4142
- Volume & Issue : Volume 02, Issue 04 (April 2013)
- Published (First Online): 20-04-2013
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Energy and Structural Features Based Neural Network Classifier For Glaucomatous Image Classification
Nithya P. V., PG Scholar,
Mr. Manikandan S., Asst. Professor,
Department of Electronics and Communication Engineering, PSN College of Engineering and Technology,
Tirunelveli, Tamil Nadu, India,
Abstract: Glaucoma is the second leading cause of peripheral blindness worldwide and results in neurodegeneration of the optic nerve. For accurate and efficient glaucoma classification, texture features within images are vigorously pursued. To obtain these essential texture features, we consider the energy distribution over wavelet sub-bands. This project investigates the discriminatory potential of wavelet packet tree features obtained from the wavelet filters. The proposed model extracts energy signatures obtained using the 2-D discrete wavelet transform and subjects these signatures to diverse feature ranking and feature selection strategies. To increase the segmentation accuracy, we add structural features computed using the gray-level co-occurrence matrix (GLCM). After efficient feature selection, computational intelligence techniques such as Support Vector Machine (SVM) and Neural Network (NN) based classifiers are used. The performance of the proposed computational techniques is measured using the well-known DRIVE dataset.
Index Terms: Biomedical image processing, classifier, data mining, discrete wavelet transform, feature extraction, glaucoma, gray-level co-occurrence matrix (GLCM), image texture.
INTRODUCTION
Glaucoma is diagnosed clinically from indicators such as the cup-to-disk ratio, rim appearance, and vascular change. An artificial neural network (ANN) model using multifocal visual evoked potential (M-VEP) information from an objective vision perimetry system [2] showed that a model with M-VEP inputs is able to detect eye diseases. Automated clinical decision support systems (CDSSs) have used glaucoma as a predominant case study for decades [3], [4]. Such CDSSs are based on retinal image analysis techniques that extract structural, contextual, or textural features from retinal images to distinguish between normal and diseased samples. In a CDSS, features extracted from the images are categorized as either structural features or texture features. Commonly used structural features include the disk region, disk diameter, rim region, cup region, cup diameter, and cup-to-disk ratio. ANNs have been used to recognize glaucomatous visual field defects and to improve the differentiation between normal and glaucomatous eyes [5]. Proper orthogonal decomposition (POD) is an example of a technique that uses structural features to identify glaucomatous progression [6]. In POD, pixel-level data are used to gauge important changes across samples that are spot or region specific. Texture features are roughly defined as the spatial variation of pixel intensity (gray-scale values) across the image. The use of texture features and higher order spectra (HOS) features for glaucomatous image classification was proposed in [7]. Although texture-based techniques have been demonstrated to be successful, it remains a challenge to create features that recover generalized structural and textural information from retinal images. Structure and texture features based on wavelet transforms (WTs) are frequently employed in image processing to overcome this generalization problem. In the WT, the image is represented in terms of frequency, which provides a framework for the analysis of image features that is independent of scale and frequency-domain properties. Wavelet-Fourier analysis (WFA) for the categorization of neuroanatomic abnormality in glaucoma was proposed in [8] and has achieved significant success. Wavelets are used to extract features and to analyze discontinuities and abrupt changes contained in signals. Wavelet packets are used for texture
classification. Wavelet packets are a WT in which the discrete-time signal is passed through more filters than in the DWT: the wavelet packet transform is applied to both the detail and the approximation results to create a set of features. In image processing, it is common to use the wavelet energy of each wavelet subband in wavelet packet-based texture classification to gauge the discriminatory potential of the texture features obtained from the image [9]. The goal of this study is to automatically classify normal and glaucomatous eye images based on the distribution of average texture features obtained from three prominent wavelet families. The proposed model extracts energy signatures obtained using the 2-D discrete wavelet transform and subjects these signatures to diverse feature ranking and feature selection strategies. To increase the segmentation accuracy, we add structural features computed using the gray-level co-occurrence matrix (GLCM). After efficient feature selection, computational intelligence techniques such as Support Vector Machine (SVM) and Neural Network (NN) based classifiers are used, and their performance is measured on the well-known DRIVE dataset. The paper is organized as follows. Section II describes the dataset used in this paper. Section III gives a detailed explanation of the methodology adopted for this study, including the feature preprocessing, feature ranking, and feature selection schemes that were chosen. Section IV describes the classifiers and classifier parameters employed in our experiments. Section V presents a discussion of the results obtained using the proposed method. Finally, the paper concludes with Section VI.
DATASET
The retinal images used for this study were collected from the Kasturba Medical College, Manipal, India. The doctors in the ophthalmology section of the hospital manually curated the images based on the quality and usability of the samples. All images were taken at a resolution of 560 × 720 pixels and stored in lossless JPEG format. The dataset contains 60 fundus images: 30 normal and 30 open-angle glaucomatous images from persons 20 to 70 years old. A fundus camera, consisting of a microscope and a light source, was used to acquire the retinal images for diagnosis. Fig. 2(a) and (b) shows typical normal and glaucoma fundus images, respectively.
Fig. 2. Typical fundus images. (a) Normal. (b) Glaucoma.
METHODOLOGY
The images in the dataset are subjected to standard histogram equalization. The aim of applying histogram equalization is twofold: to reassign the intensity values of pixels in the input image so that the output image contains a uniform distribution of intensities, and to increase the dynamic range of the histogram of an image. The following feature extraction process was then applied to all images before proceeding to the feature ranking and feature selection schemes.
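For illustration, a minimal preprocessing sketch with OpenCV is shown below. The helper name `preprocess_fundus` is hypothetical, and applying it to the green channel (which the paper introduces later, in the feature preprocessing subsection) is our assumption here.

```python
import cv2

def preprocess_fundus(path):
    """Load a fundus image, take its green channel, and apply standard
    histogram equalization to spread intensities over the full range."""
    bgr = cv2.imread(path)              # OpenCV loads images in BGR order
    green = bgr[:, :, 1]                # green channel of the fundus image
    return cv2.equalizeHist(green)      # uniform intensity distribution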
Discrete Wavelet Transform-Based Energy Features
The DWT captures both the spatial and frequency information of a signal. It analyzes the image by decomposing it into a coarse approximation and into detail information, and this decomposition is performed recursively on the low-pass approximation coefficients obtained at each level.

Fig. 1: Proposed glaucoma detection system. Training and test color fundus images (CFI) are preprocessed, wavelet filter features are extracted and z-score normalized, a random search selects the feature subset, and a neural network is trained and used for prediction to produce the classified output.

Let each image be characterized as a p × q gray-scale matrix I[i, j], where each element of the matrix represents the gray-scale intensity of one pixel of the image. Each non-border pixel has eight adjacent neighboring pixel intensities, and these eight neighbors can be used to traverse the matrix. The resulting coefficients are identical irrespective of whether the matrix is traversed left-to-right or right-to-left. We consider four decomposition directions corresponding to the 0° (horizontal, Dh), 45° (diagonal, Dd), 90° (vertical, Dv), and 135° (diagonal, Dd) orientations. The decomposition structure for one level is illustrated in Fig. 3, where I is the image, g[n] and h[n] are the low-pass and high-pass filters, respectively, and A is the matrix of approximation coefficients. As shown in Fig. 3, the first level of decomposition results in four coefficient matrices, namely A1, Dh1, Dv1, and
Dd1. The number of elements in these matrices is high, but only a single number is required as a representative feature, so averaging is employed to calculate single-valued features. Three features were determined from the DWT coefficients. The average of the absolute detail coefficients of a subband is given by

$$\text{Average}_{Dh1} = \frac{1}{p \times q} \sum_{x=1}^{p} \sum_{y=1}^{q} \left| Dh1(x, y) \right|$$

(and analogously for Dv1), while the averaged energy of the intensity values of a subband is given by

$$\text{Energy} = \frac{1}{p^{2} \times q^{2}} \sum_{x=1}^{p} \sum_{y=1}^{q} \left( D(x, y) \right)^{2}$$
Fig. 3. 2-D-DWT decomposition
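A minimal sketch of this feature extraction with PyWavelets follows. The function name `dwt_energy_features`, the exact normalization constants, and the use of the diagonal subband for the energy feature are assumptions based on the definitions above.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_energy_features(img, wavelet='db3'):
    """One-level 2-D DWT features: l1-norm averages of the detail
    subbands and the normalized subband energy defined above."""
    cA1, (dh1, dv1, dd1) = pywt.dwt2(np.asarray(img, dtype=float), wavelet)
    p, q = dh1.shape
    return {
        'Dh_average': np.abs(dh1).sum() / (p * q),       # horizontal detail
        'Dv_average': np.abs(dv1).sum() / (p * q),       # vertical detail
        'energy': (dd1 ** 2).sum() / (p ** 2 * q ** 2),  # subband energy
    }
```

The same call covers the paper's other wavelet families by passing 'sym3', 'rbio3.3', 'rbio3.5', or 'rbio3.7'.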
Preprocessing of Features
As shown in Table I, 14 features were computed for the normal and glaucomatous image samples; their corresponding distributions across these samples are also shown in the table. It should be noted that the features exhibiting p-values < 0.0001 were chosen for the study. A preprocessing step excludes disease-independent variation from the input images; for preprocessing, we use the green channel of the fundus image.
Normalization of Features
Each of the 14 features is subjected to z-score normalization [10]. In z-score normalization, a sample (vector) consisting of 14 features is converted to zero mean and unit variance. Using the mean and standard deviation of the input vector, each value is normalized as

$$y_{\text{new}} = \frac{y_{\text{old}} - \text{mean}}{\text{std}}$$

where y_old is the original value, y_new is the normalized value, and mean and std are the mean and standard deviation of the original data range, respectively. Fig. 4 shows the z-scored normalized distribution of each of the features across the 30 glaucoma and 30 normal samples used in this study.
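In code, the normalization above amounts to the following sketch (scikit-learn's StandardScaler performs the same transform); the helper name `z_score` is ours.

```python
import numpy as np

def z_score(X):
    """Z-score normalize an (n_samples, n_features) matrix: each of the
    14 features is shifted to zero mean and scaled to unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)
```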
Fig. 4. Distribution of the 14 normalized wavelet features.
TABLE I: WAVELET FEATURES AND CORRESPONDING P-VALUES
Feature Ranking
Feature ranking is a preprocessing step that precedes classification. In this paper, filter-based approaches rank the features based on their discriminatory potential across samples. Since our aim is to estimate the efficacy of the wavelet features, the analysis covers four widely used feature ranking schemes: the chi-squared (χ²) [11], gain ratio [12], and information gain [13] feature evaluation techniques, and the relief feature ranking scheme [14], which is based on an instance-based ranking algorithm. Each of these algorithms is described as follows.
Chi-Squared (χ²) Feature Evaluation:
In this technique, the value of a feature is estimated by computing its χ² statistic, whose computation is tied to the distribution of feature values. The evaluation proceeds in two phases. In the first phase, each feature is sorted according to a significance level (sigLevel), initially set to 0.5, and its values are discretized into intervals. The χ² value is then computed for every pair of adjacent intervals of the feature, and the interval pairs with the lowest χ² value are merged, a process that terminates when the χ² value exceeds the preset sigLevel. Phase two of the feature evaluation is a fine-tuning of the process performed in phase one: once the merging of feature intervals is completed, a consistency check is performed. If any merging of feature i does not pass the previously determined sigLevel(i) for that feature, the feature may not be considered potentially significant and is excluded from further merging. In this way, the features are ranked according to their level of significance.
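The full two-phase Chi2 discretization described above is fairly involved; as a hedged stand-in that captures the spirit of the ranking, the sketch below orders features by their plain χ² statistic using scikit-learn.

```python
import numpy as np
from sklearn.feature_selection import chi2
from sklearn.preprocessing import minmax_scale

def chi2_ranking(X, y):
    """Rank features by their chi-squared statistic (highest first).
    sklearn's chi2 requires non-negative inputs, hence min-max scaling."""
    scores, p_values = chi2(minmax_scale(X), y)
    order = np.argsort(scores)[::-1]
    return order, scores, p_values
```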
Gain Ratio and Information Gain Feature Evaluation: In this paper, we use information gain and gain ratio based methods to rank features. For these techniques, the expected information needed to classify a sample in a dataset S with m classes is given by

$$I(S) = -\sum_{i=1}^{m} p_i \log_2 p_i$$

where p_i is the probability that an arbitrary sample belongs to class C_i and is estimated by S_i/S. The entropy, or expected information, of a feature A having v distinct values is given as

$$E(A) = \sum_{j=1}^{v} \frac{|S_j|}{|S|} \, I(S_j)$$

The information that would be gained by splitting the dataset on attribute A is obtained using the relation

$$\text{InfoGain}(A) = I(S) - E(A)$$

Information gain is normalized by means of a quantity called SplitInfo, obtained for each feature as

$$\text{SplitInfo}(A) = -\sum_{j=1}^{v} \frac{|S_j|}{|S|} \log_2 \frac{|S_j|}{|S|}$$

SplitInfo represents the information generated by splitting the training set S into v partitions corresponding to the v outcomes of a test on feature A. Thus, the gain ratio is defined as

$$\text{GainRatio}(A) = \frac{\text{InfoGain}(A)}{\text{SplitInfo}(A)}$$

The feature with the maximum gain ratio is selected as the feature on which the training set is split.
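These formulas translate directly into code. The sketch below computes the information gain and gain ratio of a single discrete-valued feature; it assumes the feature has already been discretized.

```python
import numpy as np

def entropy(labels):
    """I(S): expected information needed to classify a sample in S."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def gain_ratio(feature, labels):
    """Information gain and gain ratio of one discrete-valued feature."""
    values, counts = np.unique(feature, return_counts=True)
    weights = counts / counts.sum()
    # E(A): weighted entropy after splitting S on feature A
    e_a = sum(w * entropy(labels[feature == v])
              for v, w in zip(values, weights))
    info_gain = entropy(labels) - e_a
    split_info = -(weights * np.log2(weights)).sum()
    return info_gain, info_gain / split_info if split_info > 0 else 0.0
```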
Relief Feature Ranking: The relief algorithm was first proposed in [14] as a feature selection approach and is based on instance-based learning. The algorithm uses a relevancy factor τ, a threshold in the range 0 < τ < 1, to gauge the statistical relevancy of a feature to the target concept. Relief uses two measures, near hit and near miss, to describe the proximity of an instance to a subset of instances that belong to a class: an instance is a near hit of X if it belongs to a close vicinity of X and to the same class as X, and it is considered a near miss if it belongs to the vicinity of X but to a different class. The algorithm repeatedly chooses a triplet of samples, <instance X, its near hit, its near miss>, where the near hit and near miss are selected using the Euclidean distance. Once the near hit and near miss are determined, the feature weight vector W is updated using

$$W_i = W_i - \text{diff}(x_i, \text{nearhit}_i)^2 + \text{diff}(x_i, \text{nearmiss}_i)^2$$

A relevance vector R, derived from the weight vector W by averaging over all sampled triplets, is then used to describe the weight of each feature.
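A minimal two-class Relief sketch implementing the update rule above is given below; the number of sampled triplets `n_iter` and the function name are assumed parameters, not values from the paper.

```python
import numpy as np

def relief(X, y, n_iter=100, seed=0):
    """Two-class Relief sketch following the update rule above.
    X: (n_samples, n_features) array; y: class labels."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        x, label = X[i], y[i]
        dists = np.linalg.norm(X - x, axis=1)  # Euclidean distances
        dists[i] = np.inf                      # exclude the instance itself
        same, other = y == label, y != label
        near_hit = X[np.where(same)[0][np.argmin(dists[same])]]
        near_miss = X[np.where(other)[0][np.argmin(dists[other])]]
        W += -(x - near_hit) ** 2 + (x - near_miss) ** 2
    return W / n_iter  # relevance vector R, averaged over the triplets
```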
Feature Selection
To select a subset of appropriate features, we subject the given set of features to the consistency subset evaluation (CSE) approach. CSE finds combinations of features whose values partition the data into subsets containing a strong single-class majority. Consistency as a measure was first presented in [15] as

$$\text{Consistency}(s) = 1 - \frac{\sum_{i=1}^{J} \left( |D_i| - |M_i| \right)}{N}$$

where s is a feature subset, J is the number of distinct combinations of feature values for s, |D_i| is the number of occurrences of the ith feature-value combination, |M_i| is the cardinality of the majority class for the ith feature-value combination, and N is the number of instances in the dataset. To use the CSE, the dataset's numeric attributes are discretized, and a forward selection search provides a list of ranked features; the rank of a feature is determined by its overall contribution to the consistency of the attribute set. The feature subset selection schemes followed in this paper are part of the WEKA suite and include the random search, genetic search, best first, and greedy stepwise approaches.
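The consistency measure itself is straightforward to compute; the sketch below assumes the subset's features have already been discretized and that the class labels are small non-negative integers.

```python
import numpy as np

def consistency(X_subset, y):
    """Consistency(s) = 1 - sum_i(|D_i| - |M_i|) / N over the distinct
    feature-value combinations of the (discretized) subset X_subset."""
    N = len(y)
    _, inverse = np.unique(X_subset, axis=0, return_inverse=True)
    inconsistency = 0
    for j in np.unique(inverse):
        labels = y[inverse == j]
        # |D_i| - |M_i|: instances minus the majority-class count
        inconsistency += len(labels) - np.bincount(labels).max()
    return 1.0 - inconsistency / N
```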
Best First: The best-first search strategy implementation is based on the beam search algorithm [16]. The rudiments of this algorithm follow the standard hill-climbing approach with backtracking to determine the best-fitting subset of features.
Random Search: In the random search approach, an extensive list of random feature combinations is produced and tested. The subset of features that generates the best accuracy is chosen as the subset that best represents the input set.
Genetic Search: The genetic search technique [17] utilizes neural network feature ranking. This algorithm demands several iterations of feature subset evaluation, each of which includes training a neural network and computing its cost and accuracy.
Greedy Stepwise: Greedy stepwise subset evaluation is performed using a greedy forward or backward search through the feature space; the forward variant is sketched below.
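The sketch adds one feature at a time as long as cross-validated accuracy improves. The stopping rule and the SVM used as the evaluator are our assumptions, not the paper's exact WEKA configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def greedy_forward_selection(X, y, estimator=None, cv=10):
    """Greedy forward search: at each step add the feature that most
    improves cross-validated accuracy; stop when no feature helps."""
    estimator = estimator or SVC(kernel='rbf', gamma=0.28)
    remaining = list(range(X.shape[1]))
    selected, best_score = [], 0.0
    while remaining:
        scores = [(cross_val_score(estimator, X[:, selected + [f]],
                                   y, cv=cv).mean(), f)
                  for f in remaining]
        score, f = max(scores)
        if score <= best_score:
            break                 # no remaining feature improves accuracy
        selected.append(f)
        remaining.remove(f)
        best_score = score
    return selected, best_score
```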
Texture Features:
Co-occurrence Matrices: A co-occurrence matrix captures numerical features of a texture using the spatial relations of similar gray tones. Statistical features computed from the co-occurrence matrix can be used to represent, evaluate, and categorize textures. A subset of typical features is derivable from a normalized co-occurrence matrix, in which p(i, j) is the (i, j)th entry of the gray-tone spatial-dependence matrix and Ng is the number of distinct gray levels in the quantized image.
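A typical way to obtain such GLCM features is via scikit-image, as sketched below. The quantization to eight gray levels and the particular property set are assumptions; Haralick defines many more statistics.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(gray_img, levels=8):
    """Structural texture features from a normalized GLCM at distance 1
    for the four standard orientations (0, 45, 90, 135 degrees)."""
    img = (gray_img / 256 * levels).astype(np.uint8)   # quantize to Ng levels
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()       # average over angles
            for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}
```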
CLASSIFIERS USED TO VALIDATE THE FEATURES

| Classifier | Classifier Setting |
| --- | --- |
| LibSVM-(1) | C-Support Vector Classifier (C-SVC) type SVM with a radial basis function kernel |
| SMO-(1) | Polynomial kernel with exponent exp = 2.5 |
| SMO-(2) | Pearson VII function based universal kernel, with omega = 0.1 and sigma = 0.1; without normalization |
| Random Forest | 10 trees |
| Neural Network | Feed-forward neural network classifier trained with the backpropagation algorithm |
CLASSIFIER SETTINGS
We validated the ranked features and feature subsets using the standard C-SVC implementation of the SVM, SMO, random forest, neural network, and naïve Bayes classifiers. The SVM employs the radial basis function kernel with a preset gamma value of 0.28. John C. Platt's SMO algorithm trains the SVM [18]; the SMO implementation uses the polynomial kernel with the exponent set to 2.5. The number of trees in the random forest algorithm is set to 10. The naïve Bayes classifier is set to use a kernel function to approximate the distribution of the data. The classifier settings are determined through repeated trials on the training set, until a classification accuracy of 100% is obtained on the training set.
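Approximate scikit-learn equivalents of these settings are sketched below. WEKA's Pearson VII (PUK) kernel has no built-in scikit-learn counterpart, the polynomial exponent of 2.5 is rounded to an integer degree, and GaussianNB stands in for the kernel-density naïve Bayes, so this is only an approximation of the configuration described above.

```python
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

classifiers = {
    'LibSVM-(1)': SVC(kernel='rbf', gamma=0.28),    # C-SVC with RBF kernel
    'SMO-(1)': SVC(kernel='poly', degree=3),        # WEKA exponent 2.5; sklearn takes an integer degree
    'Random Forest': RandomForestClassifier(n_estimators=10),   # 10 trees
    'Neural Network': MLPClassifier(solver='sgd', max_iter=2000),  # backprop-trained feed-forward net
    'Naive Bayes': GaussianNB(),                    # stand-in for kernel-density NB
}
```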
EXPERIMENTAL RESULTS
Feature Ranking and Feature Selection
Table II provides a snapshot of the results obtained from both the feature ranking and feature selection schemes described in the methodology section. The ranking algorithms include χ², gain ratio, info gain, and the relief algorithm. Table II also shows the position of the features chosen using the four consistency subset evaluation strategies: best first, random search, genetic, and greedy search. The marked cells of the table denote the chosen features, and the number in each marked cell gives the rank of the corresponding feature.
Classification
Once the features are subjected to feature ranking and/or feature selection, we perform both tenfold cross-validation and 60:40 split tests. Both tests are carried out on the entire 60-sample dataset. In the tenfold cross-validation method, the dataset is split into ten parts; in the first iteration, nine parts are used for training and the remaining part is used for testing. This process is repeated ten times, using a different part for testing in each iteration, and the results of the iterations are averaged to obtain an overall accuracy. The 60:40 split test gives the classification accuracy obtained when 60% of the samples in the dataset are chosen randomly for training and the classifier is tested against the remaining 40% of the samples, which make up the test set. In the case of the 60:40 split, all classifiers except the random forest classifier achieved the highest accuracy of 95.83%. We can conclude that the CSE feature selection method does help in obtaining the highest accuracy using fewer features, thereby simplifying the implementation of the technique. For further corroboration, we conducted a sensitivity and specificity analysis on 42 training samples belonging to the classes glaucoma and normal, each consisting of 21 samples. The test set consisted of 18 samples: nine belonging to the class glaucoma and nine to the class normal. Nayak et al. used structural features in a neural network, such as the cup-to-disc (c/d) ratio, the ratio of the distance between the optic disc center and the optic nerve head to the diameter of the optic disc, and the ratio of the blood vessel area on the inferior-superior side to that on the nasal-temporal side. Their method detected glaucoma with a sensitivity of 100% and a specificity of 80%. This result implies that although the system detects all subjects with glaucoma accurately, it identifies only 80% of the normal subjects as normal. With our proposed technique using wavelet-based texture features, we were able to obtain a higher accuracy of 96.33%. Finally, we compared our results with those obtained using HOS and texture features to enable a direct comparison of accuracy. To establish a baseline analysis of the features used in this study, we estimated the consistency of individual wavelet features by performing standard sensitivity and specificity analysis on independent training and test sets. The classifiers used for this analysis are described in the classifier settings section. We carried out tenfold cross-validation of individual wavelet features on the entire set of 60 samples. The results of this experiment show that the SMO-(2) classifier performs consistently well using any wavelet-based feature, and the results obtained using the random forest classifier are almost on par. When the optic nerve is injured by glaucoma, many of the individual fibers in the nerve are lost and the optic nerve becomes excavated. As glaucoma progresses, more optic nerve tissue is lost and the optic cup grows. These changes make the fundus images obtained from glaucoma patients different from those obtained from normal subjects.
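A sketch of the two validation protocols (tenfold cross-validation and the 60:40 split) using scikit-learn follows. The helper name `evaluate` and the stratified split are assumptions; `clf` is any classifier from the settings table, and X is the 60 × 14 normalized feature matrix with binary labels y.

```python
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import accuracy_score

def evaluate(clf, X, y):
    """Tenfold cross-validated accuracy plus a 60:40 split-test accuracy."""
    cv_accuracy = cross_val_score(clf, X, y, cv=10).mean()
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.60, stratify=y, random_state=0)
    split_accuracy = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    return cv_accuracy, split_accuracy
```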
TABLE II: FEATURE RANKING

OVERALL ACCURACY (%) OF INDIVIDUAL WAVELET FEATURES

| Features | LibSVM-(1) | SMO-(1) | SMO-(2) | Random Forest | Neural Network |
| --- | --- | --- | --- | --- | --- |
| Db3 | 90.00 | 88.63 | 93.33 | 85.40 | 96.33 |
| Sym3 | 87.98 | 86.99 | 93.32 | 86.96 | 89.15 |
| Rbio3.3 | 88.32 | 85.54 | 90.89 | 88.98 | 92.45 |
| Rbio3.5 | 88.19 | 87.64 | 89.96 | 90.17 | 94.27 |
| Rbio3.7 | 91.12 | 88.63 | 92.59 | 85.39 | 96.33 |
CONCLUSION
This study illustrates the dependence between featues extracted using three wavelet filters that have been subjected to various feature ranking and feature selection methods. The rank subsets of chosen features have been fed to a set of sorting algorithms to gauge the efficiency of these features. From the accuracies obtain and contrasted, we can terminate that the power get from the exhaustive coefficients can be used to distinguish between normal and glaucomatous images with very high precision. As observed the db3-
Dp_Average_l1_Norm and the rbio3.3-cD_Energy features are highly prejudiced. Furthermore, from that both LibSVM_(1) and SMO_(2) present the highest accuracy of 96.33%.
REFERENCES
- R. Varma et al., "Disease progression and the need for neuroprotection in glaucoma management," Am. J. Manag. Care, vol. 14, pp. S15–S19, 2008.
- S. Weiss et al., "A model-based method for computer-aided medical decision-making," Artif. Intell., vol. 11, pp. 145–172, 1978.
- D. Bizios, A. Heijl, and B. Bengtsson, "Trained artificial neural network for glaucoma diagnosis using visual field data: A comparison with conventional algorithms," J. Glaucoma, vol. 16, no. 1, pp. 20–28, Jan. 2007.
- M. Balasubramanian et al., "Clinical evaluation of the proper orthogonal decomposition framework for detecting glaucomatous changes in human subjects," Invest. Ophthalmol. Vis. Sci., vol. 51, pp. 264–271, 2010.
- U. R. Acharya, S. Dua, X. Du, V. S. Sree, and C. K. Chua, "Automated diagnosis of glaucoma using texture and higher order spectra features," IEEE Trans. Inf. Technol. Biomed., vol. 15, no. 3, pp. 449–455, May 2011.
- [8] E. A. Essock, Y. Zheng, and P. Gunvant, "Analysis of GDx-VCC polarimetry data by wavelet-Fourier analysis across glaucoma stages," Invest. Ophthalmol. Vis. Sci., vol. 46, pp. 2838–2847, Aug. 2005.
- K. Huang and S. Aviyente, "Wavelet feature selection for image classification," IEEE Trans. Image Process., vol. 17, no. 9, pp. 1709–1720, Sep. 2008.
- M. H. Dunham, Data Mining: Introductory and Advanced Topics. NJ: Prentice Hall, 2002.
- H. Liu and R. Setiono, "Chi2: Feature selection and discretization of numeric attributes," in Proc. IEEE 7th Int. Conf. Tools with Artif. Intell., 1995, pp. 388–391.
- J. R. Quinlan, "Induction of decision trees," Mach. Learning, vol. 1, pp. 81–106, 1986.
- J. R. Quinlan, C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann, 1993.
- K. Kira and L. A. Rendell, "A practical approach to feature selection," in Proc. 9th Int. Workshop Mach. Learning, San Francisco, CA, 1992, pp. 249–256.
- D. Furcy and S. Koenig, "Limited discrepancy beam search," in Proc. Int. Joint Conf. Artif. Intell., 2005, pp. 125–131.
- D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Boston, MA: Addison-Wesley, 1989.
- S. S. Keerthi et al., "Improvements to Platt's SMO algorithm for SVM classifier design," Neural Comput., vol. 13, pp. 637–649, 2001.