- Authors : Shyma Mol I , Indu M. G.
- Paper ID : IJERTCONV3IS05007
- Volume & Issue : NCETET – 2015 (Volume 3 – Issue 05)
- Published (First Online): 24-04-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Novel Method for Automatic Tuberculosis Detection using Chest Radiographs
Shyma Mol I
PG Scholar, Dept. of ECE TKM Institute of Technology Kollam, India
Indu M. G.
Assistant Professor, Dept. of ECE TKM Institute of Technology Kollam, India
Abstract: Tuberculosis (TB) is a widespread infectious disease caused by Mycobacterium tuberculosis. It mainly affects the lungs and spreads through the air. Chest X-ray (CXR), microscopic examination and microbiological culture of body fluids are used for the diagnosis of TB. In active pulmonary TB, consolidations or cavities are seen in the lungs. Current diagnosis is slow and often unreliable. In this work, an automated approach for detecting TB in conventional posteroanterior chest radiographs is considered. First, the input CXR is pre-processed and the lung region is extracted using a level set based segmentation method. Then, a set of features is computed from the extracted lung region. Using these features, the X-rays are classified as TB affected or normal by a Probabilistic Neural Network. Automatic diagnostic methods require less effort and save time.
Keywords: Automatic detection, Tuberculosis, level set based segmentation, Probabilistic Neural Network, GLCM, texture feature estimation.
I. INTRODUCTION
Tuberculosis is the second leading cause of death from an infectious disease worldwide. With about one-third of the world's population having latent TB, and an estimated nine million new cases occurring every year, TB continues to be a major global health problem. TB is caused by the bacillus Mycobacterium tuberculosis, which typically affects the lungs. It spreads through the air when people with active TB cough, sneeze, or otherwise expel infectious bacteria. Opportunistic infections in immunocompromised HIV/AIDS patients and multi-drug-resistant bacterial strains have complicated TB control, and diagnosing tuberculosis remains a challenge. When left undiagnosed and untreated, mortality rates of patients with tuberculosis are high. The increasing appearance of multi-drug-resistant TB has further created an urgent need for a cost-effective screening technology to monitor progress during treatment.
In the lungs, the tuberculosis bacteria destroy tissue. This in turn results in infection and the generation of sputum inside the lungs, which produces cavities and cloud-like structures in the lung region of the chest X-ray (CXR) of a TB affected person. So, the presence of cavities or cloud-like structures indicates that the person is TB affected.
Standard diagnostics still rely on methods developed in the last century, in which an expert manually checks for the presence of TB. These methods are slow and sometimes unreliable. Chest radiography is one of the most reliable methods for TB detection, and the introduction of digital radiography has made it easier to develop automated systems that detect abnormalities related to tuberculosis in chest radiographs. So, an automated approach for detecting tuberculosis in conventional posteroanterior chest radiographs can be implemented, and it is of great importance. The objective of this work is to automatically detect tuberculosis (TB) from conventional posteroanterior chest radiographs (CXR) using a level set based segmentation method and a Probabilistic Neural Network.
Various methods are available for detecting lung nodules and lung cancers. A pattern recognition method is used for lung nodule detection from chest radiographs [11], and it offers good accuracy in identifying spherical blobs in the lungs. Computer-aided diagnosis of lung cancer [12] detects cancerous regions using a rule-based analysis technique. Most of the existing methods are either complex in nature or insufficient for cloud detection. Taking all these disadvantages into consideration, a new method for automatic TB detection is considered.
The process starts with the usual preprocessing steps, which are done to improve the quality of the input digital chest X-ray image. For detecting tuberculosis infection, only the lung region should be analyzed, but the chest X-ray usually includes the entire chest area of the patient. So, the lung region should be extracted, or segmented, from the input CXR to avoid false TB detection in the regions outside the lungs. Level set based segmentation can be used for the lung region extraction. Then, using appropriate features extracted from both normal and TB affected CXRs, the classifier can be trained. A Probabilistic Neural Network (PNN) is used as the classifier in this system. The trained PNN can be used to detect the presence of TB in test images.
This paper is organized as follows. Section I gives an introduction to the work and explains its relevance; the existing techniques and their problems are also discussed here. Section II explains the methodology of this work. Section III describes the data set used for simulation and discusses the simulation results in detail. Finally, conclusions are drawn in Section IV.
II. METHODOLOGY
The automatic TB detection system consists of preprocessing, segmentation, feature extraction and classification blocks, as already explained. The basic block diagram for automatic tuberculosis detection is shown in Figure 1.
Fig. 1. System Block Diagram
A. Preprocessing
Medical images are susceptible to noise, so a preprocessing step is often performed for noise removal and image enhancement. The type of filter to be applied depends on the type of noise in the input image. The major noises affecting digital chest X-rays are Gaussian noise and salt-and-pepper noise. Salt-and-pepper noise is also called data drop-out noise or impulse noise. Here, the noise is caused by errors in data transmission: corrupted pixels are either set to the maximum value or to zero, while unaffected pixels remain unchanged. The noise is usually quantified by the percentage of pixels which are corrupted. This noise can be removed by a median filter, in which the center pixel is replaced by the median value of the pixels covered by the filter mask. As a result, the median filter is less likely to smooth edges.
Gaussian noise has a probability density function, or normalized histogram, given by

$ p(a) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(a-\mu)^2}{2\sigma^2} \right) $   (1)

where a is the gray value, µ is the average gray value and σ is its standard deviation. Approximately 70% of the pixel values lie in the range [µ − σ, µ + σ]. Gaussian noise comes from many natural sources, such as the thermal vibrations of atoms in antennas (referred to as thermal noise) and black body radiation from warm objects. Gaussian noise can be removed from the CXR by using a Gaussian filter. Histogram equalization can then be used to enhance the filtered image. Histogram equalization is a technique for adjusting image intensities to enhance contrast: the image contrast is maximized by applying a gray level transform which tries to flatten the resulting histogram, the transform being a scaled version of the cumulative histogram of the original image.
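As a rough illustration of this preprocessing stage (the paper's experiments used MATLAB R2010b; this sketch uses Python with NumPy/SciPy, and the 3 × 3 median window and sigma value are illustrative assumptions, not values from the paper):

```python
import numpy as np
from scipy import ndimage

def preprocess_cxr(img):
    """Denoise a grayscale CXR (2-D uint8 array) and enhance contrast."""
    # Median filter suppresses salt-and-pepper (impulse) noise
    # while preserving edges better than linear smoothing.
    img = ndimage.median_filter(img, size=3)

    # Gaussian filter suppresses Gaussian (thermal) noise.
    img = ndimage.gaussian_filter(img.astype(float), sigma=1.0)

    # Histogram equalization: map each gray level through the scaled
    # cumulative distribution, which tends to flatten the histogram.
    img = np.clip(img, 0, 255).astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (255 * cdf[img]).astype(np.uint8)
```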
B. Level set based segmentation
The level set based segmentation method can be used to accurately extract objects from images. It is a numerical technique for tracking interfaces and shapes, first introduced by Osher and Sethian in 1987 to capture moving fronts. Here, a surface intersects with a plane, and that intersection gives a contour. The basic idea of the level set method is to represent contours as the zero level set of an implicit function defined in a higher dimension, usually referred to as the level set function $\phi$, and to evolve the level set function according to a partial differential equation (PDE). In typical PDE methods, images are assumed to be continuous functions sampled on a grid. Active contours were introduced in order to segment objects in images using dynamic curves, and geometric active contour models are typically derived using the Euler-Lagrange equation. The evolution equation of the level set function can be written in the general form

$ \frac{\partial \phi}{\partial t} + F |\nabla \phi| = 0 $   (2)

which is called the level set equation. The function F is called the speed function; for image segmentation, F depends on the image data and the level set function $\phi$. The advantage of the level set method is that one can perform numerical computations involving curves and surfaces on a fixed Cartesian grid without having to parameterize the object. The level set method also provides mathematical and computational tools for tracking evolving interfaces with sharp corners and topological changes. It efficiently computes optimal robust paths around obstacles and extracts clinically useful features from the images.
First, a contour is defined over the object to be detected in the input image, and this is initialized as the first level set. On the basis of this, an initial region is generated. Then, by iteration, the level set contour is converged. The contour obtained after a sufficient number of iterations gives the object boundary, and this boundary contour can be used to extract the object from the input image.
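A minimal sketch of this evolution, assuming a constant, image-derived speed F and a simple explicit Euler update of the level set equation (2); practical implementations add curvature terms and reinitialization, which are omitted here:

```python
import numpy as np

def evolve_level_set(phi, speed, dt=0.1, n_iter=200):
    """Evolve phi by the level set PDE  d(phi)/dt = -F * |grad(phi)|.

    phi:   2-D array, signed-distance-like function whose zero level
           set is the current contour (negative inside).
    speed: 2-D array F, e.g. derived from image edges so the front
           slows near the lung boundary (an assumption here, not the
           paper's exact speed function).
    """
    for _ in range(n_iter):
        gy, gx = np.gradient(phi)
        grad_norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
        phi = phi - dt * speed * grad_norm
    return phi

# The segmented region is where phi < 0 after convergence, e.g.:
# mask = evolve_level_set(phi0, F, n_iter=200) < 0
```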
C. Feature extraction
Several features are extracted for classifying CXRs with and without TB. They include histogram-based features and other types of features.
1) Intensity Histogram: A histogram is a graph showing the number of pixels in an image at each intensity value. For an 8-bit grayscale image there are 256 (i.e., $2^8$) possible intensities, so the histogram graphically displays the distribution of pixels among those 256 grayscale values. The mean and variance can be computed from the intensity histogram. Let f be a given image represented as an r × c matrix of integer pixel intensities ranging from 0 to L − 1, where L is the number of possible intensity values, often 256, and let p(k) be the normalized histogram. The mean can be computed as

$ \mu = \sum_{k=0}^{L-1} k \, p(k) $   (3)

The variance of the image can be mathematically defined as

$ \sigma^2 = E\left[ (f(i,j) - \mu)^2 \right] $   (4)

where σ² is the image variance and µ is the mean value of the image.
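In code, these two histogram features reduce to a few lines (a small NumPy sketch, assuming an 8-bit image):

```python
import numpy as np

def histogram_mean_variance(img, levels=256):
    """Mean and variance of an image from its normalized histogram."""
    hist = np.bincount(img.ravel(), minlength=levels)
    p = hist / hist.sum()               # normalized histogram p(k)
    k = np.arange(levels)
    mean = np.sum(k * p)                # eq. (3)
    var = np.sum((k - mean) ** 2 * p)   # eq. (4)
    return mean, var
```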
2) Gray Level Co-occurrence Matrix (GLCM): The Gray Level Co-occurrence Matrix (GLCM) is a matrix whose number of rows and columns is equal to the number of gray levels, G, in the image; i.e., the GLCM is a G × G matrix. The GLCM functions characterize the texture of an image by calculating how often a pair of pixel values in a specified spatial relationship occurs in the image. So, the GLCM can be defined as a statistical method of examining texture utilizing the spatial relationship of the pixels. This creates the GLCM, and then statistical measures are extracted from this matrix. Gray level co-occurrence matrix generation for an image is shown in Figure 2. Since the GLCM is calculated by checking how often a pixel with the intensity (gray-level) value i occurs in a specific spatial relationship to a pixel with the value j, each element of the matrix at cell (i, j) can be expressed as $P_{ij}$.
Fig. 2. Gray Level Co-occurrence matrix generation of an image
Various texture features can be calculated from the GLCM. Contrast, correlation, energy and homogeneity are used here. They can be calculated using the mathematical expressions

$ \text{Contrast} = \sum_{i,j} P_{ij} (i - j)^2 $   (5)

$ \text{Correlation} = \sum_{i,j} P_{ij} \, \frac{(i - \mu)(j - \mu)}{\sigma^2} $   (6)

$ \text{Energy} = \sum_{i,j} P_{ij}^2 $   (7)

where $P_{ij}$ is element (i, j) of the normalized GLCM, N is the number of gray levels in the image, µ is the GLCM mean (an estimate of the intensity of all pixels in the relationships that contributed to the GLCM), and σ² is the variance of the intensities of all reference pixels in the relationships that contributed to the GLCM.
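The following sketch builds a GLCM for one offset and computes the texture features of equations (5)-(7); the horizontal offset, 8-level quantization and 8-bit input are illustrative assumptions:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Contrast, correlation and energy from a normalized GLCM."""
    # Quantize the uint8 image to a small number of gray levels,
    # as is usual before building a GLCM.
    q = img.astype(int) * levels // 256
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    # Count co-occurrences of value pairs at the given offset.
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    P = glcm / glcm.sum()

    i, j = np.indices((levels, levels))
    mu = np.sum(i * P)                       # GLCM mean
    var = np.sum((i - mu) ** 2 * P)          # GLCM variance
    contrast = np.sum(P * (i - j) ** 2)                  # eq. (5)
    correlation = np.sum(P * (i - mu) * (j - mu)) / var  # eq. (6)
    energy = np.sum(P ** 2)                              # eq. (7)
    return contrast, correlation, energy
```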
3) Shape Descriptor: In general, descriptors are sets of numbers or angles that describe a given shape. A few basic descriptors in the context of an image are explained below. The first is the area, which gives the total number of pixels comprising the shape. The perimeter is the number of pixels on the boundary of the object in the image. Eccentricity describes the ratio of the length of the longest chord of the shape to the longest chord perpendicular to it. The last is the orientation, which gives the overall direction of the shape.
Of these, orientation and area are selected as the shape descriptors. To define the orientation, the Hessian matrix of the input image is considered; its elements consist of the second order derivatives of the input image. The eigenvalues of the Hessian matrix are then found, and the largest and smallest eigenvalues, $\lambda_1$ and $\lambda_2$, are used to find the orientation:

$ \theta = \tan^{-1}\left( \frac{\lambda_1}{\lambda_2} \right) $   (8)
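One plausible reading of this descriptor in code; aggregating the per-pixel Hessian over the region into a single 2 × 2 matrix is an assumption made here for illustration, not a detail given in the paper:

```python
import numpy as np

def shape_orientation_and_area(mask, img):
    """Area of a binary region and an orientation per eq. (8)."""
    area = int(mask.sum())        # number of pixels in the shape

    # Second-order derivatives of the image (entries of the Hessian).
    gy, gx = np.gradient(img.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)

    # Average the Hessian over the region to get one 2x2 matrix.
    H = np.array([[gxx[mask].mean(), gxy[mask].mean()],
                  [gyx[mask].mean(), gyy[mask].mean()]])
    lam = np.linalg.eigvalsh(H)   # eigenvalues in ascending order
    theta = np.arctan(lam[1] / (lam[0] + 1e-12))   # eq. (8)
    return area, theta
```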
4) Edge Descriptor: Edges in images are areas with strong intensity contrast, i.e., a jump in intensity from one pixel to the next. Edge detection significantly reduces the amount of data and filters out useless information, while preserving the important structural properties of an image. The Canny edge detection algorithm is known as an optimal edge detector. The Canny edge detector first smooths the image to eliminate noise. It then finds the image gradient to highlight regions with high spatial derivatives. The algorithm then tracks along these regions and suppresses any pixel that is not at the maximum (non-maximum suppression). The gradient array is further reduced by hysteresis, which tracks along the remaining pixels that have not been suppressed. Hysteresis uses two thresholds: if the magnitude is below the low threshold, the pixel is set to zero (made a non-edge); if the magnitude is above the high threshold, it is made an edge; and if the magnitude lies between the two thresholds, it is set to zero unless there is a path from this pixel to a pixel with a gradient above the high threshold.
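A brief sketch of extracting a single edge-based feature with an off-the-shelf Canny detector (scikit-image here; the smoothing sigma is an assumption):

```python
import numpy as np
from skimage.feature import canny

def edge_feature(lung_img):
    """Mean of the Canny edge map, used as a scalar edge descriptor."""
    edges = canny(lung_img.astype(float), sigma=2.0)  # boolean edge map
    return edges.mean()    # fraction of pixels marked as edges
```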
5) Energy Feature: The discrete wavelet transform (DWT) is used to extract the energy features of the extracted lung region. The block diagram of the DWT is shown in Figure 3. The discrete wavelet transform uses low-pass and high-pass filters, H(z) and G(z), to expand a digital signal. The coefficients c1(n) are produced by the low-pass filter, h(n), and are known as coarse or approximation coefficients. The d1(n) coefficients are produced by the high-pass filter and are known as detail coefficients. Coarse coefficients provide information about low frequencies, and detail coefficients provide information about high frequencies. Coarse and detail coefficients are produced at multiple scales by iterating the process on the coarse coefficients of each scale; the entire process is computed using a tree-structured filter bank.
Fig. 3. DWT Block Diagram
Each subband provides different information about the image. When the high-pass filter is applied to an image, the strong variations in gray level between adjacent pixels are captured, so edges can be found. When the low-pass filter is applied, only the smooth variations between adjacent pixels remain, so few or no edges are generated and the content stays close to the original image (it is displayed as the approximation image). The energy features are found by taking the square of the LL (approximation) coefficient values; to obtain a single-valued feature, the average of the energy values thus obtained is taken.
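A sketch of this energy feature using the PyWavelets library (the Haar wavelet and a single decomposition level are assumptions, since the paper does not state them):

```python
import numpy as np
import pywt

def dwt_energy_feature(lung_img):
    """Average energy of the LL (approximation) subband."""
    cA, (cH, cV, cD) = pywt.dwt2(lung_img.astype(float), 'haar')
    return np.mean(cA ** 2)    # mean of squared LL coefficients
```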
D. Classification
A Probabilistic Neural Network (PNN) is used as the classifier in this work. Probabilistic networks perform classification where the target variable is categorical. Although the implementation is very different, probabilistic neural networks are conceptually similar to k-Nearest Neighbour (k-NN) models: the basic idea is that the predicted target value of an item is likely to be about the same as for other items that have close values of the predictor variables. Basically, a PNN consists of an input layer, which represents the input pattern or feature vector. The input layer is fully interconnected with the hidden layer, which consists of the training set for the PNN. Finally, an output layer represents each of the possible classes into which the input data can be classified; however, the hidden layer is not fully interconnected to the output layer.
Another important element of the PNN is the output layer and the determination of the class which the input best fits. This is done through a 'winner-takes-all' approach: the output class node with the largest activation represents the winning class. While the class nodes are connected only to the hidden nodes of their class, the input feature vector connects to all training examples and therefore influences their activations. Therefore, the sum of the training vector activations determines the class of the input feature vector.
In the PNN algorithm, calculating the class-node activations is a simple process. For each class node, the example vector activations are summed, where each hidden node activation is simply the product of the example vector E and the input feature vector F:

$ h_i = E_i \cdot F $   (9)

The class output activations are then defined as

$ c_j = \frac{1}{N} \sum_{i=1}^{N} \exp\left( \frac{h_i - 1}{\sigma^2} \right) $   (10)

where N is the total number of example vectors for this class, $h_i$ is the hidden-node activation, and σ is a smoothing factor. Given an unknown input vector, the hidden node activations are computed and then summed at the output layer; the class node with the largest activation determines the class to which the input feature vector belongs. As no training is required, classifying an input vector is fast, depending only on the number of classes and example vectors present. It is also very easy to add new examples to the network by simply adding a new hidden node whose output is used by the particular class node; this can be done dynamically as new classified examples are found. The PNN also generalizes very well, even in the context of noisy data.
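A compact sketch of this classifier following equations (9) and (10); feature vectors are assumed to be normalized to unit length so that the dot product behaves as a similarity:

```python
import numpy as np

def pnn_classify(train_X, train_y, x, sigma=0.5):
    """Probabilistic neural network per eqs. (9) and (10).

    train_X: (n_examples, n_features) unit-normalized training vectors.
    train_y: class label of each training vector.
    x:       unit-normalized input feature vector.
    sigma:   smoothing factor of the Gaussian activation (assumed value).
    """
    scores = {}
    for cls in np.unique(train_y):
        E = train_X[train_y == cls]
        h = E @ x                                           # eq. (9)
        scores[cls] = np.mean(np.exp((h - 1) / sigma ** 2))  # eq. (10)
    # Winner-takes-all: the class with the largest summed activation.
    return max(scores, key=scores.get)
```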
The PNN is a direct continuation of the work on Bayes classifiers; more precisely, the PNN can be interpreted as a function which approximates the probability density of the underlying distribution. The PNN consists of nodes allocated in three layers after the input layer: the pattern layer, the summation layer and the output layer.
Pattern Layer: There is one pattern node for each training example. Each pattern node forms a product of the input vector and its weight vector, where the weights entering a node come from a particular training example; the product is then passed through the activation function.
Summation Layer: Each summation node receives the outputs from the pattern nodes associated with a given class.
Output Layer: The output nodes are binary neurons that produce the classification decision. The only factor that needs to be selected for training is the smoothing factor, that is, the deviation of the Gaussian functions.
The working of the PNN can be explained simply as follows. First, the input features are converted to a feature vector and given to the PNN. The PNN computes distances from the input vector to the training input vectors and produces a vector whose elements indicate closeness; from this, a vector of probabilities is generated. The maximum of these probabilities is found, and the value 1 is assigned to that class and 0 to the other classes. The PNN thus detects the class of the testing image by selecting the class having value '1'.
III. EXPERIMENTAL RESULTS AND DISCUSSION
While using any medical image in a process, a preprocessing step should be performed to remove noise and to enhance the image; the same is done for the chest X-ray (CXR). Then, segmentation using the level set based method and classification using the Probabilistic Neural Network were done. Simulation was performed using MATLAB R2010b.
A. Input Images
The publicly available Montgomery County (MC) set is used in this work. It is a representative subset of a larger CXR repository collected over many years. This standard digital image database for tuberculosis was created by the National Library of Medicine in collaboration with the Department of Health and Human Services, Montgomery County, Maryland, USA. The X-rays were collected under Montgomery County's tuberculosis screening program.
For the images in the dataset, ground-truth radiology reports, confirmed by clinical tests, patient history, etc., are also available. The images are of high resolution, with dimensions of 4020 × 4892 or 4892 × 4020 pixels. CXRs of a normal person and a TB affected person are shown in Figures 4 and 5 respectively.
Fig. 4. CXR of a normal person. Fig. 5. CXR of a TB affected person.
The CXR of a TB patient contains cloud-like structures which are readily identifiable in the image. This cloud is formed by the generation of sputum inside the lungs of TB patients due to the infection. Usually, in TB patients, the cloud forms in the apex or lower region of the lungs, and it spreads according to the amount of infection.
B. Preprocessing
Filtering is performed to remove noise from the input image. Low-pass, Gaussian and median filters were used. The effect of filtering on an image can be analyzed by comparing the histogram, i.e., the intensity plot, of the input image with that of the filtered image. There was no significant change in the image after filtering, which was confirmed by analyzing the histograms of the image before and after filtering. So, the inference of this step is that the images available in the database are noise free.
Fig. 6. Image after histogram equalization
The next step is image enhancement. Histogram equalization is done to enhance the contrast of the image. The input CXR after histogram equalization is shown in Figure 6. The contrast of the chest X-ray image is improved by applying histogram equalization.
C. Segmentation
In the segmentation phase, the lung region is extracted from the CXR using the level set based segmentation method. This step is done to avoid false detection of tuberculosis and thereby improve the performance of the system. If the chest X-ray image were used in the feature extraction and classification phases without segmentation, the system would analyze the whole image and could incorrectly detect the bone area as a TB affected region.
The initial contour is first defined over the lung region. This contour is then converged to the lung boundary through iterations. Boundary detection can be optimized by selecting an appropriate number of iterations for level set convergence; different iteration counts were tried to find adequate boundary detection for almost all images in the database. The initial contour is shown in Figure 7, and the level set function convergence after 200 iterations is shown in Figure 8.
Fig. 7. Initial contour. Fig. 8. Contour after 200 iterations.
In this work, an iteration count of 200 was selected, since it gives an almost accurate boundary detection. This contour is then used to extract the lung region from the input image. The extracted lung region is shown in Figure 9.
Fig. 9. Segmented image
D. Feature extraction
Various features were extracted for the classification phase, and the selection of appropriate features affects the performance of the system. First, the mean and variance of the intensity histogram were considered; these two features had different sets of values for TB affected and normal CXRs. Then, the variance of the gradient of the CXR was considered. The edge-based feature was extracted by using the Canny edge detector, with the mean of the edge-detected image taken as the feature. The GLCM-based features and energy features were also extracted for the classification phase: the features extracted from the gray level co-occurrence matrix were contrast, correlation, energy and homogeneity, and the energy feature was found using the discrete wavelet transform. For the training stage, six images were considered: three normal and three abnormal CXRs.
Most of the feature values showed considerable differences between normal CXRs and the CXRs of TB affected patients. Intensity mean, intensity variance, gradient variance, area, correlation from the GLCM, energy from the GLCM and the average value of the DWT coefficients showed clearly discriminable values for normal and TB affected CXRs. So, these features are adequate for training the classifier.
E. Classification
The Probabilistic Neural Network was used to classify an input CXR as normal or TB affected. First, the PNN was trained using six images: the features of three normal images and three TB affected images were extracted. The output vector for each feature of the normal images was given the value '1'; for the TB affected CXRs, the output vector was assigned the value '2'. These input feature vectors and the predefined output vectors were used to train the PNN. In the testing phase, the same features were extracted from the testing images, and the trained PNN was recalled to check whether the testing CXR is TB affected or not. The performance of the classifier depends on the testing images and the features selected. By extracting the above-mentioned features and considering six training images, the classifier was able to correctly classify most of the testing images.
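Putting the pieces together, the training and testing flow described above might look as follows; the helper functions are the hypothetical sketches from Section II (not the paper's actual code), the feature subset shown is illustrative, and labels 1 and 2 follow the paper's convention:

```python
import numpy as np

def feature_vector(lung_img):
    """Stack a subset of the Section II features into one vector."""
    m, v = histogram_mean_variance(lung_img)
    contrast, corr, energy = glcm_features(lung_img)
    feats = np.array([m, v, edge_feature(lung_img),
                      contrast, corr, energy,
                      dwt_energy_feature(lung_img)])
    return feats / np.linalg.norm(feats)   # unit-normalize for the PNN

# Training: three normal (label 1) and three TB affected (label 2) CXRs.
# train_X = np.stack([feature_vector(img) for img in training_lungs])
# train_y = np.array([1, 1, 1, 2, 2, 2])
# Testing a segmented lung region:
# label = pnn_classify(train_X, train_y, feature_vector(test_lung))
```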
There is no possibility of misclassifying lung injuries as TB affected portions, because lung injuries are not visible in a chest X-ray. X-ray images show hard tissues only. Blood vessels or blood stains are not visible in X-rays, as they are formed of very soft tissue; blood vessels can be made visible only by injecting a dye.
IV. CONCLUSION
The CXR of a TB affected person contains cloud-like structures. The manual detection of TB from CXRs demands experts and is often slow; an automatic method of TB detection resolves these problems. To optimize the performance of the system, lung region extraction from the input CXR was performed using a level set based segmentation method, so the other portions and body parts in the input CXR are eliminated. This reduces the possibility of false detection by the classifier. Various features from the extracted lung region can then be used to train a Probabilistic Neural Network, and the same features of a test image can be given to the trained classifier to detect whether that person is TB affected or not. This system needs an expert's assistance only in the training stage; once training is over, the system is able to detect TB automatically. The work concentrated on detecting the presence of TB from chest X-rays. Apart from TB detection, analysis of the amount of infection would help in deciding further treatment for the patient, so the grading of TB can be considered as an enhancement to this system.
REFERENCES
[1] Centers for Disease Control and Prevention, "Questions and Answers about TB," 2014. [Online]. Available: http://www.cdc.gov/tb/
[2] Centers for Disease Control and Prevention, "Chapter 2: Transmission and Pathogenesis of Tuberculosis." [Online]. Available: www.cdc.gov/tb/education/corecurr/pdf/chapter2.pdf
[3] Khalil I. Jassam, "Removal of random noise from conventional digital X-ray images." [Online]. Available: www.isprs.org/proceedings/XXIX/congress/part5/113XXIX – part5X.pdf
[4] G. N. Sarage and Sagar Jambhorkar, "Enhancement of chest X-ray images using filtering techniques," IJARCSSE, vol. 2, issue 5, May 2012.
[5] Thomas Brox and Joachim Weickert, "Level set based image segmentation with multiple regions," Springer, pp. 415-423, Aug. 2004.
[6] Satish Kumar, Neural Networks: A Classroom Approach, Tata McGraw-Hill Education, 2004.
[7] Anil K. Jain and Jianchang Mao, "Artificial neural networks: a tutorial," March 1996. [Online]. Available: www.cogsci.ucsd.edu/ajyu/Teaching/Cogs202sp12/Readings/jainann96.pdf
[8] Khalid Isa, "Probabilistic Neural Network (PNN) Algorithm," Underwater Robotics Research Group.
[9] Stefan Jaeger, Alexandros Karargyris, Sameer Antani, and George Thoma, "Detecting tuberculosis in radiographs using combined lung masks," IEEE International Conference, Sept. 2012.
[10] Alexandros Karargyris, Sameer Antani, and George Thoma, "Segmenting anatomy in chest X-rays for tuberculosis screening," IEEE International Conference, Sept. 2011.
[11] S. Jaeger, S. Antani, and G. Thoma, "Tuberculosis screening of chest radiographs," SPIE Newsroom, 2011.
[12] S. Kakeda, J. Moriya, H. Sato, T. Aoki, H. Watanabe, H. Nakata, N. Oda, S. Katsuragawa, K. Yamamoto, and K. Doi, "Improved detection of lung nodules on chest radiographs using a commercial computer-aided diagnosis system," Am. J. Roentgenol. (AJR), vol. 182, no. 2, pp. 505-510, 2004.
[13] Bram van Ginneken, Bart M. ter Haar Romeny, and Max A. Viergever, "Computer-aided diagnosis in chest radiography: a survey," IEEE Transactions on Medical Imaging, vol. 20, no. 12, Dec. 2001.
[14] Erik L. Ritman, "Medical X-ray images: current status and some future challenges," JCPDS-International Centre for Diffraction Data, ISSN 1097-0002, 2006.