- Open Access
- Authors : Nidhi
- Paper ID : IJERTV3IS10289
- Volume & Issue : Volume 03, Issue 01 (January 2014)
- Published (First Online): 17-01-2014
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Lung Image Segmentation Using Rotation Invariance and Template Matching
Nidhi
Assistant Professor, NIT Kurukshetra
Abstract
This work uses rotation invariance and gray-scale invariance as the basis for template matching to identify nodules of various sizes and textures. The structural textures so obtained are used to compute the statistical feature called variance, which provides efficient segmentation of lung nodules and helps in clear visualization of nodule boundaries, which is important for doctors analyzing the effects of the disease. The segmented image showed all the nodules clearly, but nodules that are benign cannot be separated or identified by segmentation alone. To identify the detected nodules, templates of different sizes were defined to recognize nodules of a particular size and texture. The LBP variance descriptor provided the texture, and LBP rotation invariance allowed nodules to be detected irrespective of the orientation of the input image.
Introduction
Many methods have been devised for the detection of lung cancer. However, the concept of the rotation invariant linear binary pattern has not been used in combination with template matching for the detection of nodules in lung images; the only paper published in this context appeared in June 2011. Research in this direction has begun, and more groups are trying to develop an efficient method for lung nodule detection, owing to the high robustness of the LBP method and characteristics such as rotation and scale invariance.
Proposed Method
The process comprises the following steps, each of which is described in detail in the next section:
- Data Acquisition
- Feature Extraction
- Data Conversion
- Lung Segmentation
- Segmentation of nodule
  - Uniform Linear Binary Patterns
  - Rotation invariant Binary Patterns
  - Uniform Linear Binary Variance Operator
  - Rotation invariant Linear Binary Pattern Variance Operator
- Template Matching
  - Parenchymal nodule (15 mm, 18 mm, 20 mm)
  - Juxtapleural nodule (15 mm)
- Evaluation
  - Sensitivity and Specificity
  - Accuracy
DATA ACQUISITION
For the purpose of the present work, Database of lung images of was downloaded from Lung Image Database Consortium [28], it is a publicly available. The database so obtained contains DICOM data series 1.3.6.1.4.1.9328.50.3.0022
with each series comprising of series of images showing lung at different thickness. The database was divided into groups according to following:
Slice Thickness: 1.3 mm,1.5 mm, 2.0 mm, 2.5
mm ,3.0 mm.
Slice Length: From each set of available thickness, slice length was observed for which the lungs were visible in images as suggested by the Dr Khandelewal form PGI whom we consulted for helping us find the nodule present in images and suggesting how to discriminate it from other false lights present in image.
Fig 12: Flowchart of Complete Process
The table below presents the studied data, from which the best values were chosen to cover the maximum number of images with a full lung view.
Table 1: Database Distribution

| Patient-ID | Slice thickness (mm) | Slice length, false start images (mm) | Slice length, lung start (mm) | Slice length, false end images (mm) |
| 1.3.6.1.4.1.9328.50.3.0025 | 1.3 | -25.7 | -90.7 | -163.2 |
| 1.3.6.1.4.1.9328.50.3.0026 | 1.3 | -5.8 | -59.8 | -243.5 |
| 1.3.6.1.4.1.9328.50.3.0034 | 1.5 | -351.9 | -427.6 | -478.6 |
| 1.3.6.1.4.1.9328.50.3.0041 | 2.0 | -99.6 | -128.40 | -246 |
| 1.3.6.1.4.1.9328.50.3.0042 | 2.0 | 1022 | 856 | 249 |
| 1.3.6.1.4.1.9328.50.3.0043 | 2.0 | -39 | -249.60 | -250 |
| 1.3.6.1.4.1.9328.50.3.0044 | 2.0 | -108.0 | -167 | -290 |
| 1.3.6.1.4.1.9328.50.3.0045 | 2.0 | -32 | -113 | -194 |
| 1.3.6.1.4.1.9328.50.3.0046 | 2.5 | 13.8 | -132 | -214 |
| 1.3.6.1.4.1.9328.50.3.0047 | 2.5 | 40.4 | -123 | -232 |
| 1.3.6.1.4.1.9328.50.3.0048 | 2.5 | 36.2 | -101.8 | -149.8 |
| 1.3.6.1.4.1.9328.50.3.0049 | 2.5 | -6.2 | -138 | -220 |
| 1.3.6.1.4.1.9328.50.3.0050 | 2.5 | -22.8 | -113.2 | -169.2 |
| 1.3.6.1.4.1.9328.50.3.0031 | 3.0 | 1692 | 1587 | 1500 |
| 1.3.6.1.4.1.9328.50.3.0032 | 3.0 | -43.0 | -148.0 | -226.0 |
| 1.3.6.1.4.1.9328.50.3.0033 | 3.0 | 1647 | 1515.5 | 1434.5 |
The figures below show typical samples of normal, benign and cancerous lung images for different subjects, respectively.
Table 2: Values of Dicom Header Used for Selecting Images for further Processing
IMAGE CONVERSION
The DICOM images so obtained are converted to JPEG format using lossless conversion. The images produced by the initial MATLAB conversion were found to be blurred, so each image is instead saved as a lossless JPEG to retain the quality and information present in the image.
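The conversion step above can be sketched in Python. The snippet below is an illustrative equivalent of the MATLAB workflow, not the authors' code: the rescaling helper maps the raw CT pixel range to 8-bit, and the commented lines show a hypothetical read/write path assuming the pydicom and Pillow libraries.

```python
import numpy as np

def to_uint8(pixels, lo=None, hi=None):
    """Linearly rescale a raw pixel array to the 0-255 range for image export."""
    pixels = pixels.astype(np.float64)
    lo = pixels.min() if lo is None else lo
    hi = pixels.max() if hi is None else hi
    scaled = (pixels - lo) / max(hi - lo, 1e-12)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

# Reading a DICOM slice and writing an image would look roughly like
# (pydicom/Pillow assumed; "slice.dcm" is a hypothetical filename):
#   import pydicom
#   from PIL import Image
#   ds = pydicom.dcmread("slice.dcm")
#   Image.fromarray(to_uint8(ds.pixel_array)).save("slice.jpg", quality=100)
```

Only the rescaling step is MATLAB-independent; the exact lossless-JPEG options depend on the toolchain used.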
LUNG SEGMENTATION
Dr Khandelwal's first suggestion was to remove the mediastinum from the lungs, as it is difficult to determine malignancy in this region even for medical professionals. Following this, we first separated the lungs from the background and removed the central part.
LUNG SEGMENTATION ALGORITHM FOR IMAGE RETRIEVAL
The algorithm proceeds in the following stages: the image is thresholded to separate low-density tissue (e.g. lungs) from fat; the surrounding air, identified as low-density tissue, is removed; cleaning is performed to remove noise using imfill(); a lung mask is built; and finally the lungs are extracted.

1. Optimal Thresholding: The first step is thresholding the image. A CT image contains two main groups of pixels: 1) high-intensity pixels located in the body (body pixels), and 2) low-intensity pixels in the lungs and the surrounding air (non-body pixels). Due to the large difference in intensity between these two groups, thresholding leads to a good separation. Every pixel with an intensity higher than 80 is set to 0 (body pixels) and the other pixels are set to 1 (non-body pixels).

Fig 13: Segmentation steps: (a) Original (b) Thresholding point using histogram (c) Binary image (d) Noise removal (e) Lung mask (f) Lung extraction

2. Background Removal: The air around the body (background) is removed. Background pixels are identified as follows: they are non-body pixels connected to the borders of the image. Thus, every connected region of non-body pixels that touches the border is considered background and discarded.

3. Cleaning: Once the background is removed, several non-body regions remain. Holes present in the binary image so formed are cleared using the imfill() function of MATLAB.

4. Lung Mask: To extract the lungs from the background, a lung mask is created:
   a. Labeling: Labels are created for all the objects present in the image with respect to the area of each object.
   b. Sorting: The objects so obtained are sorted by size in decreasing order, with the first two objects being the largest.
   c. Blob Extraction: The objects obtained after sorting are the Binary Large Objects (BLOBs) and represent the area covered by the lungs. This gives the outline of the lungs.

5. Lung Extraction: The BLOBs representing the lungs are then subtracted from the original image to provide the lungs for further processing.
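The segmentation stages above can be sketched with NumPy/SciPy. This is an illustrative Python translation of the MATLAB pipeline, not the original code: the threshold of 80 is taken from the text, while the SciPy functions (ndimage.label as an equivalent of labeling, binary_fill_holes as an equivalent of imfill) are assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_lungs(ct, threshold=80):
    """Threshold, remove border-connected air, fill holes, keep the two largest blobs."""
    # 1. Threshold: low-intensity (non-body) pixels become 1, body pixels 0.
    binary = ct < threshold
    # 2. Background removal: discard non-body regions touching the image border.
    labels, _ = ndimage.label(binary)
    border_labels = (set(labels[0, :]) | set(labels[-1, :]) |
                     set(labels[:, 0]) | set(labels[:, -1]))
    for lab in border_labels:
        if lab != 0:
            binary[labels == lab] = False
    # 3. Cleaning: fill holes (MATLAB's imfill equivalent).
    binary = ndimage.binary_fill_holes(binary)
    # 4. Lung mask: label the remaining blobs, sort by area, keep the two largest.
    labels, n = ndimage.label(binary)
    if n == 0:
        return np.zeros_like(ct)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    keep = np.argsort(sizes)[::-1][:2] + 1
    mask = np.isin(labels, keep)
    # 5. Lung extraction: apply the mask to the original image.
    return np.where(mask, ct, 0)
```

The two largest blobs are assumed to be the left and right lungs, matching the sorting step described above.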
FEATURE EXTRACTION
- Uniform Linear Binary Patterns
- Rotation invariant Binary Patterns
- Uniform Linear Binary Variance Operator
- Rotation invariant Linear Binary Pattern Variance Operator

Introduction to Linear Binary Pattern
Local Binary Pattern (LBP) is a simple yet very efficient texture operator which labels the pixels of an image by thresholding the neighborhood of each pixel and considering the result as a binary number. Due to its discriminative power and computational simplicity, the LBP texture operator has become a popular approach in various applications. It can be seen as a unifying approach to the traditionally divergent statistical and structural models of texture analysis. Perhaps the most important property of the LBP operator in real-world applications is its robustness to monotonic gray-scale changes caused, for example, by illumination variations. Another important property is its computational simplicity, which makes it possible to analyze images in challenging real-time settings.
The basic idea behind the LBP operator is that two-dimensional surface textures can be described by two complementary measures: local spatial patterns and gray-scale contrast. The original LBP operator (Ojala et al. 1996) forms labels for the image pixels by thresholding the 3×3 neighborhood of each pixel with the center value and considering the result as a binary number. The histogram of these 2^8 = 256 different labels can then be used as a texture descriptor. This operator, used jointly with a simple local contrast measure, provided very good performance in unsupervised texture segmentation (Ojala and Pietikäinen 1999). Since then, many related approaches have been developed for texture and color texture segmentation.
Fig14: Converting Square Neighborhood of image into circular neighborhood
The LBP operator was extended to use neighborhoods of different sizes (Ojala et al. 2002). Using a circular neighborhood and bilinear interpolation of values at non-integer pixel coordinates allows any radius and any number of pixels in the neighborhood. The gray-scale variance of the local neighborhood can be used as a complementary contrast measure. In the following, the notation (P, R) is used for pixel neighborhoods, meaning P sampling points on a circle of radius R. See Fig. 15 for an example of LBP computation.
Fig 15: An example of LBP computation.
Quantification of weights: Weights increase the difference in values of possible patterns and thus enable easy identification of various micro textons.
Fig 16: Example of computing LBP and Contrast a) Sample b) thresholding w.r.t central pixel c) weights used for calculation of LBP
The example clearly shows how each 3×3 neighborhood is thresholded using g_c (the central pixel), which in the case discussed has value 6. All pixels with a value greater than or equal to 6 are marked 1 and the rest are marked 0. The LBP code is then calculated by summing the weights of the pixels marked 1, using the specified weight vector. Contrast is measured as the difference between the mean of the pixel values marked 1 and the mean of those marked 0.
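The worked example above can be reproduced in a few lines. The sketch below is an illustrative Python version, not the paper's MATLAB code; the clockwise visiting order and the convention that a neighbor equal to the center is marked 1 are assumptions.

```python
import numpy as np

def lbp_3x3(patch):
    """Basic 3x3 LBP: threshold the neighbors against the center, weight by powers of 2."""
    center = patch[1, 1]
    # Neighbors visited clockwise from the top-left corner, weights 1, 2, 4, ..., 128.
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for weight, (r, c) in enumerate(offsets):
        if patch[r, c] >= center:   # neighbor >= center is marked 1
            code += 1 << weight
    return code
```

A flat patch yields the all-ones pattern (code 255), one of the uniform patterns discussed below.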
Uniform Linear Binary Operator
Uniform patterns can be used to reduce the length of the feature vector and implement a simple rotation-invariant descriptor. Some binary patterns occur more commonly in texture images than others. A local binary pattern is called uniform if it contains at most two bitwise transitions from 0 to 1 or vice versa when the bit pattern is traversed circularly. For example, the patterns 00000000 (0 transitions), 01110000 (2 transitions) and 11001111 (2 transitions) are uniform, whereas the patterns 11001001 (4 transitions) and 01010010 (6 transitions) are not. In the computation of the LBP labels, uniform patterns are used so that there is a separate label for each uniform pattern and all the non-uniform patterns share a single label. For example, when using the (8, R) neighborhood, there are a total of 256 patterns, 58 of which are uniform, which yields 59 different labels. Ojala et al. (2002) noticed in their experiments with texture images that uniform patterns account for a little less than 90% of all patterns when using the (8, 1) neighborhood and for around 70% in the (16, 2) neighborhood. Each bin (LBP code) can be regarded as a micro-texton. Local primitives codified by these bins include different types of curved edges, spots, flat areas, etc.

Fig 17: Micro-textons detected by uniform LBP

The following notation is used for the LBP operator: LBP^{u2}_{P,R}. The subscript indicates using the operator in a (P, R) neighborhood; the superscript u2 stands for using only uniform patterns and labeling all remaining patterns with a single label. After the LBP-labeled image f_l(x, y) has been obtained, the LBP histogram can be defined as

H_i = Σ_{x,y} I{ f_l(x, y) = i },  i = 0, ..., n − 1,

where n is the number of different labels produced by the LBP operator, and I{A} is 1 if A is true and 0 if A is false.
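The uniformity test described above, at most two circular 0/1 transitions, is easy to express directly on the 8-bit codes. The helper below is an illustrative Python sketch (the compact relabeling scheme is an implementation choice, not taken from the paper):

```python
def transitions(pattern, bits=8):
    """Count circular 0/1 transitions in a P-bit pattern (the uniformity measure)."""
    b = [(pattern >> i) & 1 for i in range(bits)]
    return sum(b[i] != b[(i + 1) % bits] for i in range(bits))

def u2_label(pattern, bits=8):
    """u2 mapping: each uniform code keeps its own label; non-uniform codes share one."""
    if transitions(pattern, bits) <= 2:
        return pattern   # one distinct label per uniform pattern (58 of them for P=8)
    return -1            # single "miscellaneous" label for all non-uniform patterns
```

For P = 8 this reproduces the 58 uniform patterns (and hence 59 labels) quoted in the text.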
Rotation Invariant Uniform Binary Operator
The LBP_{P,R} operator produces 2^P different output values, corresponding to the 2^P different binary patterns that can be formed by the P pixels in the neighbor set. When the image is rotated, the gray values g_p correspondingly move along the perimeter of the circle around g_c. Since g_0 is always assigned to be the gray value of the element at (0, R), to the right of g_c, rotating a particular binary pattern naturally results in a different LBP_{P,R} value. This does not apply to patterns comprising only 0s (or only 1s), which remain constant at all rotation angles. To remove the effect of rotation, i.e., to assign a unique identifier to each rotation invariant local binary pattern, Ojala et al. define:

LBP^{ri}_{P,R} = min{ ROR(LBP_{P,R}, i) | i = 0, 1, ..., P − 1 },

where ROR(x, i) performs a circular bit-wise right shift on the P-bit number x, i times.
Fig 18: The 36 unique rotation invariant binary patterns that can occur in the circularly symmetric neighbor set of LBP^{ri}_{8,R}. Black and white circles correspond to bit values of 0 and 1 in the 8-bit output of the operator. The first row contains the nine uniform patterns, and the numbers inside them correspond to their unique LBP^{riu2}_{8,R} codes.

LBP^{ri}_{P,R} quantifies the occurrence statistics of individual rotation invariant patterns corresponding to certain micro-features in the image; hence, the patterns can be considered as feature detectors. Fig. 18 illustrates the 36 unique rotation invariant local binary patterns that can occur in the case of P = 8, i.e., LBP^{ri}_{8,R} can have 36 different values. For example, pattern #0 detects bright spots, #8 dark spots and flat areas, and #4 edges. If we set R = 1, LBP^{ri}_{8,1} corresponds to the gray-scale and rotation invariant operator designated LBPROT in [29].

LBPROT as such does not provide very good discrimination, as also concluded in [29]. There are two reasons: the occurrence frequencies of the 36 individual patterns incorporated in LBPROT vary greatly, and the quantization of the angular space at 45° intervals is crude. It has been observed that certain local binary patterns are fundamental properties of texture, providing the vast majority, sometimes over 90 percent, of all 3×3 patterns present in the observed textures. These fundamental patterns are called uniform as they have one thing in common: a uniform circular structure that contains very few spatial transitions. The uniform patterns are illustrated in the first row of Fig. 18. They function as templates for microstructures such as a bright spot (0), a flat area or dark spot (8), and edges of varying positive and negative curvature (1-7).

To formally define the uniform patterns, a uniformity measure U(pattern) is introduced, which corresponds to the number of spatial transitions (bitwise 0/1 changes) in the pattern. For example, the patterns 00000000_2 and 11111111_2 have a U value of 0, while the other seven patterns in the first row of Fig. 18 have a U value of 2, as there are exactly two 0/1 transitions in each pattern. Similarly, the other 27 patterns have a U value of at least 4. Patterns that have a U value of at most 2 are designated uniform, and the following operator is proposed for gray-scale and rotation invariant texture description instead of LBP^{ri}_{P,R}:

LBP^{riu2}_{P,R} = Σ_{p=0}^{P−1} s(g_p − g_c)  if U(LBP_{P,R}) ≤ 2,  and  P + 1  otherwise,

where s(x) = 1 if x ≥ 0 and 0 otherwise. The superscript riu2 reflects the use of rotation invariant uniform patterns that have a U value of at most 2. By definition, exactly P + 1 uniform binary patterns can occur in a circularly symmetric neighbor set of P pixels. The equation assigns a unique label to each of them, corresponding to the number of 1 bits in the pattern (0 to P), while the non-uniform patterns are grouped under the miscellaneous label (P + 1). In Fig. 18, the labels of the uniform patterns are denoted inside the patterns. In practice, the mapping from LBP_{P,R} to LBP^{riu2}_{P,R}, which has P + 2 distinct output values, is best implemented with a lookup table of 2^P elements.

The final texture feature employed in texture analysis is the histogram of the operator outputs (i.e., pattern labels) accumulated over a texture sample. The reason the histogram of uniform patterns provides better discrimination than the histogram of all individual patterns comes down to differences in their statistical properties: the relative proportion of non-uniform patterns among all patterns accumulated into a histogram is so small that their probabilities cannot be estimated reliably.
Fig 19: All patterns shown are rotated variants of the same LBP pattern and are therefore assigned the same label (15).
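The riu2 mapping and the ROR operator can be sketched as follows. This is an illustrative Python version for P = 8; the bit ordering within the byte is an arbitrary implementation choice.

```python
def ror(x, i, bits=8):
    """Circular bit-wise right shift of a P-bit number x, i times (the ROR operator)."""
    i %= bits
    return ((x >> i) | (x << (bits - i))) & ((1 << bits) - 1)

def lbp_riu2(pattern, bits=8):
    """riu2 label: number of 1 bits if the pattern is uniform (U <= 2), else P + 1."""
    b = [(pattern >> i) & 1 for i in range(bits)]
    u = sum(b[i] != b[(i + 1) % bits] for i in range(bits))  # uniformity measure U
    if u <= 2:
        return sum(b)        # labels 0..P for the P + 1 uniform patterns
    return bits + 1          # miscellaneous label for all non-uniform patterns
```

Taking the minimum of a code over all rotations (min over ror(p, i)) yields the LBP^{ri} canonical form; for P = 8 there are exactly 36 such distinct values, matching Fig. 18.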
Uniform Linear Binary Variance Operator
Rotation invariant variance measure (VAR): A rotation invariant measure of the local variance can be defined as

VAR_{P,R} = (1/P) Σ_{p=0}^{P−1} (g_p − μ)²,  where  μ = (1/P) Σ_{p=0}^{P−1} g_p.
Since LBP_{P,R} and VAR_{P,R} are complementary, their joint distribution LBP_{P,R}/VAR_{P,R} can characterize the local texture of the image better than LBP_{P,R} alone. Although Ojala et al. proposed to use only the joint distribution LBP^{riu2}_{P,R}/VAR_{P,R} of LBP^{riu2}_{P,R} and VAR_{P,R}, other types of patterns, such as LBP^{u2}_{P,R}, can also be used jointly with VAR_{P,R}. However, LBP^{u2}_{P,R} is not rotation invariant and has a higher dimensionality. In practice, the same (P, R) values are used for LBP^{riu2}_{P,R} and VAR_{P,R}.
LBP variance (LBPV): The joint distribution LBP_{P,R}/VAR_{P,R} is powerful because it exploits the complementary information of local spatial pattern and local contrast [26]. However, VAR_{P,R} has continuous values and must be quantized. This can be done by first calculating feature distributions from all training images to obtain a total distribution and then, to guarantee the highest quantization resolution, computing threshold values that partition the total distribution into N bins with an equal number of entries. These threshold values are then used to quantize the VAR of the test images.

There are three particular limitations to this quantization procedure. First, it requires a training stage to determine the threshold value for each bin. Second, because different classes of textures may have very different contrasts, the quantization depends on the training samples. Last, there is an important parameter, the number of bins, to be preset: too few bins fail to provide enough discriminative information, while too many bins may lead to sparse and unstable histograms and make the feature size too large. Although there are some rules to guide the selection [26], it is hard to obtain a number of bins that is optimal in terms of both accuracy and feature size.

The LBPV descriptor offers a solution to these problems of the LBP_{P,R}/VAR_{P,R} descriptor. LBPV is a simplified but efficient joint LBP and contrast distribution method. As can be seen from the histogram equation, the calculation of the LBP histogram H does not involve the variance VAR_{P,R}: whatever the LBP variance of the local region, the histogram calculation assigns the same weight 1 to each LBP pattern. In fact, the variance is related to the texture feature: high-frequency texture regions usually have higher variances, and they contribute more to the discrimination of texture images. Therefore, the variance VAR_{P,R} can be used as an adaptive weight to adjust the contribution of each LBP code in the histogram calculation:

LBPV_{P,R}(k) = Σ_{x,y} w(LBP_{P,R}(x, y), k),  where  w(LBP_{P,R}(x, y), k) = VAR_{P,R}(x, y) if LBP_{P,R}(x, y) = k, and 0 otherwise.
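As a sketch of this adaptive weighting idea: the LBPV histogram accumulates each pixel's local variance into the bin of its LBP code instead of adding 1. The Python below is illustrative; the per-pixel maps `codes` and `variances` are assumed to have been computed by an LBP/VAR pass such as the ones sketched earlier.

```python
import numpy as np

def local_variance(neighbors):
    """VAR_{P,R}: variance of the P circular neighbors of a pixel."""
    g = np.asarray(neighbors, dtype=float)
    return float(np.mean((g - g.mean()) ** 2))

def lbpv_histogram(codes, variances, n_bins):
    """LBPV: weight each pixel's LBP bin by its local variance rather than by 1."""
    hist = np.zeros(n_bins)
    for code, var in zip(codes.ravel(), variances.ravel()):
        hist[code] += var
    return hist
```

Replacing `var` with 1 in the loop recovers the plain LBP histogram, which makes the difference between the two descriptors explicit.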
Classification
Classification implies the assignment of an object (image) to one of the predefined classes. Classification consists of learning and recognition phases. In the first, features are extracted from a set of texture images with known class labels, each class being characterized by its prototype feature vector. Then, in the recognition phase, a feature vector of the test image is calculated and one of the known classifiers is used to assign the image to the class it matches best. Classification is closely related to the following three concepts. By recognition we mean the identification of an image among a set of test images. Clustering distributes images into groups of similar images. Segmentation is the partitioning of an image into a set of regions with similar visual properties. Any classification requires a set of features that permits discrimination between images of different types, so the problem of establishing an adequate set of characteristics is of great practical importance. The techniques of feature extraction for texture description and analysis can be divided into four major groups: statistical, model-based, signal processing and structural methods (Tuceryan and Jain, 1993). Once the features of the images are selected, classification can be done by means of one of several known methods (Chen et al., 1996; Duda et al., 2001; Fukunaga, 1990; Young and Fu, 1986).
3.5.1 Nearest Neighbor Classifier
Among the various methods of supervised statistical pattern recognition, the Nearest Neighbor rule achieves consistently high performance, without a priori assumptions about the distributions from which the training examples are drawn. It involves a training set of both positive and negative cases. A new sample is classified by calculating the distance to the nearest training case; the sign of that point then determines the classification of the sample. The k-NN classifier extends this idea by taking the k nearest points and assigning the sign of the majority. It is common to select k small and odd to break ties (typically 1, 3 or 5). Larger k values help reduce the effects of noisy points within the training data set, and the choice of k is often performed through cross-validation [30].
There are many techniques available for improving the performance and speed of nearest neighbor classification. One approach is to pre-sort the training set in some way (such as kd-trees or Voronoi cells). Another is to choose a subset of the training data such that classification by the 1-NN rule (using the subset) approximates the Bayes classifier. This can result in significant speed improvements, as k can now be limited to 1 and redundant data points have been removed from the training set. These data modification techniques can also improve performance by removing points that cause misclassifications. Several dataset reduction techniques are discussed in the section on target detection.
The above discussion focuses on binary classification problems, where there are only two possible output classes. In the digit recognition example there are ten output classes, which changes things slightly. The labeling of training samples and the computation of distances are unchanged, but ties can now occur even with k odd. If all of the k nearest neighbors are from different classes, we are no closer to a decision than with the single nearest neighbor rule. We therefore revert to the 1-NN rule when there is no majority within the k nearest neighbors.
The nearest neighbor rule is quite simple but very computationally intensive. For the digit example, each classification requires 60,000 distance calculations between 784-dimensional vectors (28×28 pixels). The nearest neighbor code was therefore written in C in order to speed up the MATLAB testing; it reads in the image database after conversion from the format available on the MNIST web page.
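A minimal k-NN classifier with the 1-NN fallback described above might look like the following. This is an illustrative Python/NumPy sketch, not the C code mentioned in the text; Euclidean distance and the example labels are assumptions.

```python
import numpy as np

def knn_classify(train_x, train_y, sample, k=3):
    """Classify by majority vote among the k nearest training samples (Euclidean)."""
    dists = np.linalg.norm(train_x - sample, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_y[i] for i in nearest]
    # All k neighbors from different classes: no majority, revert to the 1-NN rule.
    if len(set(votes)) == len(votes):
        return train_y[nearest[0]]
    return max(set(votes), key=votes.count)
```

Choosing k small and odd (1, 3 or 5), as the text recommends, keeps binary ties rare; the fallback handles the multi-class case.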
Template Matching
Template matching is a technique in digital image processing for finding small parts of an image which match a template image. It can be used in manufacturing as a part of quality control, a way to navigate a mobile robot, or as a way to detect edges in images [31].
A basic method of template matching uses a convolution mask (template), tailored to a specific feature of the search image, which we want to detect. This technique can be easily performed on grey images or edge images. The convolution output will be highest at places where the image structure matches the mask structure, where large image values get multiplied by large mask values.
This method is normally implemented by first picking out a part of the search image to use as a template. We will call the search image S(x, y), where (x, y) represent the coordinates of each pixel in the search image, and the template T(x_t, y_t), where (x_t, y_t) represent the coordinates of each pixel in the template. We then simply move the center (or the origin) of the template T(x_t, y_t) over each (x, y) point in the search image and calculate the sum of products between the coefficients in S(x, y) and T(x_t, y_t) over the whole area spanned by the template. As all possible positions of the template with respect to the search image are considered, the position with the highest score is the best position. This method is sometimes referred to as Linear Spatial Filtering, and the template is called a filter mask.
For example, one way to handle translation problems on images, using template matching is to compare the intensities of the pixels, using the SAD (Sum of absolute differences) measure [32, 33].
A pixel in the search image with coordinates (x_s, y_s) has intensity I_s(x_s, y_s), and a pixel in the template with coordinates (x_t, y_t) has intensity I_t(x_t, y_t). The absolute difference in the pixel intensities is thus defined as Diff(x_s, y_s, x_t, y_t) = | I_s(x_s, y_s) − I_t(x_t, y_t) |.
Looping over the pixels in the search image while translating the origin of the template to every pixel and taking the SAD measure gives:

SAD(x, y) = Σ_{i=0}^{Trows−1} Σ_{j=0}^{Tcols−1} Diff(x + i, y + j, i, j)

Srows and Scols denote the rows and columns of the search image, and Trows and Tcols denote the rows and columns of the template image, respectively. In this method the lowest SAD score gives the estimate of the best position of the template within the search image. The method is simple to implement and understand, but it is one of the slowest methods.
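The SAD search can be sketched directly from the equation above. The Python/NumPy below is illustrative: a brute-force scan over all template positions, i.e. exactly the slow but simple method the text describes.

```python
import numpy as np

def sad_match(search, template):
    """Slide the template over the search image; return the (row, col) with lowest SAD."""
    srows, scols = search.shape
    trows, tcols = template.shape
    best_score, best_pos = None, None
    for r in range(srows - trows + 1):
        for c in range(scols - tcols + 1):
            # Sum of absolute differences over the area spanned by the template.
            sad = np.abs(search[r:r + trows, c:c + tcols] - template).sum()
            if best_score is None or sad < best_score:
                best_score, best_pos = sad, (r, c)
    return best_pos, best_score
```

A perfect match yields a SAD of 0 at the true position; in practice the minimum over all positions is taken as the detection.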
The templates used are shown below.
Figure 16: Templates with 10, 15 and 18 mm circular nodules and one juxtapleural nodule.
The linear binary pattern is computed for both the input image and the template, and a feature vector is created for each. The template is additionally rotated at 30, 45, 60 and 90 degrees, and the feature vector is updated to include the features at all rotation angles. Thus the template is matched against the original image at all orientations, not just a single orientation, which provides rotation invariant template matching. Since the linear binary pattern reduces the image to a very small number of patterns, only uniform patterns are used during template matching. Red markings indicate the matching patterns.
Evaluation
Evaluation stages are divided in two parts segmentation evaluation and template matching evaluation. Accuracy is how close a measured value is to the actual (true) value. Precision is how close the measured values are to each other [34].
Results

Fig 24: Final output showing the marking of corresponding nodules: a) original image b) image marked with parenchymal nodules of size 10 mm c) parenchymal nodule with radius 15 mm d) parenchymal nodule with radius 18 mm e) juxtapleural nodule with radius 18 mm

Table 3: Sensitivity and specificity of nodule detection of radius 10 mm (accuracy for the cancerous nodule of size 10 mm template present in the segmented image)

| Image number | Specificity (%) | Sensitivity (%) | Accuracy (%) |
| IMG-0003-00059.jpg | 93.29 | 99.69 | 96.2701 |
| IMG-0003-00060.jpg | 93.63 | 99.70 | 96.4655 |
| IMG-0003-00061.jpg | 93.86 | 99.74 | 96.6187 |
| IMG-0003-00062.jpg | 93.12 | 99.54 | 96.1112 |
| IMG-0003-00063.jpg | 93.02 | 99.62 | 96.0870 |

Table 4: Rate of false positives providing accuracy of nodule detection of radius 15 mm

| Image number | Specificity (%) | Sensitivity (%) | Accuracy (%) |
| IMG-0003-00059.jpg | 93.94 | 99.33 | 96.4793 |
| IMG-0003-00060.jpg | 93.90 | 99.40 | 96.4905 |
| IMG-0003-00061.jpg | 93.78 | 99.04 | 96.5714 |
| IMG-0003-00062.jpg | 93.83 | 99.19 | 96.3580 |
| IMG-0003-00063.jpg | 93.84 | 99.16 | 96.3520 |

Table 5: Rate of false positives providing accuracy of nodule detection of radius 18 mm

| Image number | Specificity (%) | Sensitivity (%) | Accuracy (%) |
| IMG-0003-00059.jpg | 93.31 | 99.68 | 96.2735 |
| IMG-0003-00060.jpg | 93.60 | 99.70 | 96.4549 |
| IMG-0003-00061.jpg | 93.87 | 99.74 | 96.6183 |
| IMG-0003-00062.jpg | 93.12 | 99.63 | 96.1469 |
| IMG-0003-00063.jpg | 93.04 | 99.63 | 96.0982 |

Table 6: Rate of false positives providing accuracy of juxtapleural nodule detection of radius 18 mm

| Image number | Specificity (%) | Sensitivity (%) | Accuracy (%) |
| IMG-0003-00059.jpg | 93.33 | 99.67 | 96.2817 |
| IMG-0003-00060.jpg | 93.59 | 99.71 | 96.4514 |
| IMG-0001-00061.jpg | 93.90 | 99.74 | 96.6392 |
| IMG-0001-00062.jpg | 93.14 | 99.62 | 96.1548 |
| IMG-0010-00163.jpg | 93.06 | 99.61 | 96.1069 |
Precision and accuracy curves for the four nodule templates:
Figure: Tradeoff between sensitivity and specificity, which results in a final accuracy of 96 percent even though the sensitivity of the system is close to 99 percent, suggesting the system is highly capable of detecting true positives.
Conclusion and Future Work
An automated technique for the quantitative assessment of lung nodule detection using LBP operators and template matching has been developed. Rotation and scale invariant operators have been used to extract the variance feature of the image, which is a very efficient discriminator when used in combination with linear binary patterns. These features capture the variation in gray scale and rotation in the images. Both the nodule template and the image to be diagnosed for abnormality are converted into local binary pattern images, and then a simple pattern matching algorithm is applied to extract nodules. The properties gained by converting the images to local binary patterns enable the process to identify nodules irrespective of the orientation in which they appear. Due to the small size of the nodule and the selection of features at four rotation angles, the nodules were classified with 100 percent accuracy, which makes this approach a good candidate for providing a rotation invariant template for identifying nodules in images to be diagnosed for abnormality.
Future work will include applying the model to three-dimensional detection of nodules. The method for detection of solitary pulmonary nodules using LBP and template matching has shown very positive results, with efficiency over 96 percent. The accuracy of the system can be further increased by increasing the size and quality of the templates, or by using finer rotation angles (from 20° down to 10° or 5°) in the texture class training set of the nodule. Environmental conditions, such as the reflection of light, influence the quality of the images and hence the efficiency of the process.
References
[1] http://www.reuters.com/article/2011/06/17/us-factbox-cancer-idUSTRE75G0PL20110617
[2] L. Ries et al., SEER Cancer Statistics Review 1973-1996. National Cancer Institute, Bethesda, MD, 1999.
[3] D. P. Naidich, H. Rusinek, G. McGuinness, B. Leitman, D. I. McCauley, C. I. Henschke, "Variables affecting pulmonary nodule detection with computed tomography: evaluation with three-dimensional computer simulation," J. Thorac. Imaging, no. 8.
[4] J. W. Gurney, "Missed lung cancer at CT: imaging findings in nine patients," Radiology, no. 199, 1996.
[5] J. A. Buckley, W. W. Scott, S. S. Siegelman, J. E. Kuhlman, B. A. Urban, D. A. Bluemke, E. K. Fishman, "Pulmonary nodules: effect of increased data sampling on detection with spiral CT and confidence in diagnosis," Radiology, no. 196, 1995.
[6] S. E. Seltzer, P. F. Judy, D. F. Adams, F. L. Jacobson, P. Stark, R. Kikinis, R. G. Swensson, S. Hooton, B. Head, U. Feldman, "Spiral CT of the chest: comparison of cine and film-based viewing," Radiology, no. 197, 1995.
[7] B. van Ginneken, T. H. Romeny, M. A. Viergever, "Computer-aided diagnosis in chest radiography: a survey," IEEE Transactions on Medical Imaging, vol. 20, no. 12, 2001.
[8] Gray's Anatomy of the Human Body.
[9] Shodayu Takashima et al., "Indeterminate solitary pulmonary nodules revealed at population-based CT screening of the lung: using first follow-up diagnostic CT to differentiate benign and malignant lesions," AJR 2003; 180:1255-1263.
[10] http://www.radiologyassistant.nl/en/460f9fcd50637
[11] Claudia I. Henschke et al., "CT screening for lung cancer: frequency and significance of part-solid and nonsolid nodules," AJR 2002; 178:1053-1057.
[12] Shodayu Takashima et al., "Small solitary pulmonary nodules (1 cm) detected at population-based CT screening for lung cancer: reliable high-resolution CT features of benign lesions," AJR 2003; 180:955-964.
[13] Shodayu Takashima et al., "Indeterminate solitary pulmonary nodules revealed at population-based CT screening of the lung: using first follow-up diagnostic CT to differentiate benign and malignant lesions," AJR 2003; 180:1255-1263.
[14] Stephen J. Swensen et al., "CT screening for lung cancer: five-year prospective experience," Radiology 2005; 235:259-265.
[15] DICOM Image Basics.
[16] Manish Kakar, Dag Rune Olsen, "Automatic segmentation and recognition of lungs and lesion from CT scans of thorax," Computerized Medical Imaging and Graphics, vol. 33, no. 1, January 2009, pp. 72-82.
[17] M. S. Brown, L. S. Wilson, B. D. Doust, R. D. Gill, C. Sun, "Knowledge-based method for segmentation and analysis of lung boundaries in chest X-ray images," Computerized Medical Imaging and Graphics, vol. 22, no. 6, November 1998, pp. 463-477.
[18] Jun-Wei Liu, Huan-Qing Feng, Ying-Yue Zhou, Chuan-Fu Li, "A novel automatic extraction method of lung texture tree from HRCT images," Acta Automatica Sinica, vol. 35, no. 4, April 2009, pp. 345-349.
[19] Youngjoo Lee, Joon Beom Seo, June Goo Lee, Song Soo Kim, Namkug Kim, Suk Ho Kang, "Performance testing of several classifiers for differentiating obstructive lung diseases based on texture analysis at high-resolution computerized tomography (HRCT)," Computer Methods and Programs in Biomedicine, vol. 93, no. 2, February 2009, pp. 206-215.
[20] Jianhua Yao, Andrew Dwyer, Ronald M. Summers, Daniel J. Mollura, "Computer-aided diagnosis of pulmonary infections using texture analysis and support vector machine classification," Academic Radiology, vol. 18, no. 3, March 2011, pp. 306-314.
-
Jingbin Wang, Margrit Betke, Jane P. Ko, Pulmonary fissure segmentation on CT Original Research Article Medical
Image Analysis, Volume 10, Issue 4,
August 2006, Pages 530-547
-
M. F. McNitt-Gray, N. Wyckoff, J. W. Sayre, J. G. Goldin, D. R. Aberle , The effects of co-occurrence matrix based texture parameters on the classification of solitary pulmonary nodules imaged on computed tomography, Original Research Article Computerized Medical Imaging and Graphics, Volume 23, Issue 6, December 1999, Pages 339-348
-
P.R. Hill, D.R. Bull, C.N. Canagarajah, Rotationally invariant texture features using the dual-tree complex wavelet transform, Proc. Int'l Conf. Image Process., vol. 3,IEEE, Vancouver, BC, Canada, 2000, pp. 901904.
-
Edward H.S. Lo, Mark R. Pickering, Michael R. Frater, John F. Arnold, Image segmentation from scale and rotation invariant texture features from the double dyadic dual-tree complex wavelet transform, © 2010 Elsevier, accepted 5 august 2010
-
Timo Ojala, Matti PietikaÈ inen, Senior Member, IEEE, and Topi MaÈenpaÈa suggested in 2002, Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns july 2002
-
Zhenhua Guo,LeiZhang,DavidZhang, Rotation invariant texture classification using LBP variance (LBPV) with global matching, Biometrics Research Centre, Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China,
Pattern Recognition 43 (2010) 706719
-
Timo Ojala, Matti Pietika¬inen,"Unsupervised texture segmentation using feature distributions", Machine Vision and Media Processing Group, Infotech Oulu, University of Oulu, FIN-90570 Oulu, Finland
,Received 19 December 1997; in revised
form 24 February1998
-
http://imaging.cancer.gov/programsandre sources/informationsystems/lidc
-
Timo Ojala, Matti PietikaÈ inen Senior Member, IEEE, and Topi MaÈenpaÈa,Multiresolution
Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns, Pattern Recognition 49 (2010)
-
Charles Elkan, Nearest Neighbor Classification, elkan@cs.ucsd.edu
-
Hae Yong Kim,"Rotation-Discriminating Template Matching Based on Fourier Coefficients of Radial Projections with Robustness to Scaling and Partial Occlusion", Escola Politécnica, Universidade de São Paulo Av. Prof. Luciano Gualberto, tr. 3, 135, São Paulo, SP, 05508-010, Brazil.
-
http://en.wikipedia.org/wiki/Sum_of_abs olute_differences
-
http://en.wikipedia.org/wiki/Cross- correlation
-
en.wikipedia.org/wiki/Accuracy_and_pre cision
-
http://en.wikipedia.org/wiki/Sensitivity_a nd_specificity
-
Messay T, Hardie RC, Rogers SK, Computationally efficient CAD system for pulmonary nodule detection in CT imagery, Med Image Anal. 2010 Jun;14(3):390-406. Epub 2010 Feb 19 (downloaded from:
-
http://www.ncbi.nlm.nih.gov/guide/).