Optical Coherence Tomography to Detect Macular Edema: A Comprehensive Approach

DOI: 10.17577/IJERTCONV5IS21003


P. Danya

Second Year M.Tech (BMSP&I), Dept. of Instrumentation Technology, SJCE,

Mysuru, India.

Sheela N. Rao

Assistant Professor,

Dept. of Instrumentation Technology, SJCE, Mysuru, India.

Abstract—The macula of the retina is responsible for detailed central vision. Macular edema is the inflammation or swelling of the macula, and the severity of the disease ranges from blurred vision to blindness. Among the various eye imaging modalities, Optical Coherence Tomography (OCT) is capable of detecting macular edema in both its early and later stages. An algorithm to detect macular edema from OCT scan images is presented. The images are de-noised by filtering, followed by segmentation of the retinal layers that enclose the macular region. A few textural and wavelet features of the images, together with the thickness and area of the macular region in the OCT scans, are evaluated as parameters to train a classifier and distinguish between normal and diseased images. A Support Vector Machine, a supervised machine learning model, is employed for the binary classification. As the results are satisfactory, the proposed method is competent for efficient detection of macular edema.

Keywords—Macular Edema (ME); Optical Coherence Tomography (OCT); Haralick's features; wavelet features; Support Vector Machine (SVM).

  1. INTRODUCTION

    The Human Eye is the organ of vision which helps us perceive the sense of sight. The layer of the human eye which is responsible for vision is the retina. The retina can be affected by various abnormalities and the severity may vary from blurred vision to blindness. Macula is the functional center of the retina and is responsible for detailed central vision. Swelling or thickening of the macula results in Macular Edema (ME). Macular Edema when associated with diabetes is called Diabetic Macular Edema (DME). In this case, the blood vessels in the retina begin to leak fluids, including small amounts of blood into the retina. Cystoid Macular Edema (CME) develops after cataract eye surgery. Macular Edema can be detected in its early stages using Optical Coherence Tomography (OCT), a non-invasive eye imaging technique.

    Optical Coherence Tomography works similarly to ultrasound imaging: where ultrasound imaging uses sound waves, the OCT scan uses light beams in a similar fashion [1]. In OCT imaging, a low-coherence light beam penetrates the retina. This light beam is reflected back, recombined and detected by the detector [2]. Optical Coherence Tomography is capable of providing consistent macular thickness values over repeated visits, and this consistency in metrics plays a prime role in monitoring a patient's progress over time [3]. In [4], the total volume occupied by cystoid macular edema was evaluated, as it served as a good metric for evaluating visual acuity, and the overall sensitivity was found to be 91%. Retinal layer segmentation has been performed using a Canny edge detection filter followed by a LoG filter, and also by a graph-theory method [5]; of these, the second method was found to give better results. An automated method to segment and quantify the cystoid volume in abnormal retinas with a macular hole achieved a high accuracy of 99.7% [6]. High accuracy and reproducibility were demonstrated when dual-scale gradient information was used for layer segmentation of macular OCT images [7]. A fully automated assessment of Macular Edema from OCT images using a Discriminant Analysis classifier has also been reported [8]. In a review of OCT and fundus photography, it was observed that early macular edema is difficult to detect in fundus images, whereas the same can be observed prominently in OCT images [9].

    This paper focuses on the detection of Macular Edema from OCT images using an algorithm developed in MATLAB. The OCT images have been segmented based on a graph-search method and the retinal thickness has been evaluated. The area enclosed between the ILM and RPE layers has been estimated for all the images using the area-under-the-curve technique. Further, a few Haralick's textural features and a few wavelet features of the images have been extracted. The classification between normal and diseased images is drawn using a Support Vector Machine (SVM) classifier.

    The method involved in the proposed algorithm has been discussed in the following sequence: pre-processing, segmentation, thickness evaluation, area estimation, feature extraction and classification. The experimental results and the conclusions are discussed in the final sections.

  2. OPTICAL COHERENCE TOMOGRAPHY

    Optical Coherence Tomography (OCT) is a non-invasive eye-imaging modality that works similarly to ultrasound imaging. The only difference between OCT and an ultrasound scan is that ultrasound uses sound waves while OCT uses light beams. Imaging is performed in OCT by measuring the echo time delay and the intensity of back-scattered light from the target object. A light beam directed towards the eye of the subject is back-reflected or back-scattered from structures that possess different optical properties, as well as from the boundaries between structures. By measuring the echo time the light beam takes to be back-scattered from different structures, the dimensions of those structures can be estimated [1].

    In Optical Coherence Tomography, the beam from the light source is directed towards a beam-splitter, where it splits into two beams that travel in perpendicular directions (Fig.1). One of these beams hits the reference mirror and the other goes to the eye. Both beams are reflected back (back-scattered) and recombined at the detector, where the echo time of the light is measured.

    Fig.1. Working Principle of OCT

    The OCT scan of a normal eye is shown in Fig.2 and the OCT scan of an eye affected by macular edema is shown in Fig.3.

    Fig.2. Normal OCT Image

    Fig.3. OCT Image with Macular Edema

  3. METHODOLOGY

    The Macular Edema detection algorithm takes the sequence of steps shown in the block diagram (Fig.4).

    Fig.4. Block Diagram of ME Detection

    A. ROI Extraction:

    Initially, the images in the dataset are read by the algorithm. The input images are cropped to extract the OCT region, i.e., our Region of Interest (ROI), eliminating undesired regions. The OCT region is then subjected to further processing. The original image is shown in Fig.5 and the ROI of the image is shown in Fig.6.

    Fig.5. Original Image

    Fig.6. Region of Interest

    B. Pre-processing:

      The original OCT image consists of all three colour components, namely red, green and blue (RGB). For ease of medical image processing, the RGB image is converted into a grayscale image, shown in Fig.7. This gray image is then filtered using a median filter with a 5×5 window in order to remove noise and smooth the image. The average Peak Signal-to-Noise Ratio between the grayscale and filtered images is 24.5597 dB. The filtered image is shown in Fig.8.
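      The pipeline itself was implemented in MATLAB; the following Python sketch (NumPy/SciPy assumed, with a hypothetical crop box standing in for the ROI extraction) illustrates the pre-processing steps described above: ROI cropping, grayscale conversion, 5×5 median filtering and the PSNR computation.

      import numpy as np
      from scipy.ndimage import median_filter

      def preprocess(rgb, crop_box=None):
          """Crop the ROI (hypothetical box), convert an RGB OCT scan to
          grayscale, and denoise it with a 5x5 median filter."""
          if crop_box is not None:                      # (row0, row1, col0, col1)
              r0, r1, c0, c1 = crop_box
              rgb = rgb[r0:r1, c0:c1]
          gray = rgb @ np.array([0.299, 0.587, 0.114])  # luminance-weighted grayscale
          filtered = median_filter(gray, size=5)        # 5x5 median window
          return gray, filtered

      def psnr(reference, test, peak=255.0):
          """Peak Signal-to-Noise Ratio in dB between two images."""
          mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
          return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          demo = rng.integers(0, 256, size=(128, 128, 3)).astype(float)  # stand-in for an OCT scan
          gray, filtered = preprocess(demo, crop_box=(10, 120, 10, 120))
          print(f"PSNR(gray, filtered) = {psnr(gray, filtered):.2f} dB")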

    C. Segmentation:

      Fig.7. Grayscale Image

      Fig.8. Median Filtered Image

      The retina is a ten-layered structure. Out of the ten layers, two are segmented and plotted by the algorithm: the Inner Limiting Membrane (ILM) and the Retinal Pigment Epithelium (RPE).

      Given a graph with weights associated to its edges, the graph can be cut by determining the minimum-weighted path that connects two endpoints. Each OCT image is represented in terms of nodes, where each node corresponds to a pixel; edges are the links connecting the nodes, and a set of connected edges forms a path that can be used to traverse the graph [10]. Edge weights are assigned so that each pixel is distinguished from its neighbouring pixels, using equation (1) [10]:

        w_{ab} = 2 - (g_a + g_b) + w_{min}    (1)

      where w_{ab} is the weight assigned to the edge connecting nodes a and b, g_a and g_b are the vertical gradients of the image at nodes a and b, and w_{min} is the minimum weight in the graph, a small positive number added for system stabilization. In this implementation, g_a and g_b are normalized to values between 0 and 1, and w_{min} = 1 × 10^{-5}. Edge weights are thus a function of pixel intensity, where darker pixels result in a lower weight [10].

      After the edge weights are assigned, the lowest-weighted path between two endpoints can be determined using efficient techniques such as Dijkstra's algorithm [11]. To determine the minimum-weighted path with Dijkstra's algorithm, the weight values must be positive; an edge weight of zero indicates an unconnected node pair.

      An initialization step [10] is used to assign the start and end nodes, because a graph may contain several layered structures, and segmenting a specific layer requires selecting or estimating that layer's start and end nodes. The initialization is based on the assumption that the layer to be segmented extends across the entire width of the image. Since Dijkstra's algorithm prefers minimum-weighted paths, an additional column of nodes is added to each side of the image with arbitrary intensity values, and the minimal weight w_{min} is assigned to the edges in the vertical direction of these columns. Note that w_{min} is significantly smaller than any of the non-zero weights in the adjacency matrix of the original graph. In doing so, the nodes in the newly added columns maintain their connectivity, and the cut can traverse these columns vertically with minimal resistance. This allows the start and end nodes to be assigned arbitrarily in the newly added columns, since the cut moves freely along these columns before crossing the image along the minimum-weighted path. Once the image is segmented, the two additional columns are removed, leaving an accurate cut without endpoint-initialization error.

      Here, Dijkstra's algorithm [11] is used to find the minimum-weighted path, and the required layers are then plotted. The result of segmentation and the plots of the ILM and RPE layers are shown in Fig.9.

      Fig.9. Segmentation of ILM and RPE
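      The original segmentation was implemented in MATLAB following [10]; the sketch below is an illustrative Python approximation (SciPy assumed), not the authors' code. It builds the weights of equation (1) from normalized vertical gradients, pads the image with two low-weight helper columns so the endpoints can be chosen arbitrarily, and runs Dijkstra's algorithm to trace one layer boundary. Connectivity and the handling of the helper columns are simplified relative to [10].

      import numpy as np
      from scipy.sparse import lil_matrix
      from scipy.sparse.csgraph import dijkstra

      def segment_layer(image, w_min=1e-5):
          """Illustrative graph-cut layer segmentation in the spirit of [10]."""
          grad = np.gradient(image.astype(float), axis=0)              # vertical gradient
          g = (grad - grad.min()) / (grad.max() - grad.min() + 1e-12)  # normalize to [0, 1]
          g = np.pad(g, ((0, 0), (1, 1)), constant_values=1.0)         # two helper columns
          rows, cols = g.shape
          idx = lambda r, c: r * cols + c
          W = lil_matrix((rows * cols, rows * cols))
          for r in range(rows):
              for c in range(cols):
                  for dr, dc in ((0, 1), (1, 0), (-1, 0), (1, 1), (-1, 1)):
                      r2, c2 = r + dr, c + dc
                      if 0 <= r2 < rows and 0 <= c2 < cols:
                          if c in (0, cols - 1) or c2 in (0, cols - 1):
                              # simplification of [10]: every edge touching a helper
                              # column gets the minimal weight
                              w = w_min
                          else:
                              w = 2.0 - (g[r, c] + g[r2, c2]) + w_min  # equation (1)
                          W[idx(r, c), idx(r2, c2)] = w
          # shortest path from the top-left helper node to the top-right helper node
          dist, pred = dijkstra(W.tocsr(), indices=idx(0, 0), return_predecessors=True)
          path, node = [], idx(0, cols - 1)
          while node != -9999:                                         # -9999 = no predecessor
              path.append(node)
              node = pred[node]
          boundary = {}
          for n in reversed(path):
              r, c = divmod(n, cols)
              if 0 < c < cols - 1:                                     # drop helper columns
                  boundary[c - 1] = r                                  # one row per image column
          return boundary

      if __name__ == "__main__":
          img = np.zeros((40, 60))
          img[20:, :] = 1.0          # dark above, bright below: strongest gradient near row 20
          print(sorted(segment_layer(img).items())[:5])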

    D. Feature Extraction:

      i. Thickness Evaluation:

        When the average thickness of the entire image was evaluated, the resulting thickness did not show much variation between normal and diseased images. This shortcoming is overcome by splitting the segmented image into five equal regions such that the central region contains the fovea of the macula (Fig.10), since macular edema, i.e., macular inflammation, is most prominent in the foveal region. The average thickness between the ILM and the RPE is evaluated for each region. The thickness measure of the foveal region is comparatively higher for images with inflammation (edema) and lower for normal images.

        Fig.10. Region-splitting for Thickness Evaluation
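        A minimal sketch of the region-wise thickness measure, assuming per-column ILM and RPE row positions such as those returned by the segmentation step (the names and the toy boundaries below are illustrative only):

        import numpy as np

        def regionwise_thickness(ilm_rows, rpe_rows, n_regions=5):
            """Average ILM-to-RPE thickness (pixels) in five equal-width column
            bands; the middle band is assumed to contain the fovea."""
            thickness = np.asarray(rpe_rows, float) - np.asarray(ilm_rows, float)
            bands = np.array_split(thickness, n_regions)   # five equal regions
            return [band.mean() for band in bands]

        if __name__ == "__main__":
            cols = np.arange(500)
            ilm = 100 - 30 * np.exp(-((cols - 250) / 60.0) ** 2)  # toy foveal thickening (edema-like)
            rpe = np.full(500, 300.0)
            print([round(t, 1) for t in regionwise_thickness(ilm, rpe)])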

      ii. Area Estimation:

        The Inner Limiting Membrane (ILM) and the Retinal Pigment Epithelium (RPE) layers of the retina enclose the area covered by the macula in the OCT scan. Due to the inflammation of the macula in Macular Edema, the area enclosed by these two layers is comparatively larger than in a normal eye. The results of the segmentation stage, i.e., the plots of the ILM and RPE (Fig.9), are treated as the curves representing the layers. Using the Area Under the Curve (AUC) technique, the area between the ILM and RPE layers is estimated, and this metric is used as one of the parameters to train the classifier.

        The area under a curve between two points can be determined using a definite integral between those points. The area under the curve y = f(x) between the limits x = a and x = b can be calculated using equation (2):

          A = \int_{a}^{b} f(x) \, dx    (2)

        While estimating the area between the ILM and RPE, the area under the ILM curve is denoted A_1 (equation (3)) and the area under the RPE curve is denoted A_2 (equation (4)); the difference between these two areas is taken as the area enclosed by the two desired layers (equation (5)):

          A_1 = \int_{a}^{b} f_{ILM}(x) \, dx    (3)

          A_2 = \int_{a}^{b} f_{RPE}(x) \, dx    (4)

          A = A_1 - A_2    (5)
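        Numerically, the definite integrals in equations (2)-(5) reduce to summing the per-column separation of the two boundaries; a small sketch using the trapezoidal rule (the boundary arrays are illustrative):

        import numpy as np

        def area_between_layers(ilm_rows, rpe_rows, dx=1.0):
            """Area (pixel^2) enclosed by the ILM and RPE boundaries,
            A = A1 - A2 as in equations (3)-(5), via the trapezoidal rule."""
            depth = np.asarray(rpe_rows, float) - np.asarray(ilm_rows, float)  # f(x) per column
            return dx * np.sum((depth[:-1] + depth[1:]) / 2.0)                 # trapezoidal sum

        if __name__ == "__main__":
            cols = np.arange(500)
            ilm = 100 - 30 * np.exp(-((cols - 250) / 60.0) ** 2)  # thicker at the fovea (edema-like)
            rpe = np.full(500, 300.0)
            print(f"enclosed area ~ {area_between_layers(ilm, rpe):.0f} px^2")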

      iii. Haralick's Textural Feature Extraction:

        Texture is an important characteristic used in identifying objects or regions in an image [12]. The texture information is adequately specified by a set of gray-tone spatial-dependency matrices (the Gray Level Co-occurrence Matrix, GLCM [13]), computed for various angular relationships between neighbouring cell pairs in the image. From this matrix, a number of textural features of a given image can be computed. Four of these features, namely energy, contrast, homogeneity and entropy, are extracted from the OCT images using equations (6) through (9):

          Energy = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} p_{i,j}^2    (6)

          Contrast = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} (i - j)^2 \, p_{i,j}    (7)

          Homogeneity = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \frac{p_{i,j}}{1 + |i - j|}    (8)

          Entropy = -\sum_{i=0}^{M-1} \sum_{j=0}^{N-1} p_{i,j} \log_2 p_{i,j}    (9)

        where p_{i,j} are the probability elements of the GLCM of size M × N. In addition to the region-wise thickness and the area between the layers, these textural features play a major role in training the machine learning model to classify between normal and macular-edema-affected OCT scans.
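        A compact Python sketch of the GLCM-based features, assuming scikit-image for the co-occurrence matrix (the function is spelled greycomatrix in releases before 0.19) and computing equations (6)-(9) directly from the averaged, normalized matrix:

        import numpy as np
        from skimage.feature import graycomatrix   # 'greycomatrix' in scikit-image < 0.19

        def haralick_features(gray_u8):
            """Energy, contrast, homogeneity and entropy (equations (6)-(9)),
            from a normalized GLCM averaged over four directions."""
            glcm = graycomatrix(gray_u8, distances=[1],
                                angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                                levels=256, symmetric=True, normed=True)
            p = glcm.mean(axis=(2, 3))                            # averaged probability matrix
            i, j = np.indices(p.shape)
            energy      = np.sum(p ** 2)                          # eq. (6)
            contrast    = np.sum((i - j) ** 2 * p)                # eq. (7)
            homogeneity = np.sum(p / (1.0 + np.abs(i - j)))       # eq. (8)
            entropy     = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # eq. (9)
            return energy, contrast, homogeneity, entropy

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            demo = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in OCT ROI
            print(haralick_features(demo))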

      iv. Wavelet Feature Extraction:

        The Discrete Wavelet Transform (DWT) captures both spatial and frequency information [14]. In DWT, the image is analysed by decomposing it into coarse approximations and details using low-pass and high-pass filtering. Each image may be represented as a p × q gray-scale matrix I[i, j], where each element of the matrix is the grayscale intensity of one pixel. Each non-border element of the matrix is surrounded by eight neighbouring pixels, which are typically used to traverse the matrix. Four decomposition directions, namely horizontal (Dh), vertical (Dv) and the two diagonals (Dd), are considered; the resulting 2-dimensional DWT coefficients are the same irrespective of the direction of traversal (left-to-right or right-to-left).

        The first level of wavelet decomposition yields four coefficient matrices, namely A1, Dh1, Dv1 and Dd1 [14]. Since a single numerical value per feature is desired for the entire image, averaging methods are employed to obtain these values. Equations (10) and (11) [14] determine the averages of the corresponding detail coefficients, whereas equation (12) [14] is an average of the energy of the detail coefficients. Haar (haar) and Daubechies 3 (db3) wavelet filters are used for decomposition and the results are compared:

          Average_{Dh1} = \frac{1}{p \times q} \sum_{x=1}^{p} \sum_{y=1}^{q} |Dh1(x, y)|    (10)

          Average_{Dv1} = \frac{1}{p \times q} \sum_{x=1}^{p} \sum_{y=1}^{q} |Dv1(x, y)|    (11)

          Energy = \frac{1}{p^2 \times q^2} \sum_{x=1}^{p} \sum_{y=1}^{q} (Dv1(x, y))^2    (12)
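        A corresponding sketch of the level-1 wavelet features of equations (10)-(12), assuming PyWavelets; the subband naming (Dh1, Dv1, Dd1) follows [14]:

        import numpy as np
        import pywt

        def wavelet_features(gray, wavelet='haar'):
            """Level-1 2-D DWT features: mean |Dh1|, mean |Dv1| and the
            energy of Dv1 (equations (10)-(12))."""
            _, (dh1, dv1, dd1) = pywt.dwt2(gray.astype(float), wavelet)
            p, q = dv1.shape
            avg_dh1 = np.abs(dh1).mean()                    # eq. (10)
            avg_dv1 = np.abs(dv1).mean()                    # eq. (11)
            energy  = np.sum(dv1 ** 2) / (p ** 2 * q ** 2)  # eq. (12)
            return avg_dh1, avg_dv1, energy

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            demo = rng.normal(size=(128, 128))
            for w in ('haar', 'db3'):                       # the two filters compared in the paper
                print(w, wavelet_features(demo, w))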

    E. Classification:

      The binary classification of the images into normal and diseased is performed using a linear Support Vector Machine (SVM) classifier. SVMs are supervised learning models used for data classification [15]. The SVM model is trained on a dataset whose samples are labelled with the class they belong to; here, normal OCT images belong to class 0 and edema-affected OCT images belong to class 1. The SVM model is trained with 13 features: the five region-wise thickness measures, the area of fluid enclosed between the ILM and RPE layers, four textural features (energy, contrast, homogeneity and entropy), and three wavelet features from the level-1 decomposition (the two averages and the energy).

      The training dataset consists of 21 normal OCT scan images and 14 edema OCT scan images, so a total of 35 OCT images are used to train the SVM classification model. Once trained, the SVM model is tested on 24 OCT images containing 12 normal and 12 diseased images. The classification results were 91.67% accurate when the Haar wavelet filter was used and 95.83% accurate when the Daubechies 3 wavelet filter was used. The confusion matrices of the linear SVM classification results are shown in Table I and Table II.
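      The classification stage can be sketched with scikit-learn's linear SVM; the feature matrices below are random placeholders with the paper's dataset sizes, not the actual extracted features:

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(3)
      X_train = rng.normal(size=(35, 13))          # 21 normal + 14 edema training scans, 13 features
      y_train = np.array([0] * 21 + [1] * 14)      # class 0 = normal, class 1 = edema
      X_test = rng.normal(size=(24, 13))           # 12 normal + 12 edema test scans
      y_test = np.array([0] * 12 + [1] * 12)

      # per-feature scaling by the maximum magnitude, in the spirit of equation (13)
      scale = np.abs(X_train).max(axis=0)
      X_train, X_test = X_train / scale, X_test / scale

      clf = SVC(kernel='linear').fit(X_train, y_train)
      print(confusion_matrix(y_test, clf.predict(X_test)))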

  4. RESULTS AND ANALYSIS

    The average Peak Signal-to-Noise Ratio between the grayscale and median-filtered images in the pre-processing stage is 24.5597 dB. The graph-theory based segmentation algorithm yielded an accuracy of 84.78%: a total of 46 OCT scan images, comprising 26 normal images and 20 images affected by Macular Edema, were segmented for the ILM and RPE layers, of which 39 were accurately segmented while 7 failed to yield accurate layer segmentation.

    A total of 24 OCT scan images were tested for classification, of which 12 were normal and 12 were abnormal. The classification was done with two sets of features: one set included wavelet features extracted using the Haar wavelet and the other included wavelet features extracted using the db3 wavelet. The classification was conducted after normalization of the 13 features using equation (13):

      x_{norm} = \frac{x}{\max(x)}    (13)

    where x_{norm} is the normalized value, x is the old value before normalization, and max(x) is the maximum value in the feature array before normalization.

    The confusion matrices for the classification results are shown in Table I and Table II.

    Table I. Confusion Matrix for Classification Results using Haar wavelet

      Observed Response        Object Absent (true)       Object Present (true)
      Object Not Observed      True Negative (TN): 10     False Negative (FN): 0
      Object Observed          False Positive (FP): 2     True Positive (TP): 12

    Table II. Confusion Matrix for Classification Results using Daubechies 3 wavelet

      Observed Response        Object Absent (true)       Object Present (true)
      Object Not Observed      True Negative (TN): 11     False Negative (FN): 0
      Object Observed          False Positive (FP): 1     True Positive (TP): 12

    The accuracy, sensitivity, specificity and precision of the classification results shown in the confusion matrices are calculated using equations (14) through (17):

      Accuracy = (TP + TN) / (TP + TN + FP + FN)    (14)

      Sensitivity = TP / (TP + FN)    (15)

      Specificity = TN / (TN + FP)    (16)

      Precision = TP / (TP + FP)    (17)

    The number of test images, the classification results and the corresponding accuracy, sensitivity, specificity and precision for both feature sets are summarized in Table III.

    Table III. Classification Results

                                Haar        db3
      No. of Test Images        24          24
      No. of Normal Images      12          12
      No. of Abnormal Images    12          12
      True Positive (TP)        12          12
      True Negative (TN)        10          11
      False Positive (FP)       2           1
      False Negative (FN)       0           0
      Accuracy                  91.67%      95.83%
      Sensitivity               100%        100%
      Specificity               83.33%      91.67%
      Precision                 85.71%      92.31%
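    The figures in Table III follow directly from the confusion-matrix counts via equations (14)-(17); a small check, using the Haar-wavelet counts from Table I:

    def classification_metrics(tp, tn, fp, fn):
        """Accuracy, sensitivity, specificity and precision (equations (14)-(17))."""
        accuracy    = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        precision   = tp / (tp + fp)
        return accuracy, sensitivity, specificity, precision

    if __name__ == "__main__":
        # Haar-wavelet run from Table I: TP=12, TN=10, FP=2, FN=0
        print(["%.2f%%" % (100 * m) for m in classification_metrics(12, 10, 2, 0)])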

  5. CONCLUSION

A novel Macular Edema detection algorithm has been developed and presented. An image dataset consisting of both normal and edema-affected OCT scan images was used to test and validate the proposed technique; the OCT scans were acquired using Spectral OCT/SLO equipment (a combination imaging system). The images are first pre-processed by median filtering to remove noise. The retinal layers of interest, the Inner Limiting Membrane (ILM) and the Retinal Pigment Epithelium (RPE), are then extracted using a graph-theory based segmentation algorithm; an accuracy of 84.78% was obtained, with a satisfactory number of visually acceptable contours. The segmented OCT image is split column-wise into five equal regions such that the third, central region contains the fovea of the macula, since the inflammation in Macular Edema is typically prominent in the foveal region; the average thickness between the ILM and RPE layers is evaluated for each of the five regions in every image. The area of fluid accumulation, i.e., the area enclosed by the ILM and RPE, is then estimated using the Area Under the Curve technique applied to the segmented ILM and RPE co-ordinates. Further, a few textural features and a few wavelet features are extracted, giving a total of 13 features per OCT image. A linear Support Vector Machine (SVM) model is trained with these data and used to classify between normal and diseased image samples. An accuracy of 91.67%, sensitivity of 100%, specificity of 83.33% and precision of 85.71% were observed for the classification using the Haar wavelet, while an accuracy of 95.83%, sensitivity of 100%, specificity of 91.67% and precision of 92.31% were observed using the db3 wavelet.

ACKNOWLEDGEMENT

We wholeheartedly thank Dr. Pallavi Prabhu and Mr. Arun Kumar (Technician), Sushrutha Eye Hospital, Mysuru, for readily providing the OCT scan images required for this work.

REFERENCES

  1. James G. Fujimoto, Costas Pitris, Stephen A. Boppart, Mark E. Brezinski. Optical Coherence Tomography: An Emerging Technology for Biomedical Imaging and Optical Biopsy. Neoplasia vol. 2, pp. 9-25, January-April 2000.

  2. Joel S. Schuman. Introduction to Optical Coherence Tomography, 5 October, 2012.

  3. Eric H. Broecker, Mark T. Dunbar. Optical Coherence Tomography: its clinical use for the diagnosis, pathogenesis, and management of macular conditions. Optometry, Elsevier. vol.76, no.2, pp. 79-101. February, 2005.

  4. Gary R. Wilkins, Odette M. Houghton, Amy L. Oldenburg. Automated Segmentation of Intraretinal Cystoid Fluid in Optical Coherence Tomography. IEEE Trans Biomed Eng. vol. 59(4), pp. 1109-1114, April 2012.

  5. Appaji M. Abhishek, Tos T. J. M. Berendschot, Shyam Vasudeva Rao, Supriya Dabir. Segmentation and Analysis of Retinal Layers (ILM & RPE) in Optical Coherence Tomography Images with Edema. 2014 IEEE Conference on Biomedical Engineering and Sciences, pp. 204-209, 8-10 December 2014.

  6. Li Zhang, Weifang Zhu, Fei Shi, Haoyu Chen, Xinjian Chen. Automated Segmentation of Intraretinal Cystoid Macular Edema for Retinal 3D OCT Images with Macular Hole, IEEE, pp. 2374-8, 2015.

  7. Qi Yang, Charles A. Reisman, Zhenguo Wang, Yasufumi Fukuma, Masanori Hangai, Nagahisa Yoshimura, Atsuo Tomidokoro, Makoto Araie, Ali S. Raza, Donald C. Hood, Kinpui Chan. Automated Layer Segmentation of Macular OCT Images using Dual-Scale Gradient Information, Optical Society of America, Optics Express, 18(20), 27 September 2010.

  8. Bilal Hassan, Gulistan Raja. Fully Automated Assessment of Macular Edema using Optical Coherence Tomography (OCT) Images. IEEE Digital Library, 2016.

  9. Taimur Hassan, M. Usman Akram, Bilal Hassan, Ammara Nasim, Shafaat Ahmed Bazaz. Review of OCT and Fundus Images for Detection of Macular Edema. IEEE Digital Library, 2015.

  10. Stephanie J. Chiu, Xiao T. Li, Peter Nicholas, Cynthia A. Toth, Joseph A. Izatt, Sina Farsiu. Automatic Segmentation of Seven Retinal Layers in SDOCT Images Congruent with Expert Manual Segmentation, Optics Express, Vol. 18, No. 18, 27 August 2010.

  11. E. W. Dijkstra. A Note on Two Problems in Connexion with Graphs, Numerische Mathematik, 1(1), pp. 269-271, 1959.

  12. Robert M. Haralick, K. Shanmugam, Its'hak Dinstein. Textural Features for Image Classification, IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-3, No. 6, pp. 610-621, November 1973.

  13. Rafael C. Gonzalez, Richard E. Woods. Digital Image Processing, 3rd Edition, Chapters 9, 10 & 11, pp. 627-833, ISBN: 978-81-317-2695-2, Prentice Hall, 2009.

  14. Sumeet Dua, U. Rajendra Acharya, Pradeep Chowriappa, S. Vinitha Sree. Wavelet-Based Energy Features for Glaucomatous Image Classification, IEEE Transactions on Information Technology in Biomedicine, Vol. 16, No. 1, January 2012.

  15. Corinna Cortes, Vladimir Vapnik. Support-Vector Networks, Machine Learning, 20(3), pp. 273-297, 1995.
