Automatic Detection of Optic Disc and Optic Cup using Simple Linear Iterative Clustering

DOI : 10.17577/IJERTV3IS070947


Stephie Wini Wilson

M. Tech Student, Signal Processing, Marian Engineering College

Kazhakuttam, Thiruvananthapuram, Kerala, India

    Hema S. Mahesh

Asst. Professor, Electronics & Communication, Marian Engineering College

    Kazhakuttam, Thiruvananthapuram Kerala, India

Abstract: The retinal optic disc is the region from which the central retinal artery and the optic nerve of the retina emanate. Hence, it often serves as an important landmark and reference for other features in a retinal fundus image. The features obtained from a fundus image are often helpful in the diagnosis of various eye diseases. Locating and segmenting the optic disc are key pre-processing steps for extracting retinal features. Manual examination of the optic disc (OD) is a standard procedure used for detecting eye diseases. In this paper, we present an automatic optic disc detection technique based on simple linear iterative clustering. The proposed method can also be used for segmentation of the optic cup. Principal component analysis and mathematical morphology are performed to prepare the image for segmentation.

Keywords: fundus image; optic disc; principal component analysis; simple linear iterative clustering

    1. INTRODUCTION

      The principal congenital abnormalities of the optic disc that can significantly impair visual function are excavation of the optic disc and optic nerve hypoplasia. The excavated optic disc abnormalities comprise optic disc coloboma, morning glory syndrome, and peripapillary staphyloma. Optic nerve hypoplasia manifests as a small optic nerve, which may or may not be accompanied by a peripapillary ring (the double ring sign). In addition, the optic disc cupping, which occurs as a sequel to some cases of periventricular leucomalacia, can arguably be classified as a type of optic nerve hypoplasia. All of these conditions can be unilateral or bilateral and can impair visual function mildly or severely. Other common causes of visual impairment and blindness are retinopathy, hypertension, glaucoma, and macular degeneration.

Usually, more than 80% of global visual impairment is avoidable, and in the case of diabetes this figure rises to as much as 98%. All of these diseases can be detected through a direct and regular ophthalmologic examination of the population at risk. However, population growth, ageing, physical inactivity and rising levels of obesity all contribute to the increase of these diseases, so the number of ophthalmologists needed for evaluation by direct examination becomes a limiting factor. A system for the automatic recognition of the characteristic patterns of these pathological cases would therefore provide a great benefit.

Optic disc (OD) segmentation is a key process in many algorithms designed for the automatic extraction of anatomical ocular structures, the detection of retinal lesions, and the identification of other fundus features. In general, the techniques presented in the literature for OD processing in fundus images can be grouped into two categories: location and segmentation methods. Location methods are based on finding the OD centre, and segmentation algorithms on estimating its contour. Location methods usually exploit the fact that all retinal vessels originate from the OD and follow a parabolic path [1], [2], or that the OD is the brightest component of the fundus [3], [4]. Among segmentation methods, several approaches must be stressed: template-based algorithms [5, 6], deformable models [7, 8] and morphological techniques [9, 10]. Most algorithms based on mathematical morphology detect the OD by means of the watershed transformation, generally a marker-controlled watershed, although each author proposes the use of different markers. The method presented in this paper incorporates some of the aforementioned techniques besides new contributions. It is mainly based on the watershed transformation with markers, in the same way as in [9, 10], although with certain improvements:

      First, a principal component analysis (PCA) is applied on the RGB fundus image for obtaining a grey image in which the different structures of the retina, such as vessels and OD, are differentiated more clearly in order to get a more accurate detection of the OD. This stage is very important since it largely determines the final result. Then, the vessels are removed through morphological operations to make the segmentation task easier. Finally, simple linear iterative clustering is implemented. This algorithm is fully automatic.

The paper is organized as follows: Section II describes the stages of the proposed method, Section III presents the experimental results, and Sections IV and V provide the discussion, the conclusions and some lines of future work.

    2. ALGORITHM

In this paper, an automatic method to detect the optic disc and optic cup is presented. It is focused on simple linear iterative clustering applied to a fundus image to obtain the optic disc contour. A pre-processing of the original RGB image is required before segmentation. The first step of the pre-processing consists of applying PCA to transform the input image to grey scale. This technique combines the most significant information of the three RGB components into a single image so that it is a more appropriate input to the segmentation method.

      1. Principal Component Analysis

The input to the process is a grey image, which can be obtained in different ways, from a band of the original RGB image [11, 12] to a component of another colour space [13, 14]. In this work, the use of a new grey-scale image is proposed. Specifically, it is calculated by means of PCA [15]. The principal-component axes are the eigenvectors of the covariance matrix, that is, the matrix whose (i,j)-th element is the covariance between the i-th and j-th elements of the image f when i ≠ j, and the variance of the i-th element of f when i = j. For a three-channel image, transforming to a principal-component space creates three new channels, the first (most significant) of which contains the most structural contrast and information. The rank of each axis in the principal set represents the significance of that axis as defined by the variance of the data along it. Thus, the first principal axis is the one with the greatest amount of scatter in the data and consequently the greatest amount of contrast and information, while the last principal axis represents the least amount of information, such as noise and image artifacts [16]. This type of analysis maximizes the separation of the different objects that compose the image so that the structures of the retina are better appreciated. In addition, it is much less sensitive to the variability that exists among fundus images in colour, intensity, etc. The first, second and third principal components (Z1 PCA, Z2 PCA and Z3 PCA, respectively), along with the cropped original RGB image, are shown in Fig. 1.
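As an illustration only (this is a minimal sketch, not the authors' code), the three principal-component images described above can be computed from the 3 × 3 channel covariance matrix with NumPy; the variable name rgb and the float conversion are assumptions made for the example.

import numpy as np

def pca_grey_images(rgb):
    # rgb: H x W x 3 array of a cropped fundus image (assumed already loaded).
    h, w, _ = rgb.shape
    X = rgb.reshape(-1, 3).astype(np.float64)   # one row per pixel, one column per channel
    X -= X.mean(axis=0)                         # centre each channel
    cov = np.cov(X, rowvar=False)               # 3 x 3 covariance matrix of the channels
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]           # sort axes by decreasing variance
    Z = X @ eigvecs[:, order]                   # project pixels onto the principal axes
    return [Z[:, k].reshape(h, w) for k in range(3)]   # Z1 PCA, Z2 PCA, Z3 PCA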

        Fig.1: a) Original image, b) Z1 PCA, c) Z2 PCA, d) Z3 PCA

The first principal component, which contains the most information, is used for the detection of the optic disc, while the second principal component, which contains comparatively less information, is used for the detection of the optic cup. This is because the optic cup can be distinguished more easily in the second principal component. Both the first and second principal components are enhanced to correct any defects due to non-uniform illumination and to improve the contrast.

      2. Image Enhancement

The non-uniform illumination of the first and second principal-component images is corrected, and their contrast is increased, through a local transformation of the grey level t of each pixel. In this transformation, tmin and tmax are the minimum and maximum grey levels of the image, respectively; umin and umax are the target levels (typically 0 and 255, respectively); μ is the mean value of the image over all pixels within a window centred at the current pixel, with a size larger than the OD; and the parameter r controls the contrast increase (experimentally, r = 2). The transformation is applied to the image to obtain an enhanced image.
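A plausible form of this transformation, stated here as an assumption consistent with the parameters just described (it maps tmin to umin, tmax to umax and the local mean μ to the mid grey level), is

u(t) =
\begin{cases}
u_{\min} + \tfrac{1}{2}\,(u_{\max} - u_{\min})\left(\dfrac{t - t_{\min}}{\mu - t_{\min}}\right)^{r}, & t \le \mu,\\
u_{\max} - \tfrac{1}{2}\,(u_{\max} - u_{\min})\left(\dfrac{t_{\max} - t}{t_{\max} - \mu}\right)^{r}, & t > \mu.
\end{cases}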

      3. Blood Vessel Removal

Mathematical morphology is a nonlinear image-processing methodology based on minimum and maximum operations whose aim is to extract relevant structures from an image [30]. The two basic morphological operators are dilation and erosion. In dilation, the value of the output pixel is the maximum value of all the pixels in the input pixel's neighbourhood; in erosion, it is the minimum value of all the pixels in that neighbourhood. Dilation and erosion are often used in combination to implement image-processing operations. A morphological opening of an image is an erosion followed by a dilation using the same structuring element for both operations, whereas a morphological closing is the reverse: a dilation followed by an erosion with the same structuring element. Since the vessels appear as thin dark structures on a brighter background, a closing with a structuring element larger than the vessel width fills them in; a closing is therefore applied here to remove the blood vessels from the first principal-component image.
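A minimal sketch of this step (not the authors' code), assuming scikit-image and a disc-shaped structuring element whose radius would be tuned to the image resolution:

from skimage.morphology import closing, disk

def remove_vessels(enhanced_pc, radius=8):
    # Grey-level closing: dilation followed by erosion with the same disc.
    # A disc larger than the maximum vessel width removes the dark vessel
    # network while leaving the (bright) optic disc essentially unchanged.
    return closing(enhanced_pc, disk(radius))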

      4. Generalized Distance Function

Distance functions measure the distance between two points in a vector space and can be used for shape modelling. The locus of all points at an equal distance from a given point yields different shapes depending on the distance function used, and this is a fast process. Distance functions can be used as building blocks in constructive geometry, giving the user an approximate picture of how the overall output will look. The use of a generalized distance function extends the class of shapes that can be generated by a simple locus. The image obtained after applying the generalized distance function is used for the further segmentation of the optic disc and optic cup.
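The paper does not specify the metric used; purely as an illustration, the sketch below computes a Euclidean distance transform (one common choice) of a binary mask obtained from the vessel-free image.

from scipy.ndimage import distance_transform_edt

def distance_image(binary_mask):
    # For every foreground (non-zero) pixel, the Euclidean distance to the
    # nearest background pixel; background pixels receive distance 0.
    return distance_transform_edt(binary_mask)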

      5. Simple Linear Iterative Clustering

Simple Linear Iterative Clustering (SLIC) [1] is a superpixel extraction algorithm based on a local version of k-means. It is used to decompose an image into visually homogeneous regions. First, the image is divided into a grid, and the centre of each grid cell is used to initialize a k-means algorithm. The k-means centres and clusters are then refined using the Lloyd algorithm. After the k-means step, SLIC optionally removes any segment whose area is smaller than a threshold by merging it into a larger one.

SLIC is fast and memory-efficient. Its main advantage is that the number of distance calculations is greatly reduced by limiting the search space to a region proportional to the superpixel size; the complexity is thus reduced and is independent of the number of superpixels k. A weighted distance measure combines colour and spatial proximity while providing control over the size and compactness of the superpixels.
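For reference, in the formulation of [1] this weighted measure takes the form

D = \sqrt{\,d_c^{2} + \left(\dfrac{d_s}{S}\right)^{2} m^{2}\,},

where d_c is the colour distance, d_s the spatial distance, S the sampling grid interval and m the compactness parameter that trades spatial proximity against colour proximity.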

SLIC is applied separately for the detection of the optic disc and the optic cup.

Algorithm

/* Initialization */
Initialize cluster centres Ck = [lk, ak, bk, xk, yk]T by sampling pixels at regular grid steps S.
Move cluster centres to the lowest gradient position in a 3 × 3 neighbourhood.
Set label l(i) = -1 for each pixel i.
Set distance d(i) = ∞ for each pixel i.
repeat
    /* Assignment */
    for each cluster centre Ck do
        for each pixel i in a 2S × 2S region around Ck do
            Compute the distance D between Ck and i.
            if D < d(i) then
                set d(i) = D
                set l(i) = k
            end if
        end for
    end for
    /* Update */
    Compute new cluster centres.
    Compute residual error E.
until E ≤ threshold
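As a usage sketch only (assuming the SLIC implementation in scikit-image ≥ 0.19 rather than the authors' own code; n_segments and compactness are placeholder values, not the paper's settings):

from skimage.segmentation import slic
from skimage.color import label2rgb

def superpixels(enhanced_grey, n_segments=200, compactness=0.1):
    # channel_axis=None tells slic() that the input is a single-channel image.
    labels = slic(enhanced_grey, n_segments=n_segments,
                  compactness=compactness, channel_axis=None)
    # Replace each superpixel by its mean grey level for visual inspection.
    mean_image = label2rgb(labels, enhanced_grey, kind='avg')
    return labels, mean_image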

      6. Canny Edge Detection

Canny edge detection is a mathematical method for identifying points in a digital image at which the image brightness changes sharply or has discontinuities. The points at which the brightness changes rapidly are grouped into a set of curved line segments called edges. The Canny edge detector uses a multi-stage algorithm to detect a wide range of edges in an image. First, the noise present in the raw, uncompressed image is removed with a Gaussian-based filter, producing a noise-free, slightly blurred image. The Canny algorithm then uses four filters to detect horizontal, vertical and diagonal edges in the noise-removed image. The first derivatives in the horizontal and vertical directions are obtained with an edge detection operator, and from these the gradient magnitude and direction are computed. This is followed by an edge-thinning technique called non-maximum suppression: from the estimates of the image gradient, a search is carried out to determine whether the gradient magnitude assumes a local maximum in the gradient direction. Large intensity gradients are more likely to correspond to edges than small ones, but it is difficult to choose a single threshold to classify edge and non-edge pixels; hence the Canny algorithm uses thresholding with hysteresis, which requires two thresholds, low and high. Edges that are easily recognizable or distinct are identified using the high threshold and traced using the directional derivative, while the low threshold recovers small sections of edges. The result is a binary image in which each pixel is marked as edge or non-edge. Within the region of interest computed from the SLIC output, the edges of the optic disc and optic cup are detected using the Canny edge detector and superimposed on the original RGB image to obtain the detected image.
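A minimal sketch of this step (not the authors' code), assuming scikit-image's Canny implementation; the sigma and hysteresis quantiles below are placeholders:

from skimage.feature import canny

def od_edges(roi_grey, sigma=2.0):
    # Gaussian smoothing with the given sigma, gradient computation,
    # non-maximum suppression and hysteresis thresholding in one call.
    return canny(roi_grey, sigma=sigma,
                 low_threshold=0.7, high_threshold=0.95,
                 use_quantiles=True)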

    3. RESULTS

The proposed method for the detection of the optic disc was tested on fundus images obtained from the MESSIDOR and DRIONS databases. The optic disc was correctly detected, and the result can be further used for analysing various diseases. Fig. 2 shows a fundus image of the right eye, which was taken as the input. Only a portion of the image is considered throughout the algorithm.

      Fig.2: Fundus image of right eye

The various steps of the optic disc detection algorithm are shown in Fig. 3.

Fig.3: a) original image, b) first principal component, c) enhanced image, d) after blood vessel removal, e) clustered image, f) after thresholding, g) ROI image, h) edge detected, i) optic disc segmented

The original image with the detected optic cup is shown in Fig. 4.

      Fig.4: a) original fundus image, b) detected optic cup

The performance of the method has been evaluated using several measures. Jaccard's (JC) and Dice's (S) coefficients describe the degree of similarity between two compared elements and are equal to 1 when the segmentation is perfect. Accuracy (Ac) is the sum of the pixels correctly classified as OD and non-OD divided by the total number of pixels in the image. The true positive fraction (TPF) is obtained by dividing the pixels correctly classified as OD by the total number of OD pixels in the gold standard. The false positive fraction (FPF) is calculated by dividing the pixels misclassified as OD by the total number of non-OD pixels in the gold standard. Finally, in order to allow comparison with other authors' work, another measure was calculated: the mean absolute distance (MAD), which measures the accuracy of the OD boundary.
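As an illustrative sketch of how these measures are computed from two binary masks, seg (the algorithm's OD segmentation) and gold (the gold standard); the array names are assumptions, and MAD is omitted because it requires the boundary contours:

import numpy as np

def od_metrics(seg, gold):
    seg, gold = seg.astype(bool), gold.astype(bool)
    tp = np.sum(seg & gold)      # pixels correctly classified as OD
    fp = np.sum(seg & ~gold)     # non-OD pixels misclassified as OD
    fn = np.sum(~seg & gold)     # OD pixels that were missed
    tn = np.sum(~seg & ~gold)    # pixels correctly classified as non-OD
    return {
        "Jc":  tp / (tp + fp + fn),          # Jaccard's coefficient
        "S":   2 * tp / (2 * tp + fp + fn),  # Dice's coefficient
        "Ac":  (tp + tn) / seg.size,         # accuracy
        "TPF": tp / (tp + fn),               # true positive fraction
        "FPF": fp / (fp + tn),               # false positive fraction
    }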

Table I focuses on the analysis of the DRIONS database. The performance of our work is compared with that of two other methods, one based on the watershed transformation and the other based on mathematical morphology.

      TABLE I. PERFORMANCE ANALYSIS

Parameters                   | Mean value of parameters
                             | SLIC    | Watershed Transformation [19] | T. Walter [9]
Jaccard's coefficient (Jc)   | 0.8423  | 0.7185                        | 0.6227
Dice's coefficient (S)       | 0.9084  | 0.8243                        | 0.6813
Accuracy                     | 0.9934  | 0.9649                        | 0.9589
Mean absolute distance (MAD) | 2.4945  | 13.8723                       | 29.6289
TPF                          | 0.9281  | 0.7685                        | 0.6715
FPF                          | 0.0050  | 0.0106                        | 0.0210

    4. DISCUSSION

Variability between fundus images in colour, intensity, size, presence of artifacts, etc., leads each state-of-the-art method to use a different input image: the green [22], [23], [24] or red [25], [26], [27] band of the original RGB image, or even a combination of both [28], [29], the intensity component extracted from the HSI representation [30], or the lightness channel of the HLS space [31]. However, owing to this fundus image variability, these inputs do not always provide the desired results. Therefore, PCA, which is able to maximize the separation between the different objects of the image, has been proposed in this paper as a more appropriate input image. For example, in Fig. 5, PCA is compared with the use of the red component on a specific image. It can be observed that, while the red component is completely oversaturated, PCA yields a grey image in which the OD can be segmented.

      Fig. 5: a) original fundus image, b) red component, c) PCA image

    5. CONCLUSION

Diabetic retinopathy, hypertension, glaucoma, and macular degeneration are nowadays some of the most common causes of visual impairment and blindness [32], [33], [34]. Early diagnosis and appropriate referral for treatment of these diseases can prevent visual loss. The proposed algorithm is able to automatically locate the OD in a fundus image and enables the early detection of diseases related to the fundus. Its main advantage is its full automation: since it does not require any intervention by clinicians, it releases necessary resources (specialists) and reduces consultation time, which facilitates its use in primary care. The use of several simulations with random markers helps to avoid the sub-segmentation problems that would arise if the image were segmented using only one internal marker located in the geodesic centre of its largest and brightest object. The optic cup can also be detected using the same algorithm, which enables the cup-to-disc (C/D) ratio to be measured and used for glaucoma diagnosis. A high C/D ratio indicates that a fundus is suspicious of glaucoma.

REFERENCES

1. R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," Journal of LaTeX Class Files, vol. 6, no. 1, Dec. 2011.

2. M. Foracchia, E. Grisan, and A. Ruggeri, "Detection of optic disc in retinal images by means of a geometrical model of vessel structure," IEEE Trans. Med. Imag., vol. 23, no. 10, pp. 1189–1195, 2004.

3. S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, "Automatic detection of the optic nerve in retinal images," in Proc. IEEE Int. Conf. Image Processing, 1989, vol. 1, pp. 1–5.

4. S. C. Lee, Y. Wang, and E. T. Lee, "Computer algorithm for automated detection and quantification of microaneurysms and hemorrhages (HMAs) in color retinal images," in Proc. SPIE Conf. Image Perception and Performance, 1999, vol. 3663, pp. 61–71.

5. A. Aquino, M. E. Gegúndez-Arias, and D. Marín, "Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques," IEEE Trans. Med. Imag., vol. 29, no. 11, pp. 1860–1869, 2010.

6. M. Lalonde, M. Beaulieu, and L. Gagnon, "Fast and robust optic disc detection using pyramidal decomposition and Hausdorff-based template matching," IEEE Trans. Med. Imag., vol. 20, no. 11, pp. 1193–1200, 2001.

7. J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, and L. Kennedy, "Optic nerve head segmentation," IEEE Trans. Med. Imag., vol. 23, no. 2, pp. 256–264, 2004.

8. J. Xu, O. Chutatape, E. Sung, C. Zheng, and P. Chew Tec Kuan, "Optic disk feature extraction via modified deformable model technique for glaucoma analysis," Pattern Recognition, vol. 40, pp. 2063–2076, 2007.

9. T. Walter, J. C. Klein, P. Massin, and A. Erginay, "A contribution of image processing to the diagnosis of diabetic retinopathy: detection of exudates in color fundus images of the human retina," IEEE Trans. Med. Imag., vol. 21, no. 10, pp. 1236–1243, 2002.

10. D. Welfer, J. Scharcanski, C. M. Kitamura, M. M. Dal Pizzol, L. W. B. Ludwig, and D. R. Marinho, "Segmentation of the optic disk in color eye fundus images using an adaptive morphological approach," Computers in Biology and Medicine, vol. 40, no. 2, pp. 124–137, 2010.

11. T. Walter, J. C. Klein, P. Massin, and A. Erginay, "A contribution of image processing to the diagnosis of diabetic retinopathy: detection of exudates in color fundus images of the human retina," IEEE Trans. Med. Imag., vol. 21, no. 10, pp. 1236–1243, Oct. 2002.

12. M. Niemeijer, M. D. Abràmoff, and B. van Ginneken, "Fast detection of the optic disc and fovea in color fundus photographs," Med. Image Anal., vol. 13, no. 6, pp. 859–870, 2009.

13. A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, "Comparison of colour spaces for optic disc localisation in retinal images," in Proc. 16th Int. Conf. Pattern Recognit., 2002, vol. 1, pp. 743–746.

14. J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, and L. Kennedy, "Optic nerve head segmentation," IEEE Trans. Med. Imag., vol. 23, no. 2, pp. 256–264, Feb. 2004.

15. I. T. Jolliffe, Principal Component Analysis, 2nd ed. New York: Springer, 2002.

16. J. C. Russ, The Image Processing Handbook, 5th ed. Boca Raton, FL: CRC Press, 2007.

17. L. Vincent, "Minimal path algorithms for the robust detection of linear features in gray images," in Proc. 4th Int. Symp. Math. Morphol. Appl. Image Signal Process., 1998, pp. 331–338.

18. S. Beucher and F. Meyer, Mathematical Morphology in Image Processing, E. Dougherty, Ed. New York: Marcel Dekker, 1992.

19. C. Eswaran, A. Reza, and S. Hati, "Extraction of the contours of optic disc and exudates based on marker-controlled watershed segmentation," in Proc. Int. Conf. Comput. Sci. Inf. Technol., 2008, pp. 719–723.

20. Messidor Techno-Vision Project, "MESSIDOR: Digital retinal images," France, 2008 [Online]. Available: http://messidor.crihan.fr/download-en.php

21. Retinal Image Computing & Understanding, "ONHSD: Optic Nerve Head Segmentation Dataset," Univ. of Lincoln, 2004 [Online]. Available: http://reviewdb.lincoln.ac.uk/Image Datasets/ONHSD.aspx

22. M. Niemeijer, M. D. Abràmoff, and B. van Ginneken, "Fast detection of the optic disc and fovea in color fundus photographs," Med. Image Anal., vol. 13, no. 6, pp. 859–870, 2009.

23. M. Lalonde, M. Beaulieu, and L. Gagnon, "Fast and robust optic disc detection using pyramidal decomposition and Hausdorff-based template matching," IEEE Trans. Med. Imag., vol. 20, no. 11, pp. 1193–1200, Nov. 2001.

24. C. Eswaran, A. Reza, and S. Hati, "Extraction of the contours of optic disc and exudates based on marker-controlled watershed segmentation," in Proc. Int. Conf. Comput. Sci. Inf. Technol., 2008, pp. 719–723.

25. T. Walter, J. C. Klein, P. Massin, and A. Erginay, "A contribution of image processing to the diagnosis of diabetic retinopathy: detection of exudates in color fundus images of the human retina," IEEE Trans. Med. Imag., vol. 21, no. 10, pp. 1236–1243, Oct. 2002.

26. D. Welfer, J. Scharcanski, C. M. Kitamura, M. M. D. Pizzol, L. W. Ludwig, and D. R. Marinho, "Segmentation of the optic disk in color eye fundus images using an adaptive morphological approach," Comput. Biol. Med., vol. 40, no. 2, pp. 124–137, 2010.

27. J. Hajer, H. Kamel, and E. Noureddine, "Localization of the optic disk in retinal image using the watersnake," in Proc. Int. Conf. Comput. Commun. Eng., 2008, pp. 947–951.

28. A. Aquino, M. E. Gegúndez-Arias, and D. Marín, "Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques," IEEE Trans. Med. Imag., vol. 29, no. 11, pp. 1860–1869, Nov. 2010.

29. S. Lu, "Accurate and efficient optic disc detection and segmentation by a circular transformation," IEEE Trans. Med. Imag., vol. 30, no. 12, pp. 2126–2133, Dec. 2011.

30. J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, and L. Kennedy, "Optic nerve head segmentation," IEEE Trans. Med. Imag., vol. 23, no. 2, pp. 256–264, Feb. 2004.

31. A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, "Comparison of colour spaces for optic disc localisation in retinal images," in Proc. 16th Int. Conf. Pattern Recognit., 2002, vol. 1, pp. 743–746.

32. D. Pascolini and S. P. Mariotti, "Global estimates of visual impairment: 2010," Br. J. Ophthalmol., pp. 614–621, 2011.

33. World Health Organization, "Action plan for the prevention of blindness and visual impairment 2009–2013," 2010.

34. H. R. Taylor, "Eye care for the community," Clin. Exp. Ophthalmol., vol. 30, no. 3, pp. 151–154, 2002.
