A Fast, Efficient and Automated method to detect Retinal Blood Vessels from color fundus Images

DOI : 10.17577/IJERTV1IS3038


Jaspreet Kaur¹, Dr. H. P. Sinha²

ECE, MMU, Mullana University

Abstract

Diabetes mellitus, a metabolic disorder, has become one of the most rapidly increasing health threats both in India and worldwide. The complication of diabetes associated with the retina of the eye is diabetic retinopathy. A patient with the disease has to undergo periodic screening of the eyes. For the diagnosis, ophthalmologists use color retinal images of a patient acquired from a digital fundus camera. The present study is aimed at developing an automatic system for the extraction of normal and abnormal features in color retinal images. Prolonged diabetes causes micro-vascular leakage and micro-vascular blockage within the retinal blood vessels. A filter based approach with a bank of Gabor filters is used to segment the vessels. The frequency and orientation of the Gabor filter are tuned to match those of the part of the vessel to be extracted in the green channel image. To classify the pixels into vessels and non-vessels, entropic thresholding based on the gray level co-occurrence matrix is applied. The performance of the method is evaluated on two publicly available retinal databases with hand-labeled ground truths. On the DRIVE database the method achieves a sensitivity of 86.4% with a specificity of 96%, while on the STARE database it achieves a sensitivity of 85% and a specificity of 96%. The system could assist ophthalmologists to detect the signs of diabetic retinopathy at an early stage, for a better treatment plan and to improve the vision related quality of life.

Keywords: Vessel segmentation, Gabor filter, Image Processing, Diabetic Retinopathy

  1. INTRODUCTION

Diabetic Retinopathy (DR) is an eye disease which occurs due to diabetes. It damages the small blood vessels in the retina, resulting in loss of vision. The risk of the disease increases with age and therefore middle-aged and older diabetics are prone to Diabetic Retinopathy. Retinopathy is a progressive disease, which can advance from a mild stage to a proliferative stage. There are three stages: (i) early stage or non-proliferative diabetic retinopathy (NPDR), also called background retinopathy, (ii) maculopathy and (iii) progressive or proliferative retinopathy. These stages of DR are shown in Fig. 1.

The early stage is further classified as mild NPDR and moderate to severe NPDR [7], [21]. In mild NPDR, signs such as microaneurysms, dot and blot hemorrhages and hard or intra-retinal exudates are seen in the retinal images. Microaneurysms are small, round and dark red dots with sharp margins and are often temporal to the macula [7], [8]. Their size ranges from 20 to 200 microns, i.e., less than 1/12th the diameter of an average optic disc, and they are the first detectable signs of retinopathy. Hemorrhages are of two types: flame and dot-blot hemorrhages. Flame hemorrhages occur at the nerve fibers and originate from precapillary arterioles, which are located in the inner layer of the retina [5]. Dot and blot hemorrhages are round, smaller than microaneurysms and occur at various levels of the retina, especially at the venous end of capillaries. Hard exudates are shiny, irregularly shaped and found near prominent microaneurysms or at the edges of retinal edema. In the early stage, the vision is rarely affected and the disease can be identified only by regular dilated eye examinations [22].

    FIGURE 1: Main stages of Retinopathy with the disorders

    Diabetic Maculopathy is a stage where fluid leaks out of damaged vessels and accumulates at the center of the retina called macula (which helps in seeing the details of the vision very clearly) causing permanent loss of vision. This water logging of the macula area is called clinically significant macular oedema which can be treated by laser treatment [22],[4].

Proliferative diabetic retinopathy is defined as the growth of abnormal new vessels (neovascularization) on the inner surface of the retina; it is divided into two categories: neovascularization of the optic disk and neovascularization elsewhere in the retina [7], [8]. The above stages can be seen clearly in Fig. 2, which shows the different changes that take place in the retina of a DR patient over a period of time.



    FIGURE 2: Different stages of Diabetic Retinopathy

    1.1 RELATED WORK:

Sinthanayothin [12] uses maximum variance to obtain the optic disk center and a region growing segmentation method to obtain the exudates. [11] tracks the optic disk through a pyramidal decomposition and obtains disk localization from template-based matching that uses the Hausdorff distance measure on the binary edge image. However, the above methods will fail if exudates similar in brightness and size to the optic disk are present. [1], [13] use the blood vessel intersection property to obtain the optic disk. However, they use the whole blood vessel network, which can lead to wrong or inconclusive results because of noise from the fringe blood vessels. In contrast, we use only the main blood vessels, which is more robust. Statistical classification techniques have been very popular lately for the problem of lesion classification. Exudates have color properties similar to the optic disk, while microaneurysms are difficult to segment due to their similarity in color and proximity to blood vessels. In order to classify detected features, candidate regions are typically detected using color/morphological techniques and classification is then done on these regions using some classifier. Many classifiers have been tried, including Fuzzy C-means clustering [15], SVMs [17], [22], [9] and simple Bayesian classification [9].

    STARE is a complete system for various retinal diseases [6]. The optic disk is detected using blood vessel convergence and high intensity property. In order to determine the features and classification method to be used for a given lesion, a Bayesian probabilistic system is used.

This paper focuses on the automated detection of vascular changes that are seen clearly in the moderate to severe stages of DR. These abnormalities are detected by processing retinal images using Gabor filter banks. Extraction of vessels using the gray level co-occurrence matrix is used for the segmentation of vessels. Two publicly available databases, DRIVE and STARE, are used for testing the segmentation of blood vessels. The results are then compared with the help of the receiver operating characteristic (ROC) curve.

The rest of the paper is organized as follows. Section 2 describes the detection of blood vessels, covering the Gabor filter bank, the spatial filtering of vessels, the extraction of vessels and entropy thresholding. The results of the algorithm over an extensive dataset are then presented, followed by the conclusion.

  2. DETECTION OF BLOOD VESSEL

There are different image-processing methods that can be used for capturing variations. Methods include image segmentation, edge or boundary detection, and shape and texture analysis. The detection process can be carried out either on the original image or in the transform domain. Some of the transforms that are used in image processing are the wavelet transform, the Fourier transform, and the discrete cosine transform (DCT). This paper utilizes Gabor filter banks for the automated detection and classification of retinal images.

    1. Gabor filter banks:

Gabor filters have been used extensively by researchers for texture detection, classification and image retrieval purposes. The real part of the 2D Gabor filter used in the context of retinal vessel segmentation is defined in the spatial domain as

g(x, y) = exp( -1/2 [ x_θ² / σ_x² + y_θ² / σ_y² ] ) · cos(2π f x_θ)

Where,

x_θ = x cos θ + y sin θ and y_θ = -x sin θ + y cos θ.

The parameters present in the Gabor function defined above are as follows. The angle θ is the orientation of the filter; for example, an angle of zero gives a filter that responds well to vertical features in an image. The parameter f is the central frequency of the pass band. Next, σ_x is the standard deviation of the Gaussian in the x direction, along the filter, which determines the bandwidth of the filter. Finally, σ_y is the standard deviation of the Gaussian across the filter, which controls the orientation selectivity of the filter.
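To make the definition concrete, a minimal NumPy sketch of such a kernel is given below; the function name, kernel size and parameter values are illustrative choices and not values prescribed by this paper.

```python
import numpy as np

def gabor_kernel(size, f, theta, sigma_x, sigma_y):
    """Real part of a 2D Gabor filter: a Gaussian envelope modulated
    by a cosine of frequency f along the orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates to the filter orientation
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (x_t**2 / sigma_x**2 + y_t**2 / sigma_y**2))
    return envelope * np.cos(2 * np.pi * f * x_t)

# Example: a 25x25 kernel tuned to vertical structures (theta = 0)
kernel = gabor_kernel(size=25, f=0.1, theta=0.0, sigma_x=3.0, sigma_y=6.0)
```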

    2. SPATIAL FILTERING OF VESSELS

Spatial filtering is performed on the input retinal image to highlight the vessel structures while suppressing the background noise and other artifacts. Because of the directional selectivity of the Gabor filter, it is possible to enhance pixels of vessels oriented along various directions. The response of applying a Gabor filter to a vessel segment is given by

R_θ(x, y) = I_g(x, y) * g_θ(x, y)

where g_θ(x, y) is the Gabor filter defined by the above equation, I_g(x, y) is the green channel image with vessel segments oriented along different directions, and * denotes 2D convolution. It can be seen that the shape of the filter is similar to that of the vessels, and when it is positioned at the center of a vessel at the matching scale and orientation, it provides a maximum response along the vessel direction and a minimum response along the perpendicular direction.

In order to detect vessels oriented along different directions, the filter has to be rotated along those directions and only the maximum response at each position is retained as follows

R(x, y) = max_θ R_θ(x, y)

For each pixel position in the image, spatial filtering is performed by convolving the image with the Gabor kernel along different orientations. The angle of the filter is rotated from 0 to 170 degrees to produce a single peak response at the center of a vessel segment. Figure 3 shows the response of vessels for filters with orientations of 0, 45 and 90 degrees. It can be seen that only the vessels along a given direction respond more strongly than vessels oriented in other directions.
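A sketch of this orientation sweep is given below, reusing the gabor_kernel helper from the previous listing and SciPy for the convolution; the twelve orientations in 15-degree steps and all parameter values are illustrative assumptions, not values fixed by the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_response(green, f=0.1, sigma_x=3.0, sigma_y=6.0, n_orient=12):
    """Maximum Gabor filter response over a bank of oriented kernels.
    gabor_kernel is the helper sketched in the previous listing."""
    response = np.full(green.shape, -np.inf)
    for k in range(n_orient):
        theta = np.deg2rad(k * 180.0 / n_orient)   # 0, 15, ..., 165 degrees
        kernel = gabor_kernel(25, f, theta, sigma_x, sigma_y)
        # keep, at every pixel, the strongest response over all orientations
        response = np.maximum(response, convolve(green.astype(float), kernel))
    return response
```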

FIGURE 3: Gabor Filter Response (GFR) images; (a) Input image; (b) GFR for θ = 0°; (c) GFR for θ = 45°; (d) GFR for θ = 90°; (e) Overall Gabor response image with enhanced vessels.

3. EXTRACTION OF VESSELS:

To extract the enhanced vessel segments from the Gabor filter response image, an effective thresholding scheme is required. Entropy-based thresholding using the gray level co-occurrence matrix (GLCM) is employed. It computes the optimal threshold by taking into account the spatial distribution of gray levels embedded in the co-occurrence matrix. The GLCM contains information on the distribution of gray level frequencies as well as edge information, which is very useful in finding the threshold value. For a gray scale image I of spatial dimension M×N with gray levels in the range [0, 1, ..., L-1], the gray level co-occurrence matrix is an L×L square matrix, denoted T = [t(i, j)]L×L. Its elements specify the number of transitions between all pairs of gray levels in a particular way. For each image pixel at spatial co-ordinate (m, n) with gray level f(m, n), the four nearest neighbouring pixels at locations (m+1, n), (m-1, n), (m, n+1) and (m, n-1) are considered. The co-occurrence matrix is formed by comparing the gray level f(m, n) with the corresponding neighbouring gray levels f(m+1, n), f(m-1, n), f(m, n+1) and f(m, n-1).

Depending upon the way in which gray level i follows gray level j, different definitions of the co-occurrence matrix are possible. Considering only horizontally right and vertically lower transitions, the matrix element is given by

t(i, j) = Σ_{m=1}^{M} Σ_{n=1}^{N} δ(m, n), where δ(m, n) = 1 if f(m, n) = i and (f(m, n+1) = j or f(m+1, n) = j), and δ(m, n) = 0 otherwise.

Dividing by the total number of transitions in the co-occurrence matrix, the transition probability from gray level i to gray level j is obtained as

p(i, j) = t(i, j) / Σ_{i=0}^{L-1} Σ_{j=0}^{L-1} t(i, j)

FIGURE 4: Gray level co-occurrence matrix.
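For illustration, the following sketch builds such a co-occurrence matrix from the horizontally right and vertically lower transitions and normalizes it into transition probabilities; the function name and the use of NumPy are implementation choices, not part of the original method description.

```python
import numpy as np

def cooccurrence_matrix(img, levels=256):
    """Gray level co-occurrence matrix of horizontal-right and
    vertical-lower transitions, plus its normalized probabilities.
    img must already be quantized to integer gray levels in [0, levels-1]."""
    img = img.astype(np.intp)
    t = np.zeros((levels, levels), dtype=np.float64)
    # horizontally right transitions: f(m, n) -> f(m, n+1)
    np.add.at(t, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1)
    # vertically lower transitions: f(m, n) -> f(m+1, n)
    np.add.at(t, (img[:-1, :].ravel(), img[1:, :].ravel()), 1)
    p = t / t.sum()          # transition probabilities p(i, j)
    return t, p
```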

4. ENTROPY THRESHOLDING:

Based on the gray level variation within or between the object and the background, the gray level co-occurrence matrix is divided into quadrants. Let Th be a threshold in the range 0 ≤ Th ≤ L-1 that partitions the gray level co-occurrence matrix into four quadrants, namely A, B, C and D.

      FIGURE 5: Four quadrants of co-occurrence matrix

Quadrant A represents gray level transitions within the object, while quadrant C represents gray level transitions within the background. Gray level transitions between the object and the background, i.e. across the object boundary, fall in quadrants B and D. These four regions can be grouped into two classes, referred to as the local quadrants and the joint quadrants: quadrants A and C are the local quadrants, since their gray level transitions arise within the object or within the background of the image, while quadrants B and D are the joint quadrants, since their gray level transitions occur between the object and the background of the image.

The local entropic threshold is calculated considering only quadrants A and C. The probabilities of the object class (quadrant A) and the background class (quadrant C) are defined as

P_A(Th) = Σ_{i=0}^{Th} Σ_{j=0}^{Th} p(i, j) and P_C(Th) = Σ_{i=Th+1}^{L-1} Σ_{j=Th+1}^{L-1} p(i, j)

and the normalized probabilities of the object class and the background class, as functions of the threshold Th, are defined as

p_A(i, j) = p(i, j) / P_A(Th) for 0 ≤ i, j ≤ Th and p_C(i, j) = p(i, j) / P_C(Th) for Th+1 ≤ i, j ≤ L-1.

The second-order entropy of the object, i.e. the local transition entropy of quadrant A, denoted H_A(Th), is given by

H_A(Th) = -(1/2) Σ_{i=0}^{Th} Σ_{j=0}^{Th} p_A(i, j) log2 p_A(i, j)

Similarly, the second-order entropy of the background, i.e. the local transition entropy of quadrant C, denoted H_C(Th), is given by

H_C(Th) = -(1/2) Σ_{i=Th+1}^{L-1} Σ_{j=Th+1}^{L-1} p_C(i, j) log2 p_C(i, j)

By summing up the local transition entropies, the total second-order local entropy of the object and the background is given by

H_T(Th) = H_A(Th) + H_C(Th)

Finally, the gray level T_E corresponding to the maximum of H_T(Th) over Th gives the optimal threshold value:

T_E = arg max_{Th} H_T(Th)
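A minimal sketch of this threshold search, assuming the normalized co-occurrence probabilities p(i, j) computed as above, could be written as follows (the function name is illustrative):

```python
import numpy as np

def entropic_threshold(p):
    """Return the gray level Th maximizing the total second-order
    local entropy H_T(Th) = H_A(Th) + H_C(Th) of a normalized GLCM p."""
    L = p.shape[0]
    best_th, best_h = 0, -np.inf
    for th in range(L - 1):
        pa = p[:th + 1, :th + 1]            # quadrant A (object)
        pc = p[th + 1:, th + 1:]            # quadrant C (background)
        sa, sc = pa.sum(), pc.sum()
        if sa == 0 or sc == 0:
            continue
        qa, qc = pa / sa, pc / sc           # normalized cell probabilities
        ha = -0.5 * np.sum(qa[qa > 0] * np.log2(qa[qa > 0]))
        hc = -0.5 * np.sum(qc[qc > 0] * np.log2(qc[qc > 0]))
        if ha + hc > best_h:
            best_h, best_th = ha + hc, th
    return best_th
```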

It can be seen that there exist small unconnected pixel groups in the thresholded image. These isolated pixels are removed by performing length filtering based on connected pixel labeling. The result of removing these unconnected pixels can be seen in the final segmented image. To ensure that only the section of the image containing data is considered during image processing and analysis, a mask image is generated for each image. It is applied to remove any artifacts present outside the region of interest.
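The removal of isolated pixels described above can be sketched, for example, with connected-component labeling from SciPy; the minimum component size used here is an illustrative value, not one reported in the paper.

```python
import numpy as np
from scipy import ndimage

def length_filter(binary, min_size=30):
    """Remove connected components smaller than min_size pixels."""
    binary = binary.astype(bool)
    labels, num = ndimage.label(binary)    # default structure: 4-connectivity
    sizes = ndimage.sum(binary, labels, range(1, num + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
    return binary & keep
```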

FIGURE 6: Segmented vessels; (a) Thresholded Gabor response image; (b) Final segmented image after removing unconnected pixels.

  4. RESULT

The retinal images from the DRIVE and STARE databases are used for evaluating the performance of the vessel segmentation method. A bank of twelve Gabor filters oriented in the range of 0 to 170 degrees is used to enhance the multi-oriented vessels. Increasing the number of filters in the bank did not result in a significant improvement of the result but increased the number of convolution operations. Quantitative evaluation of the segmentation algorithm is done by comparing the output image with the corresponding manually segmented image. The comparison yields statistical measures that can be summarized using the ground truth table shown in Table 1. True positives are pixels marked as vessel both in the segmentation given by the method and in the manual segmentation used as ground truth. False positives are pixels marked as vessel by the method but that are actually negatives in the ground truth. True negatives are pixels marked as background in both images, and false negatives are pixels marked as background by the method but that are actually vessel pixels.

TABLE 1: Performance analysis using GROUND TRUTH table

                             Vessel (ground truth)      Non-vessel (ground truth)
    Vessel (method)          True positive (Tp)         False positive (Fp)
    Non-vessel (method)      False negative (Fn)        True negative (Tn)

From these, the sensitivity and specificity are evaluated. Sensitivity gives the percentage of pixels correctly classified as vessels by the method, and specificity gives the percentage of non-vessel pixels classified as non-vessels by the method, as follows:

Sensitivity = Tp / (Tp + Fn)

Specificity = Tn / (Tn + Fp)

where Tp is the number of true positives, Tn the number of true negatives, Fp the number of false positives and Fn the number of false negatives over all pixels. The method is compared with the matched filter based method of [14] using the DRIVE database. Table 2 shows that the Gabor filter is better in the classification of vessels, with a lower false positive fraction.
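Given a binary vessel map and the corresponding manual segmentation, these measures can be computed per pixel as in the following sketch (function name and array conventions are assumptions):

```python
import numpy as np

def sensitivity_specificity(pred, truth):
    """Per-pixel sensitivity and specificity of a binary vessel map
    against a manually segmented ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return tp / (tp + fn), tn / (tn + fp)
```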

    TABLE 2 : Performance of retinal blood vessels segmentation method on DRIVE database

The methods are also evaluated using the receiver operating characteristic (ROC) curve. ROC curves are formed by ordered pairs of true positive (sensitivity) and false positive (1 - specificity) rates. The points on the ROC curve are obtained by varying the threshold on the Gabor filter output image. For each threshold value, the pair formed by the true positive and false positive rates of the method's output is marked on the graph, producing a curve as in Figure 7. The closer an ROC curve is to the upper left corner, the better the method's performance, with the point (0, 1) representing perfect agreement with the ground truth. Accordingly, an ROC curve is said to dominate another if it is completely above and to the left of it. In the experiment performed, the threshold was varied around the optimal value to obtain a number of points on the ROC curve for both methods. In the figure it can be clearly seen that the Gabor filter method performs better than the matched filter based method.
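One possible way to generate such ROC points from the Gabor response image and a ground-truth vessel mask is sketched below; the number of thresholds and the uniform threshold grid are illustrative choices.

```python
import numpy as np

def roc_points(response, truth, num_thresholds=50):
    """Sweep thresholds over a Gabor response image and return
    (false positive rate, true positive rate) pairs for an ROC curve."""
    truth = truth.astype(bool)
    points = []
    for t in np.linspace(response.min(), response.max(), num_thresholds):
        pred = response >= t
        tp = np.sum(pred & truth)
        fn = np.sum(~pred & truth)
        fp = np.sum(pred & ~truth)
        tn = np.sum(~pred & ~truth)
        points.append((fp / (fp + tn), tp / (tp + fn)))
    return points
```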

    FIGURE 7: ROC curves of Gabor and Matched filter methods

The results of the proposed method are also compared with those of [13] on twenty images from the STARE database, and the result is depicted in Table 3. Here also the proposed method performs better, with a lower false positive rate, even in the presence of lesions in the abnormal images.

    TABLE 3: Comparison of vessel segmentation results on STARE database

FIGURE 8: Result of vessel segmentation on image from DRIVE database; (a) Input image; (b) Gabor response image; (c) Manual segmentation by expert; (d) Automatic segmentation by the method.

FIGURE 9: Result of vessel segmentation on image from STARE database; (a) Input image; (b) Gabor response image; (c) Manual segmentation by expert; (d) Automatic segmentation by the method.

5. CONCLUSION

Colour retinal images from two different databases were used to evaluate the robustness and accuracy of the method. Based on the results obtained, it can be demonstrated that the method will be useful for a wide range of retinal images. A brief comparison with some other vessel segmentation algorithms was also provided. It can be concluded that the Gabor filter provides better results when compared with other filter based methods. Since the scale of the Gabor filter can be changed, it will be very useful in multi-scale analysis of vessels. For the pixel level classification of vessels, entropic thresholding provides a fast and better result. The segmented vessels can be used to obtain the control points used in retinal registration techniques. Based on this method of vessel segmentation, it is possible to quantify proliferative diabetic retinopathy. It is hoped that vessel segmentation aids clinicians to detect and monitor the progression of the disease, minimizes the examination time and helps in a better treatment plan.

The segmentation of blood vessels in colour retinal images using Gabor filters has been described in this paper. It was found that the appearance of vessels is most prominent in the gray scale image containing only the wavelength of green. Therefore, segmentation of vessels was performed using only the green channel of the RGB colour image. The Gabor filter, whose applications can be found in problems such as detecting strokes in character recognition and detecting roads in satellite image analysis, was explored to detect and enhance vessel features in the retinal image. When compared with the matched filter for detecting line-like features, the Gabor filter provided a better result as it has optimal localization in both the frequency and space domains. The Gabor filter, tuned to a suitable frequency and orientation, was able to emphasize vessels along that direction while filtering out background noise and other undesirable structures. Values of all the filter parameters were selected based on the properties of vessels. When the filter was aligned along the orientation of a vessel it produced a single peak response along that direction. A bank of 12 Gabor filters oriented along different directions in the range of 0 to 170 degrees was used to enhance the multi-oriented vessels. Increasing the number of filters in the bank did not result in a significant improvement of the result but increased the time-consuming convolution operations. The resulting enhanced vessels were then subjected to thresholding for vessel pixel classification. Entropic threshold calculation based on the gray level co-occurrence matrix, which contains information on the distribution of gray level frequencies and edge information, has been presented. Two publicly available databases were used to evaluate the performance of the method and also to compare it with matched filter methods. It was found that for the DRIVE database the method provided a sensitivity of 86.4 ± 4.0% and a specificity of 96 ± 1.0%, and for the STARE database a sensitivity of 85% and a specificity of 96% were achieved. It was also found that the number of misclassified pixels was lower than for matched filter methods on the same databases.

1. K. Akita and H. Kuga. A computer method of understanding ocular fundus images. Pattern Recognition, 15(6):431-443, 1982.

2. Frame A., McCree M., Olson J., McHardy K., Sharp P., and Forrester J.V., Structural analysis of retinal vessels, Proceedings of the 6th International Conference on Image Processing and its Applications, vol. 2, pp. 824-827, 1996.

3. Chaudhuri S., Chatterjee S., Katz N., Nelson M., and Goldbaum M., Detection of blood vessels in retinal images using two dimensional matched filters, IEEE Transactions on Medical Imaging, vol. 8, no. 3, pp. 263-269, 1989.

  4. J.L.Company, Grading diabetic retinopathy from stereoscopic color fundus photographs – an extension of the modified airlie house classification, ETDRS Report No. 10, Ophthalmology, the Journal of the American Academy of Ophthalmology, vol. 98, no. 5, p. 78, May 1991

5. K. J. Frank and J. P. Dieckert, Clinical review of diabetic eye disease: A primary care perspective, Southern Medical Journal, vol. 89, no. 5, pp. 463-470, May 1996.

6. M. Goldbaum, S. Moezzi, A. Taylor, S. Chatterjee, J. Boyd, E. Hunter, and R. Jain. Automated diagnosis and image understanding with object extraction, object classification, and inferencing in retinal images. International Conference on Image Processing, 3:695-698, Sept. 1996.

7. R. Klein, Diabetic retinopathy, Public Health, vol. 17, pp. 137-158, May 1996.

8. J. G. O'Shea and D. A. Infeld, Screening and monitoring diabetic retinopathy, Birmingham and Midland Eye Centre, 1999.

9. H. Wang, W. Hsu, G. K. G., and L. M. L. An effective approach to detect lesions in color retinal images. In Proc. Conf. Comp. Vision Pattern Rec., pages II: 181-186, 2000.

10. Chen J., Sato Y., and Tamura S., Orientation space filtering for multiple orientation line segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 417-429, 2000.

11. L. Gagnon, M. Lalonde, M. Beaulieu, and M.-C. Boucher. Procedure to detect anatomical structures in optical fundus images. In Proc. SPIE Medical Imaging: Image Processing, pages 1218-1225, 2001.

12. C. Sinthanayothin, J. F. Boyce, T. H. Williamson, H. L. Cook, E. Mensah, S. Lal, and D. Usher. Automated detection of diabetic retinopathy on digital fundus images. Diabetic Medicine, 19:105-112, 2002.

13. A. Hoover and M. Goldbaum. Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels. IEEE Trans. on Medical Imaging, 22:951-958, Aug. 2003.

14. Chanwimaluang T., and Fan G., An efficient algorithm for extraction of anatomical structures in retinal images, Proceedings of International Conference on Image Processing, vol. 1, pp. 1093-1096, 2003.

  15. A. Osareh. Automated Identification of Diabetic Retinal Exudates and the Optic Disc. PhD thesis, Univ. of Bristol, Jan. 2004.

16. Bone H., Steel C., and Steel D., Screening for diabetic retinopathy, Optometry, vol. 6, no. 10, pp. 40-43, 2004.

17. X. Zhang and O. Chutatape. Top-down and bottom-up strategies in lesion detection of background diabetic retinopathy. In Proc. Conf. Comp. Vision Pattern Rec., pages 422-428, 2005.

18. Chang C. I., Du Y., Wang J., Guo S. M., and Thouin P. D., Survey and comparative analysis of entropy and relative entropy thresholding techniques, IEEE Proceedings of Vision, Image and Signal Processing, vol. 153, no. 6, pp. 837-850, 2006.

19. Al-Rawi M., Qutaishat M., and Arrar M., An improved matched filter for blood vessel detection of digital retinal images, Computers in Biology & Medicine, vol. 37, no. 2, pp. 262-267, 2007.

20. Sopharak, K. Thet Nwe, Y. A. Moe, M. N. Dailey, and B. Uyyanonvara. Automatic exudate detection with a naïve Bayes classifier. In International Conference on Embedded Systems and Intelligent Technology (ICESIT), pages 139-142, Feb. 2008.

21. Dougherty G., Johnson M. J., and Wiers M., Measurement of retinal vascular tortuosity and its application to retinal pathologies, Journal of Medical & Biological Engineering & Computing, vol. 48, no. 1, pp. 87-95, 2010.

  22. N. J. Lingel, Care of the patient with diabetic retinopathy, Pacific On-Line Optometry Education.

  23. www.wikipedia.com
