A Machine Learning Based Approach for Automated Glaucoma Identification

DOI : 10.17577/IJERTV13IS010083


Abdul Basit Ansari, Shraddha Kumar, Namrata Bhatt

Department of CSE, SDBCT, Indore, India

Abstract: Early glaucoma detection is essential for both preventing visual loss and properly controlling the condition. In this effort, AI models have shown great promise as useful instruments, providing precise and effective detection techniques. These models analyse medical images, such as fundus photographs and optical coherence tomography (OCT) scans, using a variety of methods, including machine learning and deep learning algorithms. Through image processing, artificial intelligence models are able to recognize distinct patterns linked to glaucoma, including thinning of the retinal nerve fiber layer and alterations in the optic nerve head. The capacity of AI models to swiftly and reliably process massive amounts of medical data is a significant benefit for glaucoma detection. This paper presents a methodology based on a neural network model that analyzes fundus images, both glaucoma-positive and glaucoma-negative, for automated detection. The approach employs denoising, feature extraction, and subsequent training of a neural network model based on the gradient descent approach. The developed algorithm is validated through testing on new images, and it is found that the proposed approach attains a classification accuracy of 98%, beating the baseline algorithm in the domain.

Keywords: Glaucoma, Fundus Imagery, Denoising, Segmentation, Feature Extraction, Neural Networks.

I. INTRODUCTION

Glaucoma poses a significant threat to vision and can lead to irreversible blindness if left untreated [1]. This group of eye conditions is characterized by increased intraocular pressure, which can damage the optic nerve responsible for transmitting visual information to the brain [2]. The danger lies in the gradual and often asymptomatic progression of the disease, as individuals may not be aware of any symptoms until significant vision loss occurs [3]. Glaucoma typically affects peripheral vision first, leading to tunnel vision and, eventually, total blindness if not managed effectively. Hence, it is necessary to identify the onset and progression of glaucoma as early as possible. Machine-learning-based tools are being explored extensively to aid the glaucoma detection process and make it more accurate and fast [4]. Conventional techniques for diagnosing glaucoma frequently depend on doctors' subjective evaluations, which can be laborious and prone to human error. AI models, on the other hand, can diagnose patients more quickly and accurately since they can evaluate hundreds of photographs in a fraction of the time it would take a human expert. Furthermore, AI models may be able to identify minute alterations in the eye that a human observer might miss, enabling early glaucoma identification and prompt treatment.

AI models may also make glaucoma screening and diagnosis more accessible, especially in underprivileged areas or those with little access to eye doctors [5].

To save the vision of individuals affected by glaucoma, the fundamental mechanism is to clinically reduce the level of IOP. Clinical monotherapy or combination therapy, and in some cases surgical intervention, may be required to reduce the IOP to normal values [6]. While conventional medical practices have been prevalent in the diagnosis and subsequent treatment of glaucoma, recent advancements in artificial intelligence and machine learning have opened up new avenues for the early and accurate detection of glaucoma. Automated computational tools can aid ophthalmologists in detecting glaucoma at relatively early stages so as to minimize the damage to the optic nerve and hence reduce visual impairment. Moreover, they would form the basis for a strong second opinion. This can be particularly useful in areas that lack advanced medical facilities, typically common in remote areas of low-income countries [7]-[8]. Figure 1 depicts the optic nerve for a glaucoma-negative and a glaucoma-positive eye along with the peripheral vision.

Fig. 1. Normal and Glaucoma Inflicted Eye

Several automated techniques have been developed and explored based on the statistical analysis of the fundus image. The basic approach is to pre-process the image to remove the effects of noise and disturbance, followed by feature extraction and classification using a machine-learning-based classifier [9]. The classifier is trained with images of two categories, viz. affected by glaucoma and unaffected by glaucoma [10]. The classifier tries to identify the patterns in the data and hence classify any new fundus image sample as glaucoma positive or negative [11].

The fundus images obtained from fundus imaging need to be processed and analysed to extract critical features for the decision on the presence or absence of glaucoma or pre-glaucoma-like symptoms. Due to the occurrence of noise and disturbance effects while capturing, retrieving, processing and storing the images, the final classification may be prone to errors [14]. This necessitates case-sensitive noise removal and image restoration techniques which can enhance the quality of the images of interest so as to facilitate feature extraction and pattern recognition. This paper presents a combined approach for image enhancement and feature extraction pertaining to fundus images which facilitates the automated detection of glaucoma [15].

II. DATA PRE-PROCESSING

With the availability of exhaustive digital data records in the medical field, coupled with the increasing processing power of computational algorithms, automated detection of glaucoma has gained prominence. For accurate classification of glaucoma images, it is fundamentally important to pre-process the images prior to actual classification. Each of the sub-processes employed for image enhancement and subsequent feature extraction is explained in this section.

RGB to Grayscale Conversion: Typically, acquired fundus images contain three colour channels: Red, Green and Blue (the RGB channels). Analyzing high-resolution RGB images requires much higher compute power compared to analyzing images with a single intensity variable, as in the case of grayscale images. The luminosity algorithm for RGB to grayscale conversion is based on the following relation [16]:

I_gray = 0.299 R + 0.587 G + 0.114 B    (1)

Here,

I_gray corresponds to the pixel value of the grayscale image.

R corresponds to the red component of the pixel.

G corresponds to the green component of the pixel.

B corresponds to the blue component of the pixel.

The benefit of the approach is the conversion of the image dataset from three channels to one channel, reducing the complexity of the approach.
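The conversion of relation (1) can be sketched in a few lines. The paper's implementation is in MATLAB; the NumPy version below is only an illustrative sketch using the standard luminosity weights 0.299/0.587/0.114:

```python
import numpy as np

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Luminosity-method RGB-to-grayscale conversion, relation (1).

    rgb: H x W x 3 array with channels ordered R, G, B.
    Returns an H x W single-channel intensity image.
    """
    weights = np.array([0.299, 0.587, 0.114])  # standard luminosity weights
    return rgb @ weights  # weighted sum over the channel axis

# A uniform white pixel maps to full intensity (0.299 + 0.587 + 0.114 = 1).
white = np.ones((1, 1, 3))
print(rgb_to_gray(white))  # -> [[1.]]
```

Since the weights sum to one, the grayscale output stays in the same dynamic range as the input channels.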

Illumination Correction: Illumination correction corresponds to removing the inherent illumination inconsistencies of the image arising from poor or inadequate lighting conditions, variations in the reflections from the target surface due to inherent heterogeneity, variations in the angle of capture, slight movements in the position and/or orientation of the source, variations in the wavelength (monochromatic nature) of the source, and inconsistencies in the characteristics of the sensing device [17]. In this approach, a Gaussian kernel function is used for illumination correction, as it is effective in normalizing the dynamic range of the image intensities, and is mathematically expressed as:

G(x, y) = K exp(-(x^2 + y^2) / s^2)    (2)

Here,

G(x, y) is the Gaussian kernel.

K represents the normalizing co-efficient.

s represents the scaling co-efficient of the kernel.

(x, y) represent the spatial co-ordinates.

The reflection co-efficient R(x, y) is estimated by convolving the input image with the Gaussian function in the periphery bounded by the contour C. The weight co-efficient is updated throughout the contour for the number of scales n = 1 : N. Further, a linear transform to adjust the objectively captured image and the corrected image is given by [18]:

I' = aI + b    (3)

Here,

I and I' correspond to the physically captured and illumination-corrected images respectively.

a and b are correction constants.

The next process is the computation of the two-dimensional spatial correlation given by:

C(x, y) = K SUM_x SUM_y I(x, y) I'(x, y)

Here,

C represents the correlation.

K denotes the normalizing co-efficient.

I denotes the original image and I' the illumination-corrected image.

The histogram normalization is computed based on the difference in the eigenvalues of the original and corrected images, given by [19]:

d = |e - e'|    (4)

The covariance of the image can be computed as:

s_c = E[(I - m)(I' - m')] / |d|    (5)

Here,

E denotes the average operation, and m, m' denote the means of the original and corrected images.
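A minimal numerical sketch of this idea, assuming the common flat-fielding variant (estimate the slowly varying illumination field by smoothing with the Gaussian kernel of relation (2), then rescale as in relation (3)); the kernel size and sigma below are illustrative choices, not values from the paper:

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Relation (2): G(x, y) = K exp(-(x^2 + y^2)/sigma^2),
    with K chosen so the kernel sums to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / sigma**2)
    return g / g.sum()

def correct_illumination(img: np.ndarray, size: int = 15, sigma: float = 5.0):
    """Estimate the slowly varying illumination field by convolving with the
    Gaussian kernel, then flatten the lighting by dividing it out."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    # Direct 2-D convolution (illustrative; a real pipeline would use FFTs).
    field = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            field[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    corrected = img / np.maximum(field, 1e-6)
    return corrected / corrected.max()  # rescale into [0, 1]
```

On an evenly lit image the estimated field matches the image and the output is uniform; on an unevenly lit image, dividing out the field compresses the illumination gradient.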

Segmentation: The subsequent process is to add the product of the weight matrix and the normalized covariance co-efficient to the originally corrected image, given by:

I_f = W(x, y) {s_c(x, y)} + I'(x, y)    (6)

Here,

I_f denotes the normalized (fused) image.

W denotes the correlation weights.

To identify the patch, the radial gradient of the fundus image is to be computed, given by [20]:

g_r(x, y) = |dI_f(x, y)/dr|    (7)

Here,

(x, y) denote the image pixels.

r denotes the radius.

g_r denotes the radial gradient.
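The radial gradient of relation (7) can be approximated numerically. The sketch below assumes the gradient is taken about the image centre (the paper does not state the reference point) and projects the Cartesian gradient onto the radial direction:

```python
import numpy as np

def radial_gradient(img: np.ndarray) -> np.ndarray:
    """Approximate |dI/dr| of relation (7): project the Cartesian
    gradient onto the radial direction from the image centre."""
    gy, gx = np.gradient(img.astype(float))   # d/dy, d/dx
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ry, rx = yy - cy, xx - cx
    r = np.hypot(ry, rx)
    r[r == 0] = 1.0                            # avoid division by zero at the centre
    return np.abs((gx * rx + gy * ry) / r)     # magnitude of the radial component
```

Pixels on the contour of the optic disc, where intensity changes sharply along the radius, produce large values, which is what makes this quantity usable for patch identification.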

Noise Removal: The next step is the removal of the inherent noise effects in the image, whose occurrence may have the following causes [21]:

1. Addition of electronic noise to the image due to the use of amplifiers in the sensing device, also termed white or Gaussian noise.

2. Abrupt changes or spikes in the analog-to-digital converters used in the circuitry of the fundus imaging device, causing salt-and-pepper noise patterns.

3. The multiplicative noise effect due to the inconsistent gain of the automatic gain control (AGC) circuitry used for capturing or retrieving the fundus image [22].

4. The lack of pixels while capturing the image, resulting in frequency mean-valued interpolations in the reconstructed image, causing Poisson noise.

The removal of noise effects is fundamentally important, as noisy images would result in erroneous feature extraction leading to inaccurate classification of the fundus images. One of the most effective hyperspectral image restoration techniques is based on the sub-band decomposition of images into low-pass and high-pass signal values using the wavelet transform. The wavelet transform, unlike conventional Fourier methods, uses non-linear and abruptly changing kernel functions which show efficacy in analysing abruptly fluctuating signals such as images. The continuous wavelet transform is computed as [23]:

W(a, b) = (1/sqrt(a)) INT x(t) y((t - b)/a) dt    (8)

Where,

a, b represent the scaling (dilation) and shifting (translation) constants, constrained to the condition a != 0.

y is the wavelet family or mother wavelet.

t is the time variable.

x(t) is the time-domain data.

For implementing the wavelet transform on the image dataset, the sampled version of the continuous wavelet transform yields the discrete wavelet transform, given by:

W(j, k) = SUM_n x(n) a0^(-j/2) y(a0^(-j) n - k b0)    (9)

Where,

x(n) is the discrete N x 1 data vector.

a0 is the discrete scaling constant.

b0 is the discrete shifting constant.

The discrete wavelet transform yields two distinct low-pass and high-pass sets of values, based on the number of levels of decomposition and the wavelet family, given by the approximate co-efficients (CA) and detailed co-efficients (CD). The approximate co-efficient values are typically the low-pass values containing the maximum information content of the image, while the detailed co-efficient values account for the noisy spectral part. Retaining the low-pass co-efficients and recursively discarding the high-pass co-efficients allows the image to be de-noised. The choice of the wavelet family impacts the estimation of the noise gradient vector as well.

Feature Extraction: After the pre-processing and enhancement of the image, the next step is the computation of statistical and texture-based features from the image dataset. Recent literature finds that a combination of both statistical and texture features is effective for classification problems. The features computed in this paper are [24]:

Mean: m = (1/N) SUM_i SUM_j P(i, j)    (10)

Standard deviation: s.d. = sqrt((1/N) SUM_i SUM_j (P(i, j) - m)^2)    (11)

Variance: v = (1/N) SUM_i SUM_j (P(i, j) - m)^2    (12)

Energy: E = SUM_i SUM_j p(i, j)^2    (13)

Kurtosis: K = E[((P - m)/s.d.)^4]    (14)

Skewness: S = E[((P - m)/s.d.)^3]    (15)

Contrast: C = SUM_i SUM_j |i - j|^2 p(i, j)    (16)

Entropy: H = - SUM_i SUM_j p(i, j) log{p(i, j)}    (17)

Correlation: c2 = SUM_i SUM_j (i - m_x)(j - m_y) p(i, j) / (s_x s_y)    (18)

Homogeneity: Hm = SUM_i SUM_j p(i, j) / (1 + |i - j|)    (19)

Root mean square: RMS = sqrt((1/N) SUM_i SUM_j P(i, j)^2)    (20)

Smoothness: R = 1 - 1/(1 + v)    (21)

Here,

P(i, j) denotes the pixel value.

p(i, j) denotes the probability of occurrence of pixel intensity i with respect to intensity j.

E(.) denotes the average of the statistical features.

s.d. denotes the standard deviation and v denotes the variance.

N denotes the number of pixels.

c2 denotes the 2-dimensional correlation.

N_x and N_y denote the number of pixels along x and y.

m_x and m_y denote the means along x and y.

s_x and s_y denote the standard deviations along x and y.
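The low-pass/high-pass split and the first statistical features can be sketched together. This is an illustrative NumPy reconstruction (the paper used MATLAB): a single-level Haar DWT is assumed here for concreteness, the detail sub-bands are discarded as described above, and only a subset of features (10)-(15) is computed:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT: returns the approximation (CA)
    and the three detail sub-bands (CD), the low/high-pass split above."""
    a = (img[::2] + img[1::2]) / 2.0          # low-pass along rows
    d = (img[::2] - img[1::2]) / 2.0          # high-pass along rows
    ca = (a[:, ::2] + a[:, 1::2]) / 2.0       # LL: approximation
    ch = (a[:, ::2] - a[:, 1::2]) / 2.0       # LH detail
    cv = (d[:, ::2] + d[:, 1::2]) / 2.0       # HL detail
    cd = (d[:, ::2] - d[:, 1::2]) / 2.0       # HH detail
    return ca, (ch, cv, cd)

def haar_idwt2(ca, details):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    ch, cv, cd = details
    a = np.empty((ca.shape[0], ca.shape[1] * 2))
    a[:, ::2], a[:, 1::2] = ca + ch, ca - ch  # undo column transform
    d = np.empty_like(a)
    d[:, ::2], d[:, 1::2] = cv + cd, cv - cd
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[::2], out[1::2] = a + d, a - d        # undo row transform
    return out

def denoise(img):
    """Discard the detailed (high-pass) co-efficients and reconstruct."""
    ca, (ch, cv, cd) = haar_dwt2(img)
    z = np.zeros_like(ca)
    return haar_idwt2(ca, (z, z, z))

def basic_features(img):
    """Features (10)-(12), (14), (15) on the de-noised image."""
    mu, sd = img.mean(), img.std()
    z = (img - mu) / (sd + 1e-12)
    return {"mean": mu, "std": sd, "variance": sd**2,
            "kurtosis": (z**4).mean(), "skewness": (z**3).mean()}
```

The forward/inverse pair reconstructs the image exactly when no coefficients are discarded; zeroing the detail bands removes the high-frequency (noisy) part at the cost of some edge sharpness.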

Classification: While designing a machine learning algorithm for automated glaucoma detection, the following constraints should be kept in mind:

1. Typically, medical image data occupies large memory (compared to text or numerical data), and for medical applications the memory and processing power of the equipment may be limited. Thus the algorithm should NOT possess high computational complexity.

2. The scaled conjugate gradient (SCG) algorithm is fast and relatively less computationally complex, and hence is well suited for large medical image data analysis.

The scaled conjugate gradient tries to find the steepest descent vector prior to the weight update in each iteration, and is mathematically given by [27]:

p_0 = -g_0    (22)

Here,

p_0 is the initial search vector for the steepest gradient search.

g is the actual gradient of the cost function with respect to the weights.

The weight update is:

w_(k+1) = w_k + a_k p_k    (23)

Here,

w_(k+1) is the weight of the next iteration and w_k is the weight of the present iteration.

a_k is the combination co-efficient.

For any iteration k, the search vector is given by [28]:

p_(k+1) = -g_(k+1) + b_k p_k    (24)

Here, b_k is the conjugate update co-efficient.

Feature extraction serves the purpose of extracting empirical statistical information from raw data. The feature values serve as the parameters based on which any automated tool would classify a new fundus image sample as a positive or negative case of glaucoma [29]. The veracity of the feature extraction process can be checked based on the correlation among the extracted features for a large dataset or a subset of the dataset [25]. While individual image samples may exhibit divergences, the magnitude of such divergences is generally bounded [30]. Hence, a correlation among the extracted features testifies to the correctness of the feature extraction process and its applicability for pattern recognition by any automated classifier. The image enhancement and feature extraction process can be understood using the sequence of steps described in the proposed algorithm [26].
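Relations (22)-(24) can be sketched numerically. The sketch below is NOT the paper's MATLAB implementation: it trains a toy logistic classifier with a conjugate-gradient-style update using an assumed Fletcher-Reeves coefficient, and omits the second-order step scaling of Moller's full SCG:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def train_cg(X, y, iters=200, lr=0.1):
    """Conjugate-gradient-style training per relations (22)-(24):
    p0 = -g0; w <- w + lr*p; p <- -g + beta*p (Fletcher-Reeves beta).
    Illustrative only: full SCG also scales each step with
    second-order (Hessian) information, omitted here."""
    w = np.zeros(X.shape[1])
    g = X.T @ (sigmoid(X @ w) - y) / len(y)       # cross-entropy gradient
    p = -g                                        # relation (22): initial search vector
    for _ in range(iters):
        w = w + lr * p                            # relation (23): weight update
        g_new = X.T @ (sigmoid(X @ w) - y) / len(y)
        beta = (g_new @ g_new) / (g @ g + 1e-12)  # Fletcher-Reeves coefficient
        if beta > 1.0:                            # restart on overshoot (common safeguard)
            beta = 0.0
        p = -g_new + beta * p                     # relation (24): next search vector
        g = g_new
    return w

# Toy separable data: positive class when feature 1 exceeds feature 2.
X = np.array([[2.0, 0.0, 1.0], [1.5, 0.2, 1.0], [0.1, 1.8, 1.0], [0.0, 2.2, 1.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = train_cg(X, y)
print(cross_entropy(w, X, y))  # loss after training, well below the initial ln 2
```

The search direction re-uses past gradient information, which is what gives conjugate-gradient methods their speed advantage over plain gradient descent on large problems.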

Proposed Algorithm:

Start.

Step 1: Load the image of interest.

Step 2: Employ RGB-to-grayscale conversion based on relation (1).

Step 3: Identify the patch to be inpainted based on the contour C enclosed by the radial gradient of relation (7).

Step 4: Compute the radial gradient magnitude over the contour.

Step 5: Replace the patch with the gradient-normalized values.

Step 6: For illumination correction, compute the convolution of the Gaussian kernel (2) and the original image bounded by the contour C.

Step 7: Compute the eigenvalue difference (4) and update the normalized 2-dimensional covariance matrix as per relation (5).

Step 8: Generate the fused image as per relation (6).

Step 9: Decide the decomposition levels and the family of the wavelet function.

Step 10: Compute the normalizing gradient of the de-noised image.

Step 11: Iterate for the scales n = 1 : N.

Step 12: Compute the normal and cumulative histograms of the original and de-noised images, along with the histogram metrics.

Step 13: Compute the feature distribution over the entire range of the image index.

Step 14: Train the SCG-based deep neural network, and truncate training on convergence.

Step 15: Test the network.

Step 16: Compute the classification accuracy.

Stop.

III. EXPERIMENTAL RESULTS

The system has been designed in MATLAB with the dataset taken from Kaggle [28].

Fig. 2. Original Image

    Fig. 3. Denoising using DWT

    Fig. 4. Denoising spectra for image

Fig. 7. Designed GUI for Proposed Work

Figure 2 presents the original fundus image under observation for the study. Figure 3 depicts the DWT decomposition at the 3rd level of decomposition. Figure 4 depicts the denoising spectra for the image. Figure 5 then depicts the correlation analysis for feature 1 for a sample of 20 images through a stacked bar graph; the range similarity indirectly indicates the correctness of the feature extraction process, and a similar logic can be extended to all the features of the dataset. Figure 6 depicts the confusion matrix for the testing phase, and Figure 7 depicts the simple GUI designed for the model. Calculating the accuracy of the proposed approach from the confusion matrix yields 98%, which clearly beats the accuracy of the previous work. A complete summary of the system parameters and the results obtained is tabulated in Table I.
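The accuracy figure follows from the confusion matrix as the ratio of correctly classified samples (the diagonal) to all samples. The matrix below is a hypothetical example chosen to yield 98%; the actual cell counts are those reported in Figure 6:

```python
import numpy as np

def accuracy_from_confusion(cm: np.ndarray) -> float:
    """Accuracy = correctly classified samples (diagonal) / all samples."""
    return np.trace(cm) / cm.sum()

# Hypothetical 2-class confusion matrix (rows: actual, cols: predicted).
cm = np.array([[49, 1],
               [1, 49]])
print(accuracy_from_confusion(cm))  # -> 0.98
```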

    Fig. 5. Correlation analysis for feature 1 for 20 sample images

Table I Summary of Results

S.No.  Parameter                                 Value
1.     Dataset                                   Kaggle
2.     Image Type                                .jpg
3.     Pre-Processing Dimensions                 256 x 256
4.     Denoising Algorithm                       DWT
5.     Classification Model                      Neural Networks
6.     Features                                  12
7.     Classification Accuracy                   98%
8.     Classification Accuracy (Previous Work)   97.2%

It can be clearly observed that the proposed approach, with rigorous image enhancement and segmentation followed by feature extraction and neural-network-based classification, attains higher accuracy compared to previously existing work [1].

    Fig. 6. Confusion Matrix for Proposed Work

IV. CONCLUSION

It can be concluded that ML models have considerable potential to help with early glaucoma identification and management. These models can quickly and accurately analyse medical images by utilizing machine learning and deep learning methods, which could result in earlier diagnoses and better patient outcomes. However, before AI models for glaucoma detection can be widely used in clinical practice, more investigation and improvement are required to solve issues with data availability, model dependability, and scalability. The research work presented here is an amalgamation of robust image enhancement, followed by feature extraction and final classification using a deep neural network model. The proposed work attains a classification accuracy of 98% for the tested fundus images, thereby beating the accuracy of previously existing baseline approaches [1]. Future directions of work may focus on transfer learning models to make the system more pervasive.

REFERENCES

  1. Nayak D, Das D, Majhi B, Bhandary S, ECNet: An evolutionary convolutional network for automated glaucoma detection using fundus images, Elsevier 2021, 67: 102559.

  2. A Adebayo, D Laroche, Unfulfilled Needs in the Detection, Diagnosis, Monitoring, Treatment, and Understanding of Glaucoma in Blacks Globally, Journal of Racial and Ethnic Health Disparities, Springer 2023, pp.1-6.

  3. WK Ju, GA Perkins, KY Kim, T Bastola, WY Choi, Glaucomatous optic neuropathy: Mitochondrial dynamics, dysfunction and protection in retinal ganglion cells, Progress in Retinal and Eye Research, Elsevier 2023, vol.95, 101136

  4. G Montesano, G Ometto, A King, Two-year visual field outcomes of the Treatment of Advanced Glaucoma Study (TAGS), American Journal of Ophthalmology, Elsevier 2023, vol.246., pp. 42-50.

  5. ACL Wu, BNK Choy, Psychological interventions to reduce intraocular pressure (IOP) in glaucoma patients: a review, Graefe's Archive for Clinical and Experimental Ophthalmology, Springer 2023, vol.261, pp. 1215-1227.

  6. H Jayaram, M Kolko, DS Friedman, G Gazzard, Glaucoma: now and beyond, The Lancet, 2023, vol.402., no.10414, pp. 1788-180.

  7. A Carlisle, A Azuara-Blanco, Psychological interventions to reduce intraocular pressure (IOP) in glaucoma patients: an editorial, Graefe's Archive for Clinical and Experimental Ophthalmology, Springer 2023, vol.261., Art. No. 1213.

  8. SK Panda, H Cheong, TA Tun, SK Devella, Describing the structural phenotype of the glaucomatous optic nerve head using artificial intelligence, American journal of Ophthalmology, Elsevier 2022, vol.236, pp.172-182.

  9. A Neto, J Camera, S Oliveira, A Cláudia, Optic disc and cup segmentations for glaucoma assessment using cup-to-disc ratio, Procedia Computer Science, Elsevier 2022, vol.196, pp. 485-492.

  10. B Liu, D Pan, Z Shuai, H Song, ECSD-Net: A joint optic disc and cup segmentation and glaucoma classification network based on unsupervised domain adaptation, Computer Methods and Programs in Biomedicine, Elsevier 2022, vol.213, pp. 106530.

  11. M. S. Kamal, N. Dey, L. Chowdhury, S. I. Hasan and K. Santosh, "Explainable AI for Glaucoma Prediction Analysis to Understand Risk Factors in Treatment Planning," in IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1-9, 2022, Art no. 2509209.

  12. R. Leonardo, J. Gonçalves, A. Carreiro, B. Simões, T. Oliveira and F. Soares, "Impact of Generative Modeling for Fundus Image Augmentation With Improved and Degraded Quality in the Classification of Glaucoma," in IEEE Access, vol. 10, pp. 111636-111649, 2022.

  13. Y Liu, LWL Yip, Y Zheng, L Wang, Glaucoma screening using an attention-guided stereo ensemble network, Methods, Elsevier 2022, vol.202., pp. 14-21.

  14. P. S. Nandhini, P. Srinath and P. Veeramanikandan, "Detection of Glaucoma using Convolutional Neural Network (CNN) with Super Resolution Generative Adversarial Network (SRGAN)," 2022 3rd International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 2022, pp. 1034-1040.

  15. FA Medeiros, AA Jammal, EB Mariottoni, Detection of progressive glaucomatous optic nerve damage on fundus photographs with deep learning, Ophthalmology, Elsevier 2021, vol.128, no.3., pp. 383-392.

  16. O. C. Devecioglu, J. Malik, T. Ince, S. Kiranyaz, E. Atalay and M. Gabbouj, "Real-Time Glaucoma Detection From Digital Fundus Images Using Self-ONNs," in IEEE Access, vol. 9, pp. 140031-140041, 2021.

  17. Zaman F, Gieser S, Schwartz G, Swan C, A multicenter, open-label study of netarsudil for the reduction of elevated intraocular pressure in patients with open-angle glaucoma or ocular hypertension in a real-world setting, Current Medical Research and Opinion, Taylor and Francis, 2021, 37(6): 1011-1020.

  18. Denis P, Duch S, Chen E, Klyve P, European real-world data about the use of a new delivery system containing a preservative-free multi-dose glaucoma treatment, European Journal of Ophthalmology, SAGE Publications 2021, 31(3): 1056-1063.

  19. J Wang, FL Struebing, EE Geisert, Commonalities of optic nerve injury and glaucoma-induced neurodegeneration: Insights from transcriptome-wide studies, Experimental Eye Research, Elsevier 2021, vol.207, 108571.

  20. J. Civit-Masot, M. J. Domínguez-Morales, S. Vicente-Díaz and A. Civit, "Dual Machine-Learning System to Aid Glaucoma Diagnosis Using Disc and Cup Feature Extraction," in IEEE Access, vol. 8, pp. 127519-127529, 2020.

  21. S. Borwankar, R. Sen and B. Kakani, "Improved Glaucoma Diagnosis Using Deep Learning," 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 2020, pp. 1-4.

  22. A. Saxena, A. Vyas, L. Parashar and U. Singh, "A Glaucoma Detection using Convolutional Neural Network," 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 2020, pp. 815-820.

  23. S Phene, RC Dunn, N Hammel, Y Liu, J Krause, Deep learning and glaucoma specialists: the relative importance of optic disc features to predict glaucoma referral in fundus photographs, Ophthalmology, Elsevier 2019, vol.126, no.12, pp. 1627-1639.

  24. Fu H., Cheng J., Xu Y., Liu J. Glaucoma Detection Based on Deep Learning Network in Fundus Image. In: Lu L., Wang X., Carneiro G., Yang L. (eds) Deep Learning and Convolutional Neural Networks for Medical Imaging and Clinical Informatics. Advances in Computer Vision and Pattern Recognition. Springer, 2019: 119- 137.

  25. Salam A, Khalil T., Akram, M. et al. Automated detection of glaucoma using structural and non structural features. Springer Plus, 2016, 5:1519.

  26. Ştefan A, Paraschiv E, Ovreiu S, Ovreiu E, A Review of Glaucoma Detection from Digital Fundus Images using Machine Learning Techniques, International Conference on e-Health and Bioengineering (EHB), 2020: 1-4.

  27. DMS Barros, JCC Moura, CR Freire, AC Taleb, Machine learning applied to retinal image processing for glaucoma detection: review and perspective, Biomedical Engineering Online, Springer 2020, vol.19., Article number: 20.

  28. Glaucoma Image Dataset, accessed from: https://www.kaggle.com/linchundan/fundusimage1000.