Identification and Categorization of Microaneurysms in Optic Images by Applying DTCWT and Log Gabor Characteristics

Sujitha Subhajini

Department of Computer Applications, Noorul Islam Centre for Higher Education, Kanyakumari 629180, India

Corresponding Author Email: sujithar.mca@gmail.com

Page: 725-730 | DOI: https://doi.org/10.18280/ria.360509

Received: 11 September 2022 | Revised: 5 October 2022 | Accepted: 12 October 2022 | Available online: 23 December 2022

© 2022 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Glaucoma is often called the "silent thief of sight." Apart from diabetic retinopathy, it is among the leading causes of blindness worldwide. Elevated pressure within the eye damages the optic nerve and, as a consequence, causes modest but irreversible vision loss. Glaucoma frequently remains hidden from its sufferers until its final stage because degenerated nerve fibres cannot be regenerated. In 2010, an estimated 60.5 million people over the age of 40 had glaucoma, and by 2020 this figure was expected to rise to 80 million. Recent advances in imaging have produced high-quality solutions for the detection and monitoring of glaucoma, and fundus imaging can be used effectively for glaucoma screening. The wavelet filters used in this research are Daubechies and Symlet3, which improve the accuracy and performance of glaucomatous image classification. A conventional 2-D Discrete Wavelet Transform (DWT) is applied with these filters to automatically extract and assess the features. The extracted features are fed into a machine-learning classifier, which distinguishes between normal and pathological fundus images.

Keywords: 

fundus, intraocular pressure, anterior chamber

1. Introduction

Glaucoma damages the optic nerve fibres of the eye and causes them to degenerate over time, as depicted in Figure 1. Ophthalmology is one of the most significant branches of healthcare; without it, many millions of individuals would not be able to lead normal lives. An individual should have their eyes examined annually, because only an ophthalmologist is qualified to identify and treat vision-related issues. Glaucoma is commonly linked to an increase in intraocular pressure. This increased pressure can damage the optic nerve, which transmits images to the brain. If the damage caused by excessive intraocular pressure continues, glaucoma will result in permanent vision loss. The anterior chamber is a small compartment at the front of the eye. A transparent fluid flows through this chamber, nourishing and bathing the neighbouring tissues. When an individual develops glaucoma, the fluid in the eye does not drain properly - it drains too slowly. As a result, fluid builds up within the eye, and the pressure inside it increases. Unless this pressure is reduced and controlled, the optic nerve and other ocular structures may be damaged, resulting in visual impairment. Glaucoma, if left untreated, can lead to permanent vision loss within a few years. Because the majority of people with elevated intraocular pressure have no initial symptoms or pain from the additional pressure, it is vital to consult an eye specialist regularly so that glaucoma can be diagnosed and treated before significant sight loss occurs [1].

Glaucoma is more likely to develop in people over the age of 40 who have a family history of the disease. A comprehensive eye examination is recommended every two years. Without treatment, vision deteriorates and may eventually progress to blindness. Both eyes are usually affected by the condition, although one eye often shows more severe clinical signs than the other.

Figure 1. (a) Normal optic nerve; (b) Glaucomatous optic nerve

This study aims to discriminate between glaucomatous fundus images and normal eye images. Important features for this categorisation are selected using wavelet filters, and a supervised classifier built on these wavelet features is developed to label retinal images as normal or abnormal. A Daubechies wavelet filter is adopted; it provides the intensity and amplitude statistics of the image, and texture features such as mean, energy, standard deviation, and variance are derived from its outputs. This filtering improves both the accuracy and the efficiency of the classification.

The remainder of this article is organized as follows: Section 2 reviews the related work, Section 3 describes the proposed MA detection approach, Section 4 presents the results and discussion, and Section 5 gives the conclusion and recommendations for future work.

2. Related Work

A thorough review of prior work is perhaps the most essential step in designing an automated system, and the proposed methodology and results are therefore preceded by this assessment. Its main goal is to keep the reader informed of existing techniques and trends. A well-organized review is characterized by a logical flow of concepts, relevant sources with proper citation, acceptable terminology, and a balanced, broad view of previous research on the identification and categorization of microaneurysms in human fundus images. One recent approach for detecting glaucoma in fundus images consists of three phases: ROI extraction, segmentation, and classification. Image processing techniques such as image enhancement, morphological operations, and thresholding are commonly used for the automated identification of the optic disc and vessels and for computing their characteristics. The Cup-to-Disc Ratio (CDR) and the ratio of the cup area were extracted as features. The CDR feature is computed using the K-means clustering technique and is then used by a classifier to categorise healthy and glaucomatous photographs [2].

A combination of texture and higher-order spectra (HOS) features was used in study [3] for glaucoma identification. The findings indicate that, when combined with a suitable classification algorithm, texture and HOS features after z-score normalization and feature selection outperform other configurations in discriminating glaucomatous photographs. The impact of feature normalization and selection is also emphasized. These features are statistically significant and can be used to diagnose the disease accurately. Energy distributed over wavelet subbands is a commonly used feature for wavelet-packet-based pattern classification. Since the discrete wavelet decomposition is overcomplete, feature selection is frequently used to improve classification accuracy and obtain a compact representation. Most wavelet feature extraction procedures evaluate each subband separately, which implicitly assumes that features from different subbands are independent. The dependence among features from multiple subbands has been investigated and demonstrated for given images, and a wavelet feature selection scheme was proposed on the basis of this modelling; it is further improved by incorporating the relationship between wavelet subbands into the evaluation of each individual wavelet feature. The outcomes demonstrate the effectiveness of these techniques for wavelet feature selection. The use of wavelet decomposition for characterising image texture at a variety of scales has also been explored through a segmentation algorithm based on block-by-block evaluation of wavelet co-occurrence features [4].

The segmentation steps include decomposition of sub-image blocks, feature extraction, comparison of features between consecutive blocks, band segmentation, region growing, and refinement. This method performs better than traditional single-feature approaches such as texture spectrum and co-occurrence based methods, and the reported results are satisfactory. Filtering and thresholding methods are used to reduce noise, which further improves performance. Fractal analysis (FA) forms the basis of a framework for multiclass glaucoma detection and progression prediction. FA is applied to translated 1-D optic nerve head profiles from healthy eyes as well as from subjects with progressive and non-progressive glaucoma [5-7]. Wavelet-packet and multifractional Brownian motion methods that incorporate texture and multiscale analysis are used to derive the FA features, and these features are classified with a Gaussian kernel classifier. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) are reported and compared with results obtained using wavelet-Fourier analysis (WFA) and fast-Fourier analysis (FFA).

For glaucoma assessment, the optic disc and optic cup are segmented using superpixel classification. Histogram and centre-surround statistics are employed in optic disc segmentation to classify each superpixel as disc or non-disc. In addition to the histogram and centre-surround statistics used for optic cup segmentation, location information is included to improve performance [8]. The proposed algorithms were evaluated on a collection of 650 photographs with OD and OC boundaries marked by trained professionals. The CDR is then computed from the segmented OD and OC for glaucoma assessment. On two separate datasets, the method achieves AUC values of 0.800 and 0.822, which is higher than other approaches [9-11]. These methods can be used to screen for glaucoma. Efficient qualitative imaging approaches support the diagnosis and management of the disease: numerous detection methods and their refinements, including advanced retinal imaging and tomography, are popular means of analysing anatomical and functional abnormalities of the retina, both to detect deviation and to measure the disease's evolution objectively [12, 13]. Computerized clinical decision support systems (CDSSs) for glaucoma, including CASNET/Glaucoma, are designed to provide efficient prediction models for identifying disease pathophysiology in patients' eyes [14].

Many computer vision tasks, including surface inspection, scene analysis, and shape recognition, rely heavily on texture. Texture represents the spatial distribution of grey levels in a neighbourhood [15-17]. Feature extraction requires determining the appropriate attributes that distinguish the textures within an image for segmentation, classification, and recognition. Within regions of similar texture the features are assumed to be consistent [18]; textural features are therefore not tied to individual pixel intensities. Several feature separation approaches, such as spectral methods, are available for exploiting segmented images. The use of texture features and HOS for glaucoma classification was suggested in study [19].

Although texture-based solutions have proven successful, designing algorithms that capture complementary and topographical characteristics of the retinal structures remains a challenge [20]. Texture features based on Wavelet Transforms (WTs) are therefore often used in image recognition to improve feature generalisation.

2.1 Datasets

The dataset was established by a research team to support comparative experiments on segmentation and classification techniques for fundus images. The available collection, illustrated in Figure 2, includes 15 images of normal fundus, 15 images of diabetic retinopathy, and 15 images of glaucoma. Binary gold-standard vessel segmentation images are available for each image, together with masks defining the field of view (FOV) [21].

Figure 2. Representative fundus images: normal and glaucomatous

3. Proposed Methodology

This research seeks to distinguish glaucomatous fundus images from normal eye images. Wavelet filters are used to select important features for this categorization, and a supervised classifier built on the wavelet features categorises retinal images as normal or abnormal. A Daubechies wavelet filter is adopted; it provides the intensity and amplitude statistics of the image, and texture features such as mean, energy, standard deviation, and variance are derived from its outputs, improving both the accuracy and the efficiency of the classification. Conventional 2-D discrete wavelet transforms (DWTs) are applied with this filter to extract and evaluate the features, and the resulting characteristics are fed to an automated classifier based on a neural network.

The integrated processing framework of the proposed approach is shown in Figure 3. It comprises a training phase and a testing phase. The images used for training are first preprocessed; this includes resizing and selecting the green channel. Features such as mean, energy, standard deviation, and variance are then extracted from the preprocessed image pixels, and a Neural Network classifier uses these features to build a classification model, as outlined in the sketch below. Each image in the testing set is then evaluated to determine whether it is normal or abnormal.
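For concreteness, the sketch below outlines this pipeline under the assumption that Python with NumPy, PyWavelets and scikit-learn is used; the function name extract_features, the image size, and the synthetic training data are illustrative stand-ins rather than details taken from the paper.

import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def extract_features(rgb_image):
    """Green-channel DWT features: mean, energy, standard deviation and variance per detail subband."""
    green = rgb_image[:, :, 1].astype(float)            # green channel offers the best contrast
    _, (dh, dv, dd) = pywt.dwt2(green, 'db4')           # one-level 2-D DWT with the Daubechies-4 filter
    feats = []
    for sub in (dh, dv, dd):
        p, q = sub.shape
        feats += [np.mean(np.abs(sub)),                 # mean, Eqs. (1)-(3)
                  np.sum(sub ** 2) / (p ** 2 * q ** 2), # energy, Eqs. (4)-(6)
                  np.std(sub, ddof=1),                  # sample standard deviation, Eq. (7)
                  np.var(sub, ddof=1)]                  # variance
    return np.array(feats)

# Synthetic images stand in for the training set (0 = normal, 1 = glaucoma).
rng = np.random.default_rng(0)
images = rng.random((10, 128, 128, 3))
labels = np.array([0, 1] * 5)
X = np.array([extract_features(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, labels)
print(clf.predict(X[:2]))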

3.1 Preprocessing

Preprocessing aims to improve the content of images so that it is better suited to human viewers and to subsequent analysis. An enhancement method generates an improved image for a particular purpose, either by reducing noise or by boosting image quality. For the analysis presented here, preprocessing comprises highlighting, sharpening, or smoothing image features. Enhancement techniques are frequently developed empirically, and they emphasise particular image elements to improve how an image is perceived visually.
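A minimal sketch of these preprocessing actions (resizing and green-channel selection), assuming OpenCV is available; the 256x256 target size is an illustrative choice, not a value given in the paper.

import cv2

def preprocess(path, size=(256, 256)):
    """Load a fundus image, keep the green channel, and resize it."""
    bgr = cv2.imread(path)                                   # OpenCV reads images in BGR order
    green = bgr[:, :, 1]                                     # the green channel shows vessels and disc most clearly
    return cv2.resize(green, size, interpolation=cv2.INTER_AREA)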

3.2 Wavelet-based image decomposition

The DWT localizes both the spatial and the spectral characteristics of a signal. In the DWT, the image is decomposed into detail information and a coarse approximation by means of high-pass and low-pass filters, respectively.

This decomposition is applied iteratively to the low-pass approximation coefficients obtained at each stage, until the desired decomposition level is reached. The image is treated as a p x q greyscale matrix I[i, j], where each element corresponds to the brightness of a single pixel. Every non-border pixel has eight neighbouring pixel intensities, and these eight neighbours can be used to traverse the matrix.

Whether the matrix is traversed left-to-right or right-to-left makes little difference; the resulting 2-D DWT coefficients are essentially the same. In what follows, the detail subbands are associated with the 0° (Dh), 90° (Dv), and 45°/135° diagonal (Dd) orientations. The Daubechies (db4) wavelet filter is employed in this study; it reconstructs the original photograph after decomposition without loss of image quality. Features such as mean (average), energy, sample standard deviation, and variance are used in the proposed model.
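As an illustration of this one-level decomposition and its iterative refinement, a short sketch using the PyWavelets library (an assumption; the paper does not name its implementation):

import numpy as np
import pywt

green = np.random.rand(256, 256)                  # stand-in for a preprocessed green channel
cA1, (Dh1, Dv1, Dd1) = pywt.dwt2(green, 'db4')    # approximation + horizontal/vertical/diagonal details

# The next level reuses the approximation coefficients, as described above.
cA2, (Dh2, Dv2, Dd2) = pywt.dwt2(cA1, 'db4')
print(green.shape, Dh1.shape, Dh2.shape)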

3.3 Feature extraction

Feature extraction is used to pick out the key elements of an image and can be viewed as a form of dimensionality reduction. When the raw data are too large to be processed directly, they are transformed into a compact set of features. The extracted features are chosen so that they capture the information needed to complete the required task. This strategy works well when the images are large and an optimised feature set is needed to perform the task quickly.

Analysis involving a large number of variables typically demands substantial storage and computation, and classifiers trained on such data tend to overfit the training set and generalise poorly to unseen instances. The best results are obtained when domain expertise guides the choice of features.

3.4 Mean (average)

The mean feature helps to determine the central tendency of the data. It is obtained by summing all the pixel values and dividing by the number of pixels. It is computed from the decomposed image along the horizontal, vertical, and diagonal directions as given in Eqs. (1) to (3).

Average $Dh1=\frac{1}{p \times q} \sum_{x=1}^{p} \sum_{y=1}^{q}|Dh1(x, y)|$      (1)

Average $Dv1=\frac{1}{p \times q} \sum_{x=1}^{p} \sum_{y=1}^{q}|Dv1(x, y)|$      (2)

Average $Dd1=\frac{1}{p \times q} \sum_{x=1}^{p} \sum_{y=1}^{q}|Dd1(x, y)|$      (3)
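A direct NumPy translation of Eqs. (1) to (3), assuming Dh1, Dv1 and Dd1 are the detail subbands obtained above:

import numpy as np

def subband_mean(sub):
    """Mean of the absolute coefficient values, Eqs. (1)-(3)."""
    p, q = sub.shape
    return np.sum(np.abs(sub)) / (p * q)   # equivalent to np.mean(np.abs(sub))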

3.5 Energy

The energy is computed according to Eqs. (4) to (6) and is used to determine which channel (red, green, or blue) carries the greatest proportion of energy in the original colour image.

Energy $Dh1=\frac{1}{p^2 \times q^2} \sum_{x=1}^{p} \sum_{y=1}^{q}(Dh1(x, y))^2$      (4)

Energy $Dv1=\frac{1}{p^2 \times q^2} \sum_{x=1}^{p} \sum_{y=1}^{q}(Dv1(x, y))^2$      (5)

Energy $Dd1=\frac{1}{p^2 \times q^2} \sum_{x=1}^{p} \sum_{y=1}^{q}(Dd1(x, y))^2$      (6)
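The corresponding NumPy form of Eqs. (4) to (6), applied to one subband at a time:

import numpy as np

def subband_energy(sub):
    """Normalized energy of a detail subband, Eqs. (4)-(6)."""
    p, q = sub.shape
    return np.sum(sub ** 2) / (p ** 2 * q ** 2)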

3.6 Standard deviation

The spread of the data about the mean is described by the standard deviation (SD), which is given in Eq. (7). If the SD is small, the data points lie close to the mean; a large SD indicates that the values cover a wide range.

$s_x=\sqrt{\frac{\sum_{i=1}^n\left(x_i-\bar{x}\right)^2}{n-1}}$      (7)

where $n$ is the number of data points, $\bar{x}$ is the mean of the $x_i$, and $x_i$ are the individual data values.
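Eq. (7) is the sample standard deviation, which NumPy provides directly when the (n - 1) denominator is requested:

import numpy as np

def sample_std(values):
    """Sample standard deviation, Eq. (7)."""
    return np.std(values, ddof=1)   # ddof=1 selects the (n - 1) denominator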

3.7 Variance

The variance quantifies how widely spread a set of data is. It is always non-negative. A small variance indicates that the values lie close to the mean (expected value) and hence to one another, whereas a large variance indicates that the values lie far from the mean and from one another. The standard deviation is the square root of the variance; equivalently, the square of the standard deviation gives the variance, which estimates the dispersion of pixel values around the image mean.
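With the same convention as above, the variance is simply the square of the sample standard deviation:

import numpy as np

def sample_variance(values):
    """Variance of the coefficients (square of the sample standard deviation)."""
    return np.var(values, ddof=1)   # equals sample_std(values) ** 2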

3.7.1 Categorization of images

To distinguish between different objects in an image, the extracted data are analysed and assigned to matching categories; each category describes a class. Classification is based on spectrally defined characteristics such as intensity, density, and texture, and, given a decision rule, it partitions the feature space into distinct classes. The classification in this research follows the workflow described above. A neural network is made up of simple elements operating in parallel, inspired by biological nervous systems. As in nature, the behaviour of the network is largely determined by the connections between its elements. A neural network can be trained to perform a particular function by adjusting the values of the connections (weights) between elements. Neural networks are typically adjusted, or trained, so that a particular input leads to a specific desired output.

The network is adjusted on the basis of a comparison between its output and the target until the output matches the target. Many such input/target pairs are typically needed to train the network.
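A minimal sketch of such supervised training, using scikit-learn's feed-forward MLPClassifier as a stand-in for the feed-forward back-propagation network described here; the feature matrix X and label vector y are assumed to come from the extraction steps above (synthetic values are used below so the snippet runs on its own).

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.random((30, 12))                  # one row of wavelet features per image
y = rng.integers(0, 2, size=30)           # 0 = normal, 1 = glaucoma

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)                       # weights adjusted until outputs match the targets
print("test accuracy:", net.score(X_te, y_te))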

4. Results and Analysis

Tables 1 to 6 present the features extracted from the normal and glaucomatous input images along the horizontal, vertical, and diagonal directions. The mean values are computed for the red, green, and blue channels, while energy, standard deviation, and variance are computed for the green channel.

Table 1. Glaucoma image: horizontal features

Red      Green    Blue     Energy   Standard Deviation   Variance
0.4812   0.8771   1.0113   7.568    1.896                1.024
0.4318   0.5440   0.6198   1.316    0.251                0.284
0.3098   0.5161   0.5870   1.083    0.202                0.142
0.4992   0.4715   0.5540   7.714    0.197                0.163
0.4432   0.4504   0.5261   6.074    0.147                0.039

Table 2. Healthy image: horizontal features

Red      Green    Blue     Energy   Standard Deviation   Variance
0.6136   0.6112   0.7181   1.693    0.1974               0.1681
0.6679   0.7609   0.8839   1.885    4.2578               0.9380
0.6645   0.6209   0.7246   1.814    0.2396               0.3420
0.6734   0.8035   0.9290   4.078    6.2043               0.4072
0.6251   0.6057   0.7140   1.657    0.2160               0.2110

Table 3. Healthy image: vertical features

Red      Green    Blue     Energy   Standard Deviation   Variance
0.4508   0.4456   0.5132   5.335    0.255                0.115
0.5461   0.5050   0.5731   9.170    0.323                0.436
0.4949   0.4594   0.5318   5.995    0.259                0.123
0.5882   0.7387   0.8566   5.458    0.532                0.148
0.4624   0.4403   0.5049   4.967    0.242                0.085

Table 4. Glaucoma image: vertical features

Red      Green    Blue     Energy   Standard Deviation   Variance
0.431    0.723    0.833    1.481    4.854                1.266
0.454    0.564    0.616    2.376    0.815                2.655
0.323    0.528    0.596    1.144    0.378                0.823
0.380    0.512    0.577    9.683    0.334                0.455
0.400    0.455    0.532    5.554    0.247                0.100

Table 5. Healthy image: diagonal features

Red      Green    Blue     Energy   Standard Deviation   Variance
0.228    0.234    0.256    3.838    0.045                2.688
0.254    0.242    0.261    4.443    0.049                3.626
0.259    0.247    0.270    4.806    0.053                4.796
0.239    0.229    0.246    3.559    0.042                2.372
0.235    0.233    0.254    3.711    0.043                2.508

Table 6. Glaucoma image: diagonal features

Red      Green    Blue     Energy   Standard Deviation   Variance
0.154    0.235    0.253    4.779    0.067                0.001
0.187    0.234    0.248    3.990    0.048                3.064
0.132    0.223    0.241    3.138    0.038                1.542
0.172    0.237    0.256    4.009    0.043                2.312
0.184    0.221    0.242    3.258    0.043                2.558

Table 7. Performance analysis

Class      No. of training images   No. of testing images   No. correctly classified   Classification accuracy (%)
Normal     05                       12                      12                         100
Glaucoma   05                       12                      11                         93.9
Average accuracy                                                                       95.8

Figure 3. GUI showing validation by the NN classifier for a normal image

Figure 4. GUI showing validation by the NN classifier for an abnormal image

For training and testing, all features along the horizontal, vertical, and diagonal directions of each image are extracted and supplied to the FFBNN classifier. Figures 3 and 4 show the neural network training targets and the classification outputs for normal and pathological images. According to Table 7, which presents the results of the proposed approach on the HRF database, the classification of fundus images achieved a sensitivity of 100%, a precision of 93.9%, and an overall accuracy of 95.83%. The effectiveness of the classification is compared with [16]. In comparison with [17], which achieved an average accuracy of 91.66%, and [15], which achieved an overall rate of 95%, the proposed technique obtains an overall accuracy of 96.95%. On the HRF database, the proposed classifier offers better sensitivity and greater consistency.
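As a check on the arithmetic behind Table 7, the overall accuracy follows directly from the classification counts:

correct = 12 + 11                         # correctly classified normal + glaucoma test images
tested = 12 + 12                          # total test images
overall_accuracy = 100.0 * correct / tested
print(round(overall_accuracy, 2))         # 95.83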

5. Conclusion

This work applies the proposed strategy to classify fundus images as normal or abnormal, achieving an accuracy of 95.83%. The research also demonstrates the effectiveness of wavelet-based feature approaches for identifying glaucoma and predicting its development. According to the accuracy results, the mean, energy, standard deviation, and variance derived from the wavelet coefficients can be used to discriminate between normal and abnormal images with a high degree of accuracy. In future work, larger datasets will be used for analysis, and the method may be implemented using artificial intelligence techniques such as the blue brain approach.

References

[1] Sisodia, D.S., Nair, S., Khobragade, P. (2017). Diabetic retinal fundus images: Preprocessing and feature extraction for early detection of diabetic retinopathy. Biomedical and Pharmacology Journal, 10(2): 615-626. https://dx.doi.org/10.13005/bpj/1148

[2] Kiran, S.M., Chandrappa, D.N. (2021). Plant disease identification using discrete wavelet transforms and SVM. Journal of University of Shanghai for Science and Technology, 23(6): 108-114.

[3] Salih, T.A. (2020). Deep learning convolution neural network to detect and classify tomato plant leaf diseases. Open Access Library Journal, 7(5): 100100. https://dx.doi.org/10.4236/oalib.1106296

[4] Mia, M., Roy, S., Das, S.K., Rahman, M. (2020). Mango leaf disease recognition using neural network and support vector machine. Iran Journal of Computer Science, 3(3): 185-193. https://doi.org/10.1007/s42044-020-00057-z

[5] Xiao, Z., Zhang, X., Geng, L., Zhang, F., Wu, J., Tong, J., Shan, C. (2017). Automatic non-proliferative diabetic retinopathy screening system based on color fundus image. Biomedical Engineering Online, 16(1): 1-19. https://doi.org/10.1186/s12938-017-0414-z

[6] Mayya, V., Kamath, S., Kulkarni, U. (2021). Automated microaneurysms detection for early diagnosis of diabetic retinopathy: A Comprehensive review. Computer Methods and Programs in Biomedicine Update, 1: 100013. https://doi.org/10.1016/j.cmpbup.2021.100013

[7] Raghu, R., Jayaraman, V., Jayaraman, J., Nukala, S.S.V., Díaz, V.G. (2022). A multi-layered edge-secured cloud framework for healthcare monitoring in old-age homes using smart systems driven by comprehensive user interaction. International Journal of Safety and Security Engineering, 12(4): 449-457. https://doi.org/10.18280/ijsse.120405

[8] Eftekhari, N., Pourreza, H.R., Masoudi, M., Ghiasi-Shirazi, K., Saeedi, E. (2019). Microaneurysm detection in fundus images using a two-step convolutional neural network. Biomedical Engineering Online, 18(1): 1-16. https://doi.org/10.1186/s12938-019-0675-9

[9] Xia, H., Lan, Y., Song, S., Li, H. (2021). A multi-scale segmentation-to-classification network for tiny microaneurysm detection in fundus images. Knowledge-Based Systems, 226: 107140. https://doi.org/10.1016/j.knosys.2021.107140

[10] Deepa, V., Sathish Kumar, C., Susan Andrews, S. (2019). Automated detection of microaneurysms using Stockwell transform and statistical features. IET Image Processing, 13(8): 1341-1348. https://doi.org/10.1049/iet-ipr.2018.5672

[11] Joshi, S., Karule, P.T. (2020). Mathematical morphology for microaneurysm detection in fundus images. European Journal of Ophthalmology, 30(5): 1135-1142. https://doi.org/10.1177/11206721198430

[12] Rajesh, D., Rajanna, G.S. (2022). CSCRT protocol with energy efficient secured CH clustering for smart dust network using quantum key distribution. International Journal of Safety and Security Engineering, 12(4): 441-448. https://doi.org/10.18280/ijsse.120404

[13] Du, J., Zou, B., Chen, C., Xu, Z., Liu, Q. (2020). Automatic microaneurysm detection in fundus image based on local cross-section transformation and multi-feature fusion. Computer Methods and Programs in Biomedicine, 196: 105687. https://doi.org/10.1016/j.cmpb.2020.105687

[14] Li, Y.H., Yeh, N.N., Chen, S.J., Chung, Y.C. (2019). Computer-assisted diagnosis for diabetic retinopathy based on fundus images using deep convolutional neural network. Mobile Information Systems, 2019: 6142839. https://doi.org/10.1155/2019/6142839

[15] Melo, T., Mendonça, A.M., Campilho, A. (2020). Microaneurysm detection in color eye fundus images for diabetic retinopathy screening. Computers in Biology and Medicine, 126: 103995. https://doi.org/10.1016/j.compbiomed.2020.103995

[16] Liao, Y., Xia, H., Song, S., Li, H. (2021). Microaneurysm detection in fundus images based on a novel end-to-end convolutional neural network. Biocybernetics and Biomedical Engineering, 41(2): 589-604. https://doi.org/10.1016/j.bbe.2021.04.005

[17] Derwin, D.J., Selvi, S.T., Singh, O.J., Shan, B.P. (2020). A novel automated system of discriminating Microaneurysms in fundus images. Biomedical Signal Processing and Control, 58: 101839. https://doi.org/10.1016/j.bspc.2019.101839

[18] Annu, N., Justin, J. (2013). Automated classification of glaucoma images by wavelet energy features. International Journal of Engineering and Technology, 5(2): 1716-1721. 

[19] Kim, P.Y., Iftekharuddin, K.M., Davey, P.G., Tóth, M., Garas, A., Holló, G., Essock, E.A. (2013). Novel fractal feature-based multiclass glaucoma detection and progression prediction. IEEE Journal of Biomedical and Health Informatics, 17(2): 269-276. https://doi.org/10.1109/TITB.2012.2218661

[20] Cheng, J., Liu, J., Xu, Y., Yin, F., Wong, D.W.K., Tan, N.M., Wong, T.Y. (2013). Superpixel classification based optic disc and optic cup segmentation for glaucoma screening. IEEE Transactions on Medical Imaging, 32(6): 1019-1032. https://doi.org/10.1109/TMI.2013.2247770

[21] Dua, S., Acharya, U.R., Chowriappa, P., Sree, S.V. (2011). Wavelet-based energy features for glaucomatous image classification. IEEE Transactions on Information Technology in Biomedicine, 16(1): 80-87. https://doi.org/10.1109/TITB.2011.2176540