CFLHCF: Simultaneous Detection of the Optic Disc and Exudates Using Color Features, Local Homogeneity and Contextual Features

Kittipol Wisaeng 

Technology and Business Information System Unit, Mahasarakham Business School, Mahasarakham University, Mahasarakham 44150, Thailand

Corresponding Author Email: Kittipol.w@acc.msu.ac.th

Page: 1557-1566 | DOI: https://doi.org/10.18280/ts.390512

Received: 31 May 2022 | Revised: 11 September 2022 | Accepted: 20 September 2022 | Available online: 30 November 2022

© 2022 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Detecting the Optic Disc (OD) and Exudates (EXs) in fundus images has long been challenging and demanding for computer-aided diagnosis systems. Existing algorithms for detecting the OD and EXs are mainly based on traditional learning methods that rely heavily on enhanced OD and EXs features. Unlike traditional learning methods, a novel simultaneous detection of the OD and EXs is presented. In the proposed Color Features, Local Homogeneity and Contextual Features (CFLHCF) method, the original input fundus image is preprocessed by color normalization, contrast enhancement, noise removal, and OD localization, so that the EXs are differentiated from the background. The preprocessed images are then given as input to Mathematical Morphology Binary Segmentation (MMBS) with a Sobel Edge Detection (SED) technique, which detects the EXs in the given fundus images. MMBS with SED is implemented to build a highly accurate segmentation model even for small EXs regions. The DiaretDB0, DiaretDB1, and STARE datasets are used to validate the proposed method. On the DiaretDB0 dataset, the technique achieved an average sensitivity of 98.44% for EXs and a specificity of 98.72% for non‒EXs, and can classify EXs even when the regions are trivially small. With respect to sensitivity and specificity, this method outperformed previous state‒of‒the‒art methods by roughly 1.24% and 1.09% in the detection of EXs. Additionally, the method delivers an EXs diagnosis in about 7 seconds per image.

Keywords: 

fundus images, optic disc, exudates detection, local homogeneity, contextual features

1. Introduction

One of the observable diseases of the eyes is diabetic retinopathy (DR). Retinal monitoring is the foremost clinical intervention to prevent menacing eye disease progression that may cause permanent blindness [1]. The World Health Organization (WHO) predicts that there will be 440 million diabetic individuals worldwide by 2030 [2]. EXs are an early clinical sign of DR and are detectable by diagnosing the lesions in the eye, so methods for determining EXs are crucial for identifying DR in its early stages [3]. In clinical practice, EXs are the earliest clinically detectable lesions and usually the basis for diagnosing DR. Diagnosing DR at the preclinical stage is therefore essential, similar to detecting blood sugar levels in pre-diabetics. Automatic detection and subsequent classification of EXs are challenging because the images are complex: fundus images are often unevenly illuminated and poorly contrasted. Examples of these problems in fundus EXs images are shown in Figure 1. In some cases, EXs regions lie close to each other, and the contrast between EXs and non‒EXs is sometimes extremely low. There is prior research on detecting EXs from fundus images, most of which starts by improving image quality and extracting features for the EXs segmentation stage. However, EXs detection performance tends to decrease dramatically when incorrect segmentation occurs; typical errors include missing small numbers of EXs pixels and disconnected segmented boundaries of EXs regions. To resolve these issues, state‒of‒the‒art EXs detection methods have been developed. This paper focuses on developing an application for EXs detection in sensor-acquired fundus images. About 100 related articles published up to 2021 in the IEEE Xplore Digital Library, ScienceDirect, SpringerLink, and PubMed databases have been reviewed, and numerous investigations have proposed innovative methods. A novel approach to detecting EXs comprising fuzzy image processing techniques and the Circular Hough Transform was presented by Rahim et al. [4], which yielded successful results in the classification stage of fundus imaging; however, its accuracy rate was relatively low. Similarly, Paing et al. [5] studied a method for segmenting EXs on low‒quality fundus images using an adaptive artificial neural network (ANN), which required little computation time when classifying EXs pixels from non‒EXs pixels. Next, Omar et al. [6] examined a combination of texture features and the ANN method for detecting EXs and lesions in fundus images. In the first stage, EXs pixels were separated from non‒EXs pixels using different local binary pattern variants, and their shape was also measured; before the EXs were extracted, regions affected by non‒EXs had been removed to obtain a better segmentation result from the ANN classifier. Then, Tennakoon et al. [7] developed a new algorithm for segmenting EXs with a convolutional neural network (CNN), and Gondal et al. [8] applied a CNN to classify and detect EXs. Inspired by deep learning, Kwasigroch et al. [9] devised a method for automatic segmentation and detection of EXs from fundus images, which involved segmenting EXs contours and applying a deep CNN to classify the EXs. Additionally, Kaur and Mittal [10] segmented EXs from acquired fundus images using a dynamic decision thresholding method.
In addition, Kwasigroch et al. [11] proposed EXs detection in DR using a deep learning algorithm. Moreover, Seth and Agarwal [12] introduced an algorithm that classifies EXs using a CNN with a Linear Support Vector Machine (LSVM): feature extraction was based on CNN candidate segmentation with clear borders, and the classification used to detect EXs was based on the LSVM. Additionally, Lam et al. [13] implemented a CNN for EXs detection on low‒quality images, computing the location and discriminating several types of findings in fundus images using a limited number of training samples. Another automatic EXs segmentation application was developed by Zhao et al. [14], involving the automatic segmentation of EXs in a supervised learning pipeline. Chowdhury et al. [15] analyzed segmentation methods to detect EXs in a DR diagnosis application: a morphological opening operation was proposed to eliminate the OD and blood vessel trees in the color fundus image, retinal abnormalities were segmented using a random forest classifier, and the selected sample EXs were then classified using the K‒means clustering technique. Additionally, Khojasteh et al. [16] presented a CNN for detecting EXs in color fundus images. Intensity variation and inverse surface adaptive thresholding approaches were also used by Karkuxhali and Manimegalai [17] to identify EXs pixels in color fundus images. In 2020, Wang et al. [18] examined a CNN, multi-feature joint representation, and the mathematical morphology method; the results yielded an SEN of 94.77% on the HEI‒MED dataset but a lower SEN of 89.90% on the e‒Optha dataset. Lastly, Auccahuasi et al. [19] developed a deep learning method to distinguish between EXs and non‒EXs regions.

Figure 1. Examples of fundus images; (a)‒(b) retinas showing signs of EXs (yellow dots within the black-line masks correspond to EXs pixels in the original fundus images)

Building on these prior successes, in this work we propose a novel CFLHCF combined with MMBS‒SED for simultaneous detection of the OD and of EXs features in fundus images. We formulate this as a feature analysis problem with two tasks: one for OD localization and the other for EXs detection. Due to the causal relationship between OD and EXs features, a mathematical morphology operator and a threshold technique are applied to incorporate the properties of OD features extracted from fundus images into EXs detection. The proposed CFLHCF and MMBS‒SED techniques comprise two parts. First, color normalization, contrast enhancement, noise removal, and OD localization separate the EXs from the background in the original input fundus image. Second, for EXs detection and diagnosis, the preprocessed images are given as input to Mathematical Morphology Binary Segmentation (MMBS) with the Sobel Edge Detection (SED) technique, which detects the EXs in the given fundus images and builds a highly accurate segmentation model even for small EXs regions. With the help of two independent testing sets constructed from three freely available datasets, we evaluate the proposed method in experiments for identifying bright lesions and detecting referable EXs, both of which are important for EXs screening. We also compare the performance of the proposed method with those of three expert ophthalmologists.

2. Proposed Methodology

In this section, the proposed CFLHCF and MMBS‒SED method for EXs detection is developed, comprising five stages: 1) image preprocessing, 2) elimination of the OD, 3) extraction of EXs via color component features, 4) selection of sample EXs, and 5) EXs classification. In the first stage, the fundus images are modified by illumination normalization, contrast enhancement, noise removal, color space selection, and optic disc localization. In the second stage, CFLHCF is adopted to obtain EXs candidates. Next, MMBS‒SED is used to distinguish EXs from non‒EXs candidates. The following subsections discuss the details of each stage.

2.1 Image preprocessing

We apply a histogram specification technique [20] to the Red (R), Green (G), and Blue (B) channels of the original fundus images. The intensity of the original images is normalized and equalized against reference histogram images selected by human experts. Segment reallocation and intersegment transformation are used to obtain the transformed histograms for images both with and without edges, and the two histograms are combined to get the desired histogram for the specification. The input image’s gray level, i, is mapped to another gray level, d, by Eq. (1):

$C_{\text {in }}(i)=C_{\text {desired }}(d)$                      (1)

where, Cin(i) is the cumulative distribution of the original fundus image and Cdesired(d) is the cumulative distribution of the desired (reference) histogram. In other words, we look for the gray level, d, given by Eq. (2):

$d=C_{\text {desired }}^{-1}\left(C_{i n}(i)\right)$                        (2)
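Below is a minimal NumPy sketch of this gray-level mapping; the function name and the assumption of 8-bit single-channel inputs are illustrative, and the step is applied independently to each of the R, G, and B channels.

```python
import numpy as np

def match_histogram(source, reference):
    """Map each gray level of `source` so its CDF matches that of `reference` (Eq. (2))."""
    # Cumulative distributions C_in and C_desired over the 256 gray levels.
    src_hist = np.bincount(source.ravel(), minlength=256).astype(np.float64)
    ref_hist = np.bincount(reference.ravel(), minlength=256).astype(np.float64)
    c_in = np.cumsum(src_hist) / src_hist.sum()
    c_desired = np.cumsum(ref_hist) / ref_hist.sum()
    # d = C_desired^{-1}(C_in(i)): the smallest level d whose CDF reaches C_in(i).
    lut = np.searchsorted(c_desired, c_in).clip(0, 255).astype(np.uint8)
    return lut[source]
```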

Eq. (2) transforms each gray level of the original image into the normalized image. If the input image was originally RGB, the output image is transformed back to RGB. However, at this stage the brightness gradually decreases from the image’s center toward its periphery as the distance to the center increases (see Figure 1(b)). To increase the contrast of the EXs lesions in the image, a local contrast enhancement approach [21] must be used. The local contrast method is applied as follows: given a small N×N running window w and the intensity of a color component image for the pixel range p, the color image is converted to the full‒range image fnew in [0, L‒1] with linear stretching using Eq. (3):

$f_{new}=(L-1)\left(\frac{\psi_w(p)-\psi_w(Min)}{\psi_w(Max)-\psi_w(Min)}\right)$                        (3)

where, ψw(Min) and ψw(Max) are the sigmoidal function values at the minimum and maximum of the enhanced features among the RGB color components of the fundus image p, and L=256 for 24‒bit images. The sigmoid function, also known as the standard logistic function, is generated by applying Eq. (4):

$\psi_w(p)=\left[1+\exp \left(\frac{m_w-p}{s_w}\right)\right]^{-1}$                          (4)

The local statistics in Eq. (4) are given by Eq. (5) and Eq. (6), computed for every contrast window across all images:

$m_w=\frac{1}{N^2} \sum_{(i, j) \in w(k, l)} p(i, j)$                       (5)

$s_w=\sqrt{\frac{1}{N^2} \sum_{(i, j) \in w(k, l)}\left(p(i, j)-m_w\right)^2}$                         (6)

where, mw and sw are the input image’s local mean and standard deviation, respectively, which change at every location. As usual, w(k, l) is the location of the pixel within the window and N is the window size. In Eq. (4), the gain value determines the actual contrast by controlling the speed at which the function changes from its minimum to its maximum. For the upper limit of acceptable standard deviation values and the window size, we select sw=0.2 and N=59×59, respectively. While the local enhancement method increases the luminosity of images, it also increases the brightness of noise and artifacts, which can lead to incorrectly classifying these pixels as EXs lesions. Hence, the third preprocessing step denoises the image with a median filter [22] of size 3×3, the smallest filter required in this instance to remove the most visible signs of impulse noise.
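The following sketch implements Eqs. (3)-(6) on a single channel normalized to [0, 1]; the windowed statistics use scipy.ndimage, the lower clip on s_w is a numerical-stability assumption, and evaluating ψ at the global minimum and maximum intensities is one plausible reading of ψw(Min) and ψw(Max).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_enhance(p, N=59, L=256):
    """Sigmoidal local contrast stretching of Eqs. (3)-(6) on one channel in [0, 1]."""
    p = p.astype(np.float64)
    # Local mean m_w (Eq. (5)) and local standard deviation s_w (Eq. (6)) over an NxN window.
    m_w = uniform_filter(p, size=N)
    s_w = np.sqrt(np.maximum(uniform_filter(p ** 2, size=N) - m_w ** 2, 0.0))
    s_w = np.clip(s_w, 1e-3, 0.2)   # cap at the paper's s_w = 0.2; the floor is an assumption
    # Sigmoid psi_w (Eq. (4)), then linear stretch to [0, L-1] as in Eq. (3).
    psi = 1.0 / (1.0 + np.exp((m_w - p) / s_w))
    psi_min = 1.0 / (1.0 + np.exp((m_w - p.min()) / s_w))
    psi_max = 1.0 / (1.0 + np.exp((m_w - p.max()) / s_w))
    return (L - 1) * (psi - psi_min) / (psi_max - psi_min + 1e-12)
```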

Finally, we adopted the J model selection criterion [23] to measure the efficiency of various color models and select the most suitable one. The criterion J is calculated from the within‒class (Sw) and between‒class (Sb) scatter matrices; the class separability of the region‒of‒interest pixel classes for a given color model is obtained from Eq. (7):

$J=\operatorname{trace}\left(\frac{S_b}{S_w}\right)$                        (7)

A larger J indicates that the pixel classes are better separated in the given color model. Here, Sw describes the distribution of sample points around each class mean vector, and Sb describes the scatter of the class mean vectors around the overall mean. The two scatter terms are estimated by Eq. (8) and Eq. (9), respectively.

$S_b=\sum_{i=1}^C N_i\left(M_i-M\right)^2$ with $M=\frac{1}{N} \sum_{i=1}^C N_i M_i$                          (8)

where, C is the number of color pixel classes, Ni is the number of pixels belonging to the ith color pixel class Ci, and the total number of samples in the image is $N=\sum_i N_i$.

$S_w=\sum_{i=1}^C S_i$ with $S_i=\sum_{n \in C_i}\left(X_n-M_i\right)^2$ and $M_i=\frac{1}{N_i} \sum_{n \in C_i} X_n$                              (9)

where, C is the number of clusters, Xn is the nth sample color point, and Mi is the mean vector of class Ci. Six carefully selected color models are used in this experimental testing: RGB, YIQ, LUV, LAB, HSL, and HSI.

In the experiments, we used 89 fundus images to test the effect of color model selection on the criterion J. The quantitative results for the different color models are shown in Table 1. A higher value of J indicates better separation between classes, with members within each class lying closer together. Based on this experimental testing, the final choice is the LUV color model. Therefore, this stage converts the RGB color space into LUV and evaluates the processed luminance space in combination with empirically set weights.

Table 1. A comparative analysis of color models

Images | YIQ | HSL | RGB | HSI | LAB | LUV
image001 | 1.87 | 2.11 | 2.21 | 2.20 | 3.01 | 3.09
image002 | 2.06 | 2.09 | 2.29 | 2.31 | 2.93 | 3.14
image003 | 1.99 | 2.31 | 2.18 | 2.33 | 2.88 | 3.30
image004 | 2.12 | 2.17 | 2.17 | 2.29 | 2.91 | 3.11
… | … | … | … | … | … | …
image089 | 2.09 | 2.22 | 2.31 | 2.37 | 2.89 | 3.13
Average | 2.16 | 2.19 | 2.29 | 2.31 | 2.91 | 3.12
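As a concrete reference, here is a small sketch of the criterion in Eqs. (7)-(9), treating the scatters as scalar sums of squared deviations so that trace(Sb/Sw) reduces to a simple ratio; the function name and inputs are illustrative.

```python
import numpy as np

def j_criterion(pixels, labels):
    """Class separability J of Eqs. (7)-(9) for pixels in a candidate color model.

    pixels: (N, 3) array of color values; labels: (N,) class index per pixel.
    """
    M = pixels.mean(axis=0)                        # overall mean vector
    s_b = s_w = 0.0
    for c in np.unique(labels):
        X = pixels[labels == c]
        M_i = X.mean(axis=0)                       # class mean vector M_i
        s_b += len(X) * np.sum((M_i - M) ** 2)     # between-class scatter (Eq. (8))
        s_w += np.sum((X - M_i) ** 2)              # within-class scatter (Eq. (9))
    return s_b / s_w
```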

2.2 Elimination of the optic disc (OD)

Because candidate EXs resemble the yellow OD region in fundus images in terms of shape, color, intensity, brightness, and sharpness, the elimination of the OD is a critical step in the proposed method for classifying EXs [24, 25]. The OD is usually the brightest yellow region in the fundus image; at other times, it is not discernible in the yellowish component and has to be localized in the red component. OD elimination is therefore not an easy matter. Some of these difficulties are shown in Figure 2.

In this work, the method locates the OD accurately with a mathematical morphology operator and a threshold technique. In order to remove blood vessels from the OD region and approximate the OD, we first applied an elimination method based on the morphological closing operator (Figure 3(b)): closing smooths object contours, breaks thin connections, and removes thin protrusions in the OD region. As shown in Figure 3(b), a structuring element weight of 7.8 was used for the closing. Afterward, the resultant image was binarized by automatic thresholding using the OTSU threshold [26] (see Figure 3(c)); the result is a binary image in which the detected OD region is marked with 1’s and the background of the fundus image with 0’s. All the OD pixels from Figure 3(c) were then used to create the candidate OD region in Figure 3(d). The largest bright component did not always correspond to the true OD pixels, which is this step’s fundamental weakness. As a remedy, a mathematical dilation operator with a flat disk‒shaped structuring element weight of 7.5 was applied to extend the OD region; the result is shown in Figure 3(e) (the OD region is enlarged to accommodate the dilation). Then, the OD was binarized using thresholding with a weight of 0.65 (Figure 3(f)). As previously mentioned, the associated OD components were included in the new binary image, and the best of these candidates was finally localized: the OD was classified as the fundus image’s largest circular connected component. The OD region was found to be cropped to ∼3,200 pixels in the fundus image. With the OD localized, the elimination step can process only the OD pixels and omit the background pixels by applying Eq. (10) [24, 25].

$C=\frac{4 \pi \cdot { Area }}{ { Perimeter }^2}$                        (10)

where, C is the compactness of the OD region (the ratio of its area to the square of its perimeter, scaled by 4π, which equals 1 for a perfect circle), Area denotes the size of the OD candidate area, and Perimeter is the number of pixels along its boundary (see Figure 3(g)). A binary mask of the boundary candidates is then overlaid on the original image (see Figure 3(h)).
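A compact OpenCV sketch of this OD localization chain is given below; the disk sizes and the area-weighted compactness score are assumptions for illustration, since the paper's structuring-element "weights" (7.8 and 7.5) do not translate directly into kernel sizes.

```python
import cv2
import numpy as np

def locate_optic_disc(gray):
    """Closing -> Otsu threshold -> dilation -> compactness test (Eq. (10))."""
    # Morphological closing suppresses vessels crossing the bright OD region.
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, se)
    # Automatic (Otsu) thresholding: candidate OD pixels become 1's, background 0's.
    _, binary = cv2.threshold(closed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.dilate(binary, se)        # extend the candidate region (Figure 3(e))
    # Keep the largest, most circular connected component.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best, best_score = None, 0.0
    for cnt in contours:
        area, perim = cv2.contourArea(cnt), cv2.arcLength(cnt, True)
        if perim == 0:
            continue
        compactness = 4.0 * np.pi * area / perim ** 2   # Eq. (10): 1.0 for a circle
        score = compactness * area                      # favor large circular regions
        if score > best_score:
            best, best_score = cnt, score
    return best
```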

Figure 2. Structure of OD and varying size, color, intensity, and location in fundus images

Figure 3. Output images of the OD elimination. (a) preprocessed image, (b) applying the morphological closing operator, (c) thresholded image using the OTSU threshold method, (d) OD eliminated on the preprocessed image, (e) applying the morphological dilation operator, (f) thresholded image with a weight of 0.65, (g) candidate OD regions using Eq. (10), (h) final OD elimination superimposed on the original image

2.3 Extraction of exudates via color features and local homogeneity

Given a fundus image, the function J from the preceding section converts the original image into LUV space. At this point, the fundus image is assumed to contain P pixels and C initialized cluster centers, and the grid interval (M) is calculated as $M=\sqrt{P / C}$. To calculate the distances between pixels and cluster centers, color information values are taken from the smallest possible 3×3 neighborhood representing the EXs region. Each cluster center is then updated with the centroid of its assigned pixels, and this procedure is repeated until the distance between successive cluster centers no longer changes. In Color Features and Local Homogeneity (CFLH), the distance (D) between the cluster centers and the R, G, B or L, U, V pixel values is calculated [27]. The distance between a pixel and a cluster center at positions x and y is defined by Eq. (11) and Eq. (12):

$Distance =\sqrt{D_{RGB}^2+\left(\frac{m}{M}\right)^2 D_{XY}^2}$                         (11)

$Distance =\sqrt{D_{LUV}^2+\left(\frac{m}{M}\right)^2 D_{XY}^2}$                        (12)

where, the weight factor m ranges from 0 to 255 (RGB) or 0 to 1 (LUV), M is the region size, and x and y are the positions of the EXs pixel in the fundus image. In CFLH, several regions are extracted as features, including blood vessels, red spots, and microaneurysms, all of which relate to non‒EXs. The EXs extraction is based on color information with in-depth features to discriminate between EXs and non‒EXs components; EXs lesions appear as yellow dots and are more visible than the surrounding tissue. Because the authors and supervisors had no medical expertise in fundus images when the research began, collecting ground truth data on EXs lesions was necessary: three professional ophthalmologists highlighted the EXs locations on the fundus images (see Figure 4), from which the class of the EXs can be selected.
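A minimal sketch of the combined color/spatial distance in Eqs. (11)-(12) follows; the function name and arguments are illustrative.

```python
import numpy as np

def cflh_distance(color_px, color_ctr, xy_px, xy_ctr, m, M):
    """Distance of Eqs. (11)-(12) between a pixel and a cluster center."""
    d_color = np.linalg.norm(np.asarray(color_px, float) - color_ctr)  # D_RGB or D_LUV
    d_xy = np.linalg.norm(np.asarray(xy_px, float) - xy_ctr)           # spatial D_XY
    return np.sqrt(d_color ** 2 + (m / M) ** 2 * d_xy ** 2)
```

With the grid interval M = √(P/C), each pixel is assigned to the nearest center under this distance, and the centers are updated with the assigned centroids until convergence.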

Figure 4. Marking EXs regions with an expert ophthalmologist

The whole EXs regions appear as yellowish areas in the color component features; hence, these regions and their color shading were targeted for classifying the EXs regions. Following the careful color model selection of the previous section, we compared RGB and LUV color information, both well suited for classifying the whole EXs region. The color information values in the smallest possible 3×3 neighborhood used to represent the EXs region in each model are shown in Table 2.

Table 2. Color information values representing the EXs region in the smallest possible 3×3 neighborhood

Images | 3×3 neighborhood values
image001 (RGB) | {R:226, G:193, B:0}, {R:225, G:192, B:1}, {R:221, G:186, B:0}, {R:224, G:191, B:2}, {R:224, G:189, B:3}, {R:220, G:184, B:0}, {R:220, G:184, B:0}, {R:219, G:183, B:1}, {R:216, G:179, B:0}
image001 (LUV) | {L:0.90, U:0.47, V:0.99}, {L:0.91, U:0.46, V:0.98}, {L:0.90, U:0.46, V:0.99}, {L:0.90, U:0.46, V:0.98}, {L:0.91, U:0.45, V:0.98}, {L:0.90, U:0.46, V:0.98}, {L:0.90, U:0.46, V:0.99}, {L:0.91, U:0.46, V:0.99}, {L:0.91, U:0.46, V:0.99}
In order to distinguish EXs candidates from background candidates, this set of color information was used to show that the yellowish color correlates with the EXs region in terms of intensity. In this way, the EXs pixels in the RGB and LUV channels (see Table 2) show that EXs contrast differently with non‒EXs across the channels.

2.4 Candidates of exudates using contextual features

In the preceding section, the color component feature analysis uses yellow to represent the whole EXs region, while red, brown, and orange represent non‒EXs regions. An example of the proposed novel contextual feature is illustrated in Figure 5: the yellow pixels mark the position of the current EXs candidate region in the RGB channel, and the corresponding neighboring EXs positions are marked with blue circles.

Figure 5. Example of a manually cropped image containing EXs regions of interest, (a) original fundus images; (b) color information, and a close‒up view of the EXs‒region

Essentially, the proposed contextual feature is an adjustment to the mean RGB value of each candidate. To distinguish EXs (yellow features) from other color features, the RGB distance between neighboring pixels is taken into consideration. In RGB color information, the possible color values range from 0 to 255 (R: 0‒255, G: 0‒255, B: 0‒255), so the minimum (Min) and maximum (Max) pixel intensities within the RGB (red, green, and blue) color model are 0 (no color) and 255 (full color). The LUV color model, in contrast, describes the color’s brightness or intensity from 0 to 1. Local and global mean values inside each color channel characterize the EXs and non‒EXs regions and are used in the proposed novel contextual feature to calculate the average color value in the RGB and LUV images. Let Lmean denote the local mean intensity of a color channel and ConFeature the novel contextual feature; they are defined by Eq. (13) and Eq. (14), respectively.

$L_{\text {mean }}=\frac{\text { Total intensity values in the window }}{\text { Number of pixels in the window }}$                        (13)

$Con_{\text {Feature }}=\frac{\text { Total value of the local means }}{\text { Number of windows }}$                       (14)

To see how the local mean intensity yields values between the Min and Max (i.e., weighted averages), consider the relationship in Table 2 between the intensity values in a window and the window size. For the red (R) color channel values (226, 225, 221, 224, 224, 220, 220, 219, 216), the local mean is the sum of the point’s nine neighbors divided by 9: the total of 1,995 divided by 9 gives a weighted average of 221. This value of 221 is the weighted average of the EXs in the Red (R) color channel over the smallest possible 3×3 neighborhood. The neighborhood’s center is then moved to the next nearby location, and the process is repeated to produce the subsequent value of the output image for each window; the weight of 221 thus depends on the 3×3 square in the red color channel. The neighborhood’s average pixel intensity becomes an intensity value of its own (also called local homogeneity). Similarly, the total value of the local mean intensities over the same metric defines the novel contextual feature of Eq. (14). For example, if the weighted averages over five windows are 221, 221, 212, 198, and 201, the contextual feature is their total of 1,053 divided by 5, approximately 210.
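The worked numbers above can be reproduced with a few lines of NumPy; the variable names are illustrative, and on a full channel the per-pixel local mean of Eq. (13) is a single windowed-average call (e.g., scipy.ndimage.uniform_filter with size=3).

```python
import numpy as np

# Local mean (Eq. (13)): 3x3 average around one pixel, i.e., local homogeneity.
window = np.array([226, 225, 221, 224, 224, 220, 220, 219, 216], dtype=np.float64)
local_mean = window.sum() / window.size              # 1995 / 9 ≈ 221

# Contextual feature (Eq. (14)): average of the local means over a run of windows.
local_means = np.array([221, 221, 212, 198, 201], dtype=np.float64)
con_feature = local_means.sum() / local_means.size   # 1053 / 5 ≈ 210
```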

This quantity is sometimes called the global mean intensity, and this single value represents the novel contextual feature for classifying the EXs regions in the red channel. For instance, the contextual-feature confidences of the RGB color channels are 210, 189, and 201, while those of the LUV channels are 0.90, 0.46, and 0.99, respectively; the resulting EXs classification is depicted in Figure 6. As Figure 6(a) shows, the novel contextual feature obtains good RGB classification results on “imag016.png” from DiaretDB0, and Figure 6(b) shows the segmentation results in LUV space.

2.5 Exudates classification using MMBS-SED

Following the extraction of the candidate EXs regions in the previous phase, the classification errors for various thresholds are calculated: thresholds ranging from 0 to 1 are enumerated at this stage, and the threshold with minimum classification error is adopted. The color features, local homogeneity, and contextual features can be regarded as the main landmarks for distinguishing EXs from other features in fundus images, and they play an important role in retinal image analysis. The proposed CFLHCF and MMBS‒SED methods rely on two assumptions: first, EXs appear as yellow, bright regions and are located using an optimal threshold and mathematical morphology; second, the EXs shape is delineated using the Sobel edge detector. The EXs detection method therefore consists of the following two parts:

1) Optimal threshold and mathematical morphology to locate the EXs regions.

2) Candidate EXs regions selection using Sobel edge detector.

Step 1: Locate the EXs by computing the histogram probabilities of the fundus image using Eq. (15) [26].

$P(i)=\frac{ { number }\{(r, c) \mid {image}(r, c)=i\}}{R \times C}$                        (15)

where, R and C are the numbers of rows and columns of the image, r and c index a row and a column, and P(i) is the histogram probability of gray level i.

Step 2: Calculate the within‒class variance of the EXs and non‒EXs pixels. The within‒class variance is defined by Eqs. (16a)-(16b).

$\sigma_{ {Within }}^2(t)=w_B(t)\, \sigma_B^2(t)+w_E(t)\, \sigma_E^2(t)$                      (16a)

and

$w_B(t)=\sum_{i=0}^{t-1} p(i)$                       (16b)

where, $\sigma_{ {Within }}^2$ is the weighted sum of the variances of the two clusters, $\sigma_B^2(t)$ is the variance of the pixels below the threshold (background), $\sigma_E^2(t)$ is the variance of the pixels above the threshold (EXs), and $w_{B}(t)$ and $w_{E}(t)$ are the weights of the background and EXs pixels, respectively.

Step 3: Compute the between‒class variance of the EXs and non‒EXs pixels with the optimal OTSU method, which is far faster than a simple threshold search over the within‒class variance. The between‒class variance is defined as Eq. (17).

$\begin{aligned} \sigma_{ {Between }}^2(t) & =\sigma^2-\sigma_{ {Within }}^2(t) \\ & =w_B(t)\left(\mu_B(t)-\mu\right)^2+w_E(t)\left(\mu_E(t)-\mu\right)^2 \\ & =w_B(t)\, w_E(t)\left(\mu_B(t)-\mu_E(t)\right)^2\end{aligned}$                           (17)

where, $\sigma^2$ and μ are the combined variance and overall mean of the image, respectively. As an example, an optimal threshold value of 0.68 is selected, and the result is shown in Figure 6(c). Selecting a threshold higher than 0.68 does not give a better result, since dark EXs regions are then assigned to the non‒EXs class.
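Steps 1-3 amount to the standard Otsu search; a minimal sketch for an 8-bit channel is shown below, with illustrative names.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing the between-class variance of Eq. (17)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                          # histogram probabilities P(i), Eq. (15)
    levels = np.arange(256, dtype=np.float64)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w_b, w_e = p[:t].sum(), p[t:].sum()        # class weights, Eq. (16b)
        if w_b == 0.0 or w_e == 0.0:
            continue
        mu_b = (levels[:t] * p[:t]).sum() / w_b    # background mean
        mu_e = (levels[t:] * p[t:]).sum() / w_e    # EXs-candidate mean
        var_between = w_b * w_e * (mu_b - mu_e) ** 2   # Eq. (17)
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```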

Step 4: Apply mathematical morphology to locate the EXs regions. As the example in Figure 6(c) shows, thresholding alone does not give an adequate segmentation, so a mathematical morphology approach based on the dilation operator is applied at this stage. Candidate pixels are merged into larger regions based on predefined criteria; additional criteria such as region area, size, and likeness between EXs and non‒EXs pixels increase the power of the segmentation. The morphological processing and the size of the structuring element are determined as follows:

In the first round, white pixels that cannot yet be marked as EXs regions or their neighborhood pixels are identified. In the second round, the white pixels are dilated with a flat structuring element of increasing size, and in the third round dilation is performed with a flat structuring element of size 7.4. In the fourth round, mathematical erosion removes small pixels from the dilated mask, using a structuring element of the same size as that used for dilation. In the fifth and sixth rounds, the algorithm computes the area of each candidate region in the mask so that all EXs regions can be classified. In the seventh and eighth rounds, if a region’s compactness exceeds a preset threshold, its pixels are considered EXs; otherwise, in the ninth and tenth rounds, they are regarded as part of the non‒EXs region. Once all EXs pixels have been marked, processing stops, and the resulting binary masks are given as input to the next stage of boundary detection.
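These dilate/erode rounds with the area-and-compactness check might be sketched as follows; the structuring-element size and compactness threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def refine_exudate_mask(mask, se_size=7, compactness_thresh=0.3):
    """Grow, clean, and compactness-filter a binary EXs candidate mask (Step 4)."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_size, se_size))
    grown = cv2.dilate(mask, se)       # rounds 2-3: dilate candidate EXs pixels
    cleaned = cv2.erode(grown, se)     # round 4: erode with the same element
    out = np.zeros_like(cleaned)
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:               # rounds 5-10: area and compactness test
        area, perim = cv2.contourArea(cnt), cv2.arcLength(cnt, True)
        if perim > 0 and 4.0 * np.pi * area / perim ** 2 > compactness_thresh:
            cv2.drawContours(out, [cnt], -1, 255, thickness=-1)  # keep as an EXs region
    return out
```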

Step 5: The boundaries of the EXs are detected at this stage using four different edge detectors: the Roberts, Sobel, Prewitt, and Canny detectors. The Sobel operator [28] computes a gradient over a 3×3 neighborhood via convolution with two kernels; the Roberts detector [28] approximates the gradient around the central pixel with a pair of 2×2 kernels; the Prewitt detector [28] resembles Sobel with uniform kernel weights; and the Canny detector [29] locates edges at the zero-crossings of the second derivative, with a negligible risk of missing weak edges. The experiments suggest that the Sobel edge detector is best for detecting EXs boundaries (see Figure 6(d)): compared to the other three detectors, it identifies true EXs boundaries with the minimum number of errors.
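For reference, a Sobel boundary map of the binary EXs mask can be produced in a few lines; the gradient-magnitude formulation here is a common sketch, not necessarily the paper's exact implementation.

```python
import cv2
import numpy as np

def sobel_boundaries(mask):
    """Boundary map of a binary EXs mask from the 3x3 Sobel gradient magnitude."""
    gx = cv2.Sobel(mask, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(mask, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return (magnitude > 0).astype(np.uint8) * 255     # nonzero gradient = boundary
```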

Figure 6. CFLHCF and MMBS‒SED detection results for imag016.png from DiaretDB0. (a) the detection results in RGB space, (b) the detection results in LUV space, (c) the thresholding result with 0.68, (d) applying the Sobel edge detector

3. Dataset and Performance Evaluation Metrics

The effectiveness of the proposed approach is validated in this section through a comprehensive explanation of three publicly available datasets and performance evaluation criteria. Three criteria have been used to evaluate the performance of the proposed method: 1) sensitivity, 2) specificity and 3) accuracy to segment the EXs. Finally, the outcomes of the proposed approach are evaluated against current best practices. The details are outlined below.

3.1 Datasets

Different publicly available annotated fundus image datasets have multiple objectives, characteristics, and levels of completion. The most important component is the ground truth data, which offers the benchmark against which algorithms can be developed and evaluated. Based on their ground truth and number of images, the most frequently used retina datasets include DiaretDB0 [30], DiaretDB1 [31], STARE [32], MESSIDOR Digital Fundus Images [33] (MESSIDOR), the Kaggle dataset [34], and the Retinopathy Online Challenge (ROC) [35]. Because this work focuses on EXs detection in fundus images, three publicly available datasets are used in this study: DiaretDB0, DiaretDB1, and STARE. The type of problem and the approaches developed by researchers and specialists determine the choice and importance of datasets for data processing. The publicly available DiaretDB0 dataset was created by Tomi Kauppi et al. and contains 130 color fundus images, of which 20 are normal and 110 are considered DR images. From the publicly available DiaretDB1 dataset, we used the ground truth EXs information of its 89 fundus images: 5 are normal, while 84 are abnormal and were annotated by four ophthalmologists. These high-resolution images were taken at a 50-degree field of view (FOV) and are 1500×1152 pixels in size (1,728,000 pixels per image). The STARE dataset, created at the University of California, San Diego, consists of 400 fundus images captured at a 35-degree FOV, of which 322 were classified as non‒EXs and 78 as EXs; the images are 700×605 pixels with 24 bits per pixel.

3.2 Assessment of detection performance

Three different kinds of evaluation criteria (e.g., Sensitivity (SEN), Specificity (SPEC), and Accuracy (ACC)) are used to verify and evaluate the efficacy of the CFLHCF and MMBS‒SED method. SEN is the percentage of EXs that the process correctly identified as EXs. SPEC represents the percentage of non‒EXs detected as non‒EXs by the process. These criteria are defined as Eq. (18)-(20).

$Sensitivity=\frac{T P}{T P+F N} \times 100$                     (18)

$Specificity=\frac{T N}{T N+F P} \times 100$                      (19)

where, TP (True Positive) is the number of EXs pixels correctly classified as EXs, TN (True Negative) is the number of non‒EXs pixels correctly classified as non‒EXs, FN (False Negative) is the number of EXs pixels incorrectly classified as non‒EXs, and FP (False Positive) is the number of non‒EXs pixels incorrectly classified as EXs. Moreover, we used the percentage ACC performance measure in this study, defined as Eq. (20), to evaluate the method’s overall effectiveness.

$Accuracy=\frac{T N+T P}{T N+T P+F N+F P} \times 100$                       (20)
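These pixel-based measures reduce to a few arithmetic operations on the four counts; a trivial helper (illustrative name) is shown below.

```python
def evaluate(tp, tn, fp, fn):
    """Pixel-based SEN, SPEC, and ACC of Eqs. (18)-(20), in percent."""
    sen = 100.0 * tp / (tp + fn)                   # Eq. (18)
    spec = 100.0 * tn / (tn + fp)                  # Eq. (19)
    acc = 100.0 * (tp + tn) / (tp + tn + fp + fn)  # Eq. (20)
    return sen, spec, acc
```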

4. Results

In this section, experiments based on the three freely accessible datasets are used to evaluate the proposed method. The method is assessed using pixel-based evaluation criteria, in which connected-component pixel validation is used to count the number of correctly identified pixels. Based on the above descriptions, the SEN, SPEC, and ACC values were calculated by pixel-based evaluation on the DiaretDB0, DiaretDB1, and STARE datasets under varying regions and sizes. The details are described below.

4.1 The performance of the proposed method

The technique was developed using MATLAB 2019b on a desktop with an Intel(R) Core(TM) i7‒6700K CPU at 4.00 GHz and 8 GB RAM. In total, 297 fundus images were used to test the method. Table 3 reports the performance achieved on all datasets with the RGB and LUV color models. On DiaretDB0, the method attains an SEN of 98.12%, a SPEC of 98.08%, and an ACC of 98.10%. The experimental results on the DiaretDB1 dataset show an EXs classification SEN, SPEC, and ACC of 98.41%, 98.29%, and 98.32%, respectively. Finally, applying the proposed method to STARE yielded an SEN, SPEC, and ACC of 98.79%, 99.81%, and 99.34%, respectively. The proposed method is thus excellent for EXs classification with the LUV color model on STARE, where it obtained the best sensitivity and specificity values; note, however, that STARE’s fundus images are smaller than those in the DiaretDB0 and DiaretDB1 databases. Excellent segmentation results for fundus images “img013.png” and “img016.png” from DiaretDB0 and DiaretDB1 with the RGB and LUV color models, covering both large and small EXs regions, are shown in Figure 7. Example segmentation results for fundus images “img0306.png” and “img0359.png” from STARE with the RGB and LUV color models, likewise including large and small EXs regions, are shown in Figure 8.

Figure 7. Visual examples of the proposed method for EXs detection on the DiaretDB0 and DiaretDB1 datasets, where the regions outlined in black represent EXs. (a)‒(a4) EXs segmentation results of fundus image “img013.png” from the DiaretDB0 dataset with RGB, (b)‒(b4) EXs segmentation results of fundus image “img013.png” from the DiaretDB0 dataset with LUV, (c)‒(c4) EXs segmentation results of fundus image “img016.png” from the DiaretDB1 dataset with RGB, (d)‒(d4) EXs segmentation results of fundus image “img016.png” from the DiaretDB1 dataset with LUV

4.2 Comparison with the state-of-the-art method

An efficient method to segment and detect EXs in fundus images has been proposed, and the literature contains algorithms for segmenting EXs with different techniques. Pereira et al. [36] offered double thresholding and an ant colony algorithm for classifying EXs in fundus images; their method achieved relatively good results on HEI‒MED, with an SEN of 80.82% and a SPEC of 99.16%. Naqvi et al. [37] proposed a scale-invariant feature transform with K‒means clustering for classifying EXs on DiaretDB1, reporting an SEN of 92.70% and a SPEC of 81.02%. Annunziata et al. [38] used the green color channel for detecting EXs and reported an SEN of 71.28% and a SPEC of 98.36%. Imani and Pourreza [39] proposed a dynamic thresholding and mathematical morphology method for classifying EXs on DiaretDB1, reporting an SEN of 89.01% and a SPEC of 99.93%. Kaur and Mittal [40] used iterative clustering to find EXs on STARE, reporting 96.41% SEN and 96.57% SPEC on 81 fundus images. Mo et al. [41] combined a probability map with a maximal-probability region for identifying true hard EXs, reporting an SEN of 92.55% and a predictive value of 71%, although their system was tested on only 169 fundus images. A comprehensive comparative analysis of EXs detection against the state‒of‒the‒art methods is given in Table 4.

Table 3. Comparison of exudates detection performance on three different publicly available datasets

Databases | SEN (RGB) | SPEC (RGB) | ACC (RGB) | SEN (LUV) | SPEC (LUV) | ACC (LUV)
DiaretDB0 (130 images) | 89.01 | 88.90 | 88.04 | 98.12 | 98.08 | 98.10
DiaretDB1 (89 images) | 90.89 | 89.94 | 89.16 | 98.41 | 98.29 | 98.32
STARE (78 images) | 91.80 | 91.23 | 91.40 | 98.79 | 99.81 | 99.34

All values are performance measures in percent.

Figure 8. Visual examples of the proposed method for EXs detection on the STARE dataset, where the regions outlined in black represent EXs. (a)‒(a4) EXs segmentation results of fundus image “img0306.png” from the STARE dataset with RGB, (b)‒(b4) EXs segmentation results of fundus image “img0306.png” from the STARE dataset with LUV, (c)‒(c4) EXs segmentation results of fundus image “img0359.png” from the STARE dataset with RGB, (d)‒(d4) EXs segmentation results of fundus image “img0359.png” from the STARE dataset with LUV

Table 4. Comparison of the obtained EXs detection performance of state‒of‒the‒art methods with the proposed method

Author | Year | Method | Journal | Sensitivity (%) | Specificity (%) | Accuracy (%)
Rahim et al. [4] | 2016 | Fuzzy image processing | Brain Informatics | - | - | 93.00
Paing et al. [5] | 2016 | ANN | IEEE | 95.00 | 95.00 | 96.00
Omar et al. [6] | 2016 | Region-based multiscale LBP texture | Inter. Conf. on Control, Decision, and Information Technologies | 98.68 | 94.81 | 96.73
Tennakoon et al. [7] | 2016 | CNN | In Proc. Ophthalmic Medical Image Analysis | 98.27 | 99.12 | 97.46
Gondal et al. [8] | 2017 | CNN | IEEE Inter. Conf. on Image Processing | 93.60 | 97.60 | -
Kwasigroch et al. [9] | 2018 | Deep CNN | IEEE International Interdisciplinary Ph.D. Workshop | 89.50 | 50.50 | 81.70
Kaur and Mittal [10] | 2018 | Dynamic decision thresholding | Biocybernetics and Biomedical Engineering | 88.85 | 96.15 | 93.46
Lam et al. [13] | 2018 | CNN | Investigative Ophthalmology Visual Science | - | - | 98.00
Zhao et al. [14] | 2018 | R-sGAN technique | IEEE Transactions on Medical Imaging | 79.01 | 97.95 | -
Chowdhury et al. [15] | 2019 | Random forest classifier-based approach | Medical & Biological Engineering & Computing | 86.41 | 77.39 | 80.61
Khojasteh et al. [16] | 2019 | Deeply learnable features | Computers in Biology and Medicine | 97.60 | - | 99.00
Karkuxhali and Manimegalai [17] | 2019 | Intensity variation and inverse surface adaptive thresholding | Biocybernetics and Biomedical Engineering | 97.43 | 98.87 | -
Wang et al. [18] | 2020 | Deep model learned information and multi-feature joint representation | Computer Methods and Programs in Biomedicine | 94.77 | - | -
Naqvi et al. [37] | 2015 | Scale-invariant feature | Computers in Biology and Medicine | 97.18 | 83.10 | 95.02
Annunziata et al. [38] | 2016 | Green channel homogenization | IEEE Journal of Biomedical and Health Informatics | 71.28 | 98.36 | 95.62
Imani and Pourreza [39] | 2016 | Dynamic thresholding and morphological processing | Computer Methods and Programs in Biomedicine | 89.01 | 99.93 | -
Kaur and Mittal [40] | 2018 | Image intensity and vascular information | Biocybernetics and Biomedical Engineering | - | 96.41 | 96.57
Mo et al. [41] | 2018 | Cascaded deep residual networks | Neurocomputing | 92.55 | - | -
Proposed method | 2022 | CFLHCF and MMBS-SED | IIETA | 98.79 | 99.81 | 99.34

5. Conclusions

In this paper, the proposed method based on color features, local homogeneity, and a novel contextual feature was successfully used to detect EXs in fundus images. The fundus images and their ground truth information were provided by the DiaretDB0, DiaretDB1, and STARE datasets. Because the quality of fundus images varies, three imaging techniques were proposed for classifying EXs regions in poor‒quality images. The yellowish feature extraction process then classified the candidate regions into EXs and non‒EXs. In addition to thresholding, two techniques were applied in the EXs segmentation: an optimal thresholding method for candidate EXs and a mathematical morphology-based method for detecting small EXs regions. Figures 7 and 8 make clear that the proposed method overcomes the problem of EXs detection in poor fundus images. Moreover, unlike most prior research, the proposed method was applied to poor‒quality fundus images and worked well on most of them. The outcomes demonstrate that this technique can help a specialist segment a fundus image into EXs and non‒EXs, supporting early screening.

6. Future Directions

Image processing methods have been proposed to diagnose DR in fundus images. In the future, we will combine different lesion segmentation methods based on machine learning optimizations and double thresholding [42] to classify DR into cotton wool spots, microaneurysms, hemorrhages, and EXs. If the final detection result of this combination is sufficiently good, it is possible to automate the early screening of EXs in fundus images. The automated screening application would reduce experts’ workload since only fundus images, identified as normal or abnormal by the automatic application, need to be further examined by experts.

Acknowledgment

This research project was financially supported by Mahasarakham University, Thailand. The author also thanks the following expert ophthalmologists: M.D. Ekkarat Pothiruk, KhonKaen Hospital, Thailand, M.D. Wiranut Suttisa, Kantharawichai Hospital, Thailand, and M.D. Sakrit Moksiri, Borabue Hospital, Thailand, for their kind provision of the OD and EXs annotations for this study. Finally, we thank Assist. Prof. Dr. Intisarn Chaiyasuk of Mahasarakham University for checking our English.

References

[1] Leontidis, G. (2017). A new unified framework for the early detection of the progression to diabetic retinopathy from fundus images. Computers in Biology and Medicine, 90: 98-115. https://doi.org/10.1016/j.compbiomed.2017.09.008

[2] Mo, J., Zhang, L., Feng, Y. (2018). Exudate-based diabetic macular edema recognition in fundus images using cascaded deep residual networks. Neurocomputing, 290: 161-171. https://doi.org/10.1016/j.neucom.2018.02.035

[3] Kaur, J., Mittal, D. (2018). Estimation of severity level of non-proliferative diabetic retinopathy for clinical aid. Biocybernetics and Biomedical Engineering, 38(3): 708-732. https://doi.org/10.1016/j.bbe.2018.05.006

[4] Rahim, S.S., Palade, V., Shuttleworth, J., Jayne, C. (2016). Automatic screening and classification of diabetic retinopathy and maculopathy using fuzzy image processing. Brain Informatics, 3(4): 249-267. https://doi.org/10.1007/s40708-016-0045-3

[5] Paing, M.P., Choomchuay, S., Yodprom, M.D.R. (2016). Detection of lesions and classification of diabetic retinopathy using fundus images. Biomedical Engineering International Conference (BMEiCON), pp. 1-5. https://doi.org/10.1109/BMEiCON.2016.7859642

[6] Omar, M., Khelifi, F., Tahir, M.A. (2016). Detection and classification of fundus images exudates using region based multiscale LBP texture approach. International Conference on Control, Decision, and Information Technologies (CoDIT), pp. 227-232. https://doi.org/10.1109/CoDIT.2016.7593565

[7] Tennakoon, R., Mahapatra, D., Roy, P., Sedai, S., Garnavi, R. (2016). Image quality classification for DR screening using convolutional neural networks. Ophthalmic Medical Image Analysis Third International Workshop, pp. 1-9. https://doi.org/10.17077/omia.1054

[8] Gondal, W.M., Kohler, J.M., Grzeszick, R., Fink, G.A., Hirsch, M. (2017). Weakly-supervised localization of diabetic retinopathy lesions in fundus images. IEEE International Conference on Image Processing (ICIP), pp. 2069-2073. https://doi.org/10.1109/ICIP.2017.8296646

[9] Kwasigroch, A., Jarzembinski, B., Grochowski, M. (2018). Deep CNN based decision support system for detection and assessing the stage of diabetic retinopathy. International Interdisciplinary PhD Workshop (IIPhDW), pp. 111-116. https://doi.org/10.1109/IIPHDW.2018.8388337

[10] Kaur, J., Mittal, D. (2018). A generalized method for the segmentation of exudates from pathological fundus images. Biocybernetics and Biomedical Engineering, 38(1): 27-53. https://doi.org/10.1016/j.bbe.2017.10.003

[11] Kwasigroch, A., Jarzembinski, B., Grochowski, M. (2018). Deep CNN based decision support system for detection and assessing the stage of diabetic retinopathy. In 2018 International Interdisciplinary PhD Workshop (IIPhDW), pp. 111-116. https://doi.org/10.1109/IIPHDW.2018.8388337

[12] Seth, S., Agarwal, B. (2018). A hybrid deep learning model for detecting diabetic retinopathy. Journal of Statistics and Management Systems, 21(4): 569-574. https://doi.org/10.1080/09720510.2018.1466965

[13] Lam, C., Yu, C., Huang, L., Rubin, D. (2018). Fundus lesion detection with deep learning using image patches. Investigative Ophthalmology Visual Science, 59(1): 590–596. https://doi.org/10.1167/iovs.17-22721

[14] Zhao, H., Li, H., Maurer, S.S., Guo, Y., Deng, Q., Cheng, L. (2019). Supervised segmentation of un-annotated fundus images by synthesis. IEEE Transactions on Medical Imaging, 38(1): 46-56. https://doi.org/10.1109/TMI.2018.2854886

[15] Chowdhury, A.R., Chatterjee, T., Banerjee, S. (2019). A random forest classifier-based approach in the detection of abnormalities in the retina. Medical & Biological Engineering & Computing. 57(1): 193–203. https://doi.org/10.1007/s11517-018-1878-0

[16] Khojasteh, P., Júnior, L.A.P., Carvalho, T., Rezende, E., Aliahmad, B., Papa, J.P., Kumar, D.K. (2019). Exudate detection in fundus images using deeply learnable features. Computers in Biology and Medicine, 104: 62-69. https://doi.org/10.1016/j.compbiomed.2018.10.031

[17] Karkuxhali, S., Manimegalai, D. (2019). Robust intensity variation and inverse surface adaptive thresholding techniques for detection of optic disc and exudates in fundus images. Biocybernetics and Biomedical Engineering, 39(3): 753-764. https://doi.org/10.1016/j.bbe.2019.07.001

[18] Wang, H., Yuan, G., Zhao, X., Peng, L., Wang, Z., He, Y., Qu, C., Peng, Z. (2020). Hard exudate detection based on deep model learned information and multi-feature joint representation for diabetic retinopathy screening. Computer Methods and Programs in Biomedicine, 191: 1-16. https://doi.org/10.1016/j.cmpb.2020.105398

[19] Auccahuasi, W., Flores, E., Sernaque, F., Cueva, J., Diaz, M. (2020). Recognition of hard exudates using Deep Learning. Procedia Computer Science, 167: 2343-2353. https://doi.org/10.1016/j.procs.2020.03.287

[20] Hussain, K., Rahman, S., Rahman, M., Khaled, S.M., Wadud, M.A.A. (2018). A histogram specification technique for dark image enhancement using a local transformation method. IPSJ Transactions on Computer Vision and Applications, 10(3): 1-11. https://doi.org/10.1186/s41074-018-0040-0

[21] Zhang, J., Kamata, S.I. (2008). Adaptive local contrast enhancement for the visualization of high dynamic range images. 19th International Conference on Pattern Recognition, pp. 1-4. https://doi.org/10.1109/ICPR.2008.4761893

[22] Osareh, A., Shadgar, B., Markham, R. (2009). A computational intelligence-based approach for detection of exudates in diabetic retinopathy images. IEEE Transactions on Information Technology in Biomedicine, 13(4): 535-545. https://doi.org/10.1109/TITB.2008.2007493

[23] Wisaeng, K., Sa-ngiamvibool, W. (2018). Improved fuzzy c-means clustering in the process of exudates detection using mathematical morphology. Soft Computing, 22: 2753-2764. https://doi.org/10.1007/s00500-017-2532-8

[24] Wisaeng, K., Sa-ngiamvibool, W. (2018). Automatic detection and recognition of optic disk with maker-controlled watershed segmentation and mathematical morphology in color retinal images. Soft Computing, 22: 6329-6339. https://doi.org/10.1007/s00500-017-2681-9

[25] Wisaeng, K., Sa-ngiamvibool, W. (2019). Exudates detection using morphology mean shift algorithm in retinal images. IEEE Access, 7: 11946-11958. https://doi.org/10.1109/ACCESS.2018.2890426

[26] Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1): 62-66. https://doi.org/10.1109/TSMC.1979.4310076

[27] Zhou, W., Wu, C., Yi, Y., Du, W. (2017). Automatic detection of exudates in digital color fundus images using superpixel multi-feature classification. IEEE Access, 5: 17077-17088. https://doi.org/10.1109/ACCESS.2017.2740239

[28] Girish, N.C., Daruwala, R.D., Manoj, S.G. (2015). Comparisons of Robert, Prewitt, Sobel operator-based edge detection methods for real time uses on FPGA. IEEE International Conference on Technologies for Sustainable Development (ICTSD), pp. 1-4. https://doi.org/10.1109/ICTSD.2015.7095920

[29] Laaroussi, S., Baataoui, A., Halli, A., Khalid, S. (2019). A dynamic mosaicking method for finding an optimal seamline with Canny edge detector. Procedia Computer Science, 148: 618-626. https://doi.org/10.1016/j.procs.2019.01.050

[30] DiaretDB0: Standard Diabetic Retinopathy Database Calibration level 0, 2021. https://www.it.lut.fi/project/imageret/diaretdb0, accessed on Jan. 29, 2021.

[31] DiaretDB1: Standard Diabetic Retinopathy Database Calibration level 1, 2021. https://www.it.lut.fi/project/imageret/diaretdb1, accessed on Jan. 29, 2021.

[32] STARE: Structured Analysis of the Retina, 2021. http://cecas.clemson.edu/~ahoover/stare, accessed on Jan. 29, 2021.

[33] Messidor, 2021. http://messidor.crihan.fr/index-en.php, accessed on Jan. 29, 2021.

[34] Kaggle Diabetic Retinopathy Detection Competition, 2021. https://www.kaggle.com/c/diabetic-retinopathy-detection, accessed on Jan. 29, 2021.

[35] Niemeijer, M., Van Ginneken, B., Cree, M.J., et al. (2009). Retinopathy online challenge: Automatic detection of microaneurysms in digital color fundus photographs. IEEE Transactions on Medical Imaging, 29(1): 185-195. https://doi.org/10.1109/TMI.2009.2033909

[36] Pereira, C., Gonçalves, L., Ferreira, M. (2015). Exudate segmentation in fundus images using an ant colony optimization approach. Information Sciences, 296: 14-24. https://doi.org/10.1016/j.ins.2014.10.059

[37] Naqvi, S.A.G., Zafar, M.F., Haq, I.U. (2015). Referral system for hard exudates in eye fundus. Computers in Biology and Medicine, 64: 217-235. https://doi.org/10.1016/j.compbiomed.2015.07.003

[38] Annunziata, R., Garzelli, A., Ballerini, L., Mecocci, A., Trucco, E. (2016). Leveraging multiscale Hessian-based enhancement with a novel exudate inpainting technique for fundus vessel segmentation. IEEE Journal of Biomedical and Health Informatics, 20(4): 1129-1138. https://doi.org/10.1109/JBHI.2015.2440091

[39] Imani, E., Pourreza, H.R. (2016). A novel method for fundus exudate segmentation using signal separation algorithm. Computer Methods and Programs in Biomedicine, 133: 195-205. https://doi.org/10.1016/j.cmpb.2016.05.016

[40] Kaur, J., Mittal, D. (2018). Estimation of severity level of non-proliferative diabetic retinopathy for clinical aid. Biocybernetics and Biomedical Engineering, 38(3): 708-732. https://doi.org/10.1016/j.bbe.2018.05.006

[41] Mo, J., Zhang, L., Feng Y. (2018). Exudate-based diabetic macular edema recognition in fundus images using cascaded deep residual networks. Neurocomputing, 290: 161-171. https://doi.org/10.1016/j.neucom.2018.02.035

[42] Manda, M.P., Hyun, D. (2021). Double thresholding with sine entropy for thermal image segmentation. Traitement du Signal, 38(6): 1713-1718. https://doi.org/10.18280/ts.380614