Differentiation Filtered Feature Selection Method for Diabetic Retinopathy Detection from Fundus Images

Karthic Ramasamy Shunmugaraj*, Aravind Britto Karupanan Raj, Ragumadhavan Ramachandran, Vimala Rayappan

Department of Electronics and Communication Engineering, PSNA College of Engineering and Technology, Dindigul 624622, India

Department of Electrical and Electronics Engineering, PSNA College of Engineering and Technology, Dindigul 624622, India

Corresponding Author Email: karthicrs@gmail.com

Page: 3237-3247 | DOI: https://doi.org/10.18280/ts.420616

Received: 2 February 2025 | Revised: 21 August 2025 | Accepted: 17 October 2025 | Available online: 31 December 2025

© 2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Fundus images processed through computer-aided methods are useful in the detection, prediction, and diagnosis of Diabetic Retinopathy (DR). Refined computing over fundamental image features helps identify flaws in fundus inputs. In this article, a Differentiation-Responsive Feature Selection (DRFS) method is proposed to improve DR detection precision. The detection computations are formulated by selecting precise pixels from non-linear distributions. In this computing process, a graph neural network with a differentiation function is employed. The differentiation component filters out features that contribute little to distribution detection. The detection remains flexible for pixel distributions with high false rates, on which the graph neural network is trained. Therefore, the feature set for detecting DR through comparative decisions is enlarged by identifying sufficient distributions. The selected features are comparatively computed against the training images to detect DR from input images irrespective of their size. The proposed method improves accuracy by 14.92% and precision by 13.68%, and reduces the false positive rate by 9.72% for the varying features.

Keywords: 

DR, differentiation function, feature selection, graph neural network

1. Introduction

Diabetic retinopathy is a leading cause of vision loss worldwide, caused by diabetes-related damage to retinal blood vessels. Fundus imaging is a non-invasive approach for capturing detailed images of the retina, which allows the detection of microaneurysms, hemorrhages, and other disease-related abnormalities [1, 2]. Recent advances in imaging technology, such as ultra-widefield and handheld fundus cameras, have improved the ability to detect central and peripheral retinal lesions. Such devices enable thorough retinal evaluations, facilitating early diagnosis and intervention [3, 4]. High-resolution imaging has enhanced the clarity and precision of detecting small retinal changes. The use of fundus imaging in routine screenings allows better tracking of disease progression and evaluation of treatment efficacy [5]. Early diagnosis using fundus inputs reduces the chance of blindness through precautionary medication. The availability of portable fundus cameras has increased screening capacity, especially in remote and underserved areas, improving overall healthcare outcomes for diabetic patients [4, 6].

Machine learning (ML) methods have become essential for automating diabetic retinopathy identification from fundus photographs. Such algorithms, particularly convolutional neural networks, are highly effective at detecting microaneurysms, exudates, and neovascularization [7, 8]. Advanced models combine standard image processing with deep learning, which increases detection accuracy while decreasing computational complexity [9]. Transfer learning techniques increase efficiency by adapting pre-trained models to specific datasets, reducing the requirement for substantial training data. Automated systems allow uniform and objective grading of retinal anomalies, overcoming the limitations of manual examinations [10, 11]. ML is useful for large-scale screening programs because it can process data in real time. The algorithms improve diagnostic precision while decreasing false positives, which leads to dependable and scalable solutions for early detection. ML in diabetic retinopathy screening promotes effective healthcare delivery, especially in resource-constrained areas, and aids in preventing vision loss through timely interventions [8, 12]. The contributions of the article are:

• Proposal and description of the novel Differentiation-Responsive Feature Selection (DRFS) method for improving the accuracy of the diabetic retinopathy detection process using fundus images.

• The application and discussion of the graph neural network for responsive feature identification by analyzing linear and non-linear pixel distributions.

• The presentation of the performance assessment using experimental results and comparative metrics to evaluate and validate the proposed method’s performance.

The article is organized as follows: Section 2 discusses the related works and identifies the problem; Section 3 describes the proposed method with mathematical equations, diagrammatic illustrations, and explanations; Section 4 presents the experimental and comparative performance assessments; and Section 5 concludes with limitations and future inclusions.

2. Related Works

Mohanty et al. [13] considered an improved robust fuzzy local information k-means (RFLICM) clustering algorithm for diabetic retinopathy (DR) detection. The clustering approach is employed here to localize the information parameters for detection services. The designed algorithm analyzes the parameters and detects the exact cause and location of DR in patients. The designed algorithm maximizes the overall accuracy of DR detection.

Wang et al. [14] introduced an early DR detection model using deep learning (DL) and explainable artificial intelligence (XAI). A convolutional neural network (CNN) algorithm is used in the model that extracts important patterns from datasets. The extracted patterns and features are used to detect and predict the DR for healthcare applications. The introduced model elevates the precision and accuracy level of DR detection.

An improved version of the model proposed by Mohanty et al. [13] was developed by Nazir et al. [15] using a deep neural network (DNN). The proposed model was employed as an early-stage diabetic retinopathy (DR) detection model for healthcare systems. The model uses retinal fundus images as inputs that produce specialized information for DR detection services. The developed model increases the accuracy and precision parameters.

Shamrat et al. [16] proposed an advanced DNN model for DR detection using fundus images. An analysis technique is implemented in the model to analyze the necessary features from fundus images. A median filter is used here to filter the important data from the dataset, which reduces the detection complexity. Experimental results show that the proposed model improves the accuracy and reliability level of the DR detection process.

Atta et al. [17] introduced a hybrid intelligent approach for DR detection. Support vector machine (SVM) and k-nearest neighbor (KNN) algorithms are employed in the approach to evaluate the exact cause of DR. The approach examines the fundus images, which produce significant information for DR detection services. The introduced approach achieves high precision.

Özbay [18] designed an active DL method for DR detection via segmented fundus images. An artificial bee colony (ABC) algorithm is used in the method to analyze the histogram features from the given images. Complex retinal features that are presented in the images are analyzed to produce relevant data for further processes. The designed method elevates the performance and efficiency of DR detection.

Liu et al. [19] proposed a transfer learning (TL)-based DR detection method. The method is used for early screening, which reduces the computational complexity of the disease diagnosis process. The grayscale features of the images are identified, producing relevant data for disease detection. Compared with other methods, the proposed method enhances the specificity, efficiency, and accuracy of DR screening.

Naz et al. [20] developed a deep convolutional generative adversarial network (DCGAN) algorithm for DR recognition systems. A deep embedded clustering (DEC) technique is used in the algorithm to evaluate the features and patterns from given input images. The algorithm reduces the computational cost and latency rate of the systems. The developed DCGAN algorithm improves the performance and accuracy level of the systems.

Mahmood et al. [21] proposed an enhanced hybrid approach for fundus images based on Liu et al. [19]. The approach also detects abnormal lesions during screening services and classifies the exact types and classes of DR in healthcare systems.

Jian et al. [22] proposed a triple-cascade CNN model (Triple-DRNet) for DR detection. A traditional CNN algorithm is used here that extracts important features and patterns from fundus images. The extracted features are used as input, which minimizes the computational cost of the proposed CNN model. Compared with other methods, the proposed model enhances the accuracy and effectiveness of detection systems.

Jabbar et al. [23] developed a deep transfer learning-based automated DR detection model for remote areas. Retinal fundus images are used as inputs, producing feasible information for DR detection. The model uses a processing unit to process the data presented in the images. The model also eliminates unwanted features from the dataset and improves accuracy.

Liu and Chi [24] designed a cross-lesion attention network for DR grading using fundus images. The designed method uses an adaptive lesion-aware (ALA) module that filters the spatial and temporal features from the fundus images. The method provides exact grading of DR for early disease diagnosis services.

The pixel distribution analysis of fundus inputs requires proper differentiation of textural features. Differentiation across minimum and maximum intensities and variable features is required to enhance the accuracy of infected-region detection. In these cases, linearity is disturbed across pixels where feature switching is frequent, which increases computing time. To address these issues arising from invariable and variable feature classification, a novel responsive feature selection method is proposed. This feature selection method achieves reliability by exploiting both differentiation and non-differentiation features while processing fundus inputs.

3. Proposed DRFS

Early detection and diagnosis of Diabetic Retinopathy (DR) are important to prevent severe complications. Fundus imaging, which acts as a non-invasive diagnostic tool, helps detect retinal abnormalities. In this paper, refined image processing and feature selection techniques accurately identify DR-related infection. The DRFS method enhances DR detection precision by combining a graph neural network with a differentiation function. It filters non-linear pixel distributions to identify critical features based on flexible pixel detection, ensuring high accuracy and adaptability to various input image sizes. The proposed DRFS is presented in Figure 1.

Figure 1. Proposed DRFS method

The responsive feature selection is performed using a non-linear pixel distribution from the extracted ones. The differentiation output is reliable for detecting pixel distribution based on intensity, wherein the responsiveness is accounted for based on mapping. In the mapping process, the invariant feature that distinguishes the pixels is identified for detecting infected regions. The responsiveness of a feature is identified from its mapping and intensity verification using a similarity measure. This is the novelty of the proposed method: it clearly differentiates features that coexist with multiple features in normal or overlapping regions. From the fundus image, the feature extraction for Diabetic Retinopathy includes vascular changes, denoted as $ves$ for $k$ regions, and $cb$ represents the color-based features that identify abnormalities. The texture of the image is termed $tx$, which helps to differentiate the infected and non-infected regions. The size of the area is denoted as $sz$, which indicates the varying stages of DR. The term size refers to the pixel regions that represent the infected part. This is represented by the $(x, y)$ regions of the input image; the segmenting region resembles these specific pixels through different representations. In the extraction process, the size is unknown and represents the size of the entire image, whereas segmentation shrinks the actual representation to maximize infected-region detection without false positives. The direction is represented as $dr$, which helps to detect the boundaries of the image. The following equation defines $ft_{ext}$, which extracts features from the fundus image at the $(x, y)$ pixel coordinates.

$ft_{ext}=\sum_{j=1}^k\left[ves_j(x, y) \times cb_j(x, y) \times tx_j(x, y)\right]+sz_j(x, y)+dr_j(x, y)$          (1)

Here, the identified features help to reduce the impact of irrelevant regions, which otherwise complicate the diagnosis of Diabetic Retinopathy. This enables the system to adapt to various fundus images, improving detection accuracy and allowing the DRFS method to detect Diabetic Retinopathy with high precision. The feature direction relies on the pixel position from (1, 1) to $(x, y)$; only the extraction from a definite pixel position is considered to identify the direction. In particular, the direction factor refers to the pixel traversal performed until the maximum pixel is reached. Such traversal serves to maximize feature extraction and region detection and to improve segmentation accuracy. Besides feature extraction, texture and intensity factors are considered to identify the pixels and regions. Depending on the image size, the direction traversal time varies. A conventional feature extraction method extracts and processes the available features; this method follows the same procedure with pre-computed extraction. The pre-computation for high intensity and low texture (patterns) is the constraint in handling features based on the regions. From this feature extraction, the following $nl_{transform}$ and $nl_{inter}$ equations evaluate the non-linear transformation and interaction.

$nl_{transform}(x, y)=\left[\left(sz_{j-1} \times sz_j(x, y)\right)+\left(dr_{j-1} \times dr_j(x, y)\right)+ft_{ext}\right]$          (2)

$nl_{inter}=\left(\frac{1}{ft_{ext-1}-ft_{ext-2}}\right) \times\left(1+\left(nl_{transform}(x, y) \times cb_j(x, y)\right)\right)$          (3)

Eq. (2) transforms pixels to highlight the important features of the retina. The size and direction of the previous region are measured as $sz_{j-1}$ and $dr_{j-1}$, which helps to understand the distribution of infected regions relative to previous regions. This normalizes the prioritized variations and improves the identification of changes in the retina. In Eq. (3), the term $\left(\frac{1}{ft_{ext-1}-ft_{ext-2}}\right)$ inverts the difference between the consecutively extracted features $ft_{ext-1}$ and $ft_{ext-2}$. It helps to identify the relationship between various features, which often categorizes complex and simple variations in the region. The chance of $\left(ft_{ext-1}-ft_{ext-2}\right)=0$ is small because the pixels exhibit multiple features, so the result is either greater than or less than 0; a zero difference occurs only when the pixels exhibit one feature and a one-to-one mapping is made. A positive interaction identifies new edges between the features, whereas a negative interaction removes a replicated edge. This interaction between retinal image features provides accurate classification for detection and enhances the accuracy of DR detection.
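To make the computation flow of Eqs. (1)-(3) concrete, the following Python sketch evaluates them for one pixel coordinate. It is a minimal illustration, not the reference implementation: all function and variable names are hypothetical, the random arrays stand in for the $ves$, $cb$, $tx$, $sz$, and $dr$ feature maps, and a small epsilon guard is added against the zero-denominator case.

```python
import numpy as np

def extract_features(ves, cb, tx, sz, dr, x, y):
    """Eq. (1): combine vascular, color, and texture features over k regions,
    then add the size and direction terms. Inputs are (k, H, W) arrays; the
    trailing sz/dr terms are read here as those of the current (last) region."""
    k = ves.shape[0]
    ft_ext = sum(ves[j, x, y] * cb[j, x, y] * tx[j, x, y] for j in range(k))
    return ft_ext + sz[k - 1, x, y] + dr[k - 1, x, y]

def nl_transform(sz, dr, ft_ext, j, x, y):
    """Eq. (2): non-linear transformation linking the previous region's size
    and direction (index j-1) to the current region's values at (x, y)."""
    return (sz[j - 1, x, y] * sz[j, x, y]) + (dr[j - 1, x, y] * dr[j, x, y]) + ft_ext

def nl_interaction(ft_prev1, ft_prev2, nlt, cb_j, eps=1e-8):
    """Eq. (3): interaction term; eps guards the zero-difference case that the
    text argues is rare (an implementation choice, not part of Eq. (3))."""
    return (1.0 / (ft_prev1 - ft_prev2 + eps)) * (1.0 + nlt * cb_j)

# Hypothetical usage with k = 3 regions on a 224x224 feature map:
ves, cb, tx, sz, dr = (np.random.rand(3, 224, 224) for _ in range(5))
ft = extract_features(ves, cb, tx, sz, dr, 10, 12)
nlt = nl_transform(sz, dr, ft, 1, 10, 12)
print(nl_interaction(ft, 0.9 * ft, nlt, cb[1, 10, 12]))
```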

In DR detection, the GNN analyzes complex relationships and interactions between the various features extracted from fundus images by processing them through a message-passing mechanism. The complex relationships refer to the representation of appropriate features and their regions with different shared pixels. Mapping pixels and regions one-to-one is therefore tedious, as a single pixel exhibits different features, so a one-to-many mapping is inferred. Besides, the neighboring region also shares common features from different pixels, such that the feature filtering relies on the specific mapping made. Thus, the complexity of identifying false rates is high, and the one-to-many representation must be simplified explicitly. Here, each region aggregates information from its neighboring regions to update its feature representation. This captures higher-order interactions and correlations between blood vessels and exudates, which are indicative of DR. The layers continuously pass messages to improve the model's ability to detect infected regions. It filters out irrelevant features and concentrates on the most distinguishing ones to reduce false positives. The GNN's ability to process non-linear relationships between features enhances the detection of subtle retinal abnormalities. The differentiation part is responsible for filtering features that contribute little to distribution detection; the filtered features $ft_{filter}$ and the differentiation $ft_{diff}(x, y)$ are expressed in the equations below:

$ft_{filter}=\left[ft_{ext} \times\left(nl_{inter}-\widehat{nl}_{inter}\right) \times\left(1-nl_{transform}(x, y)\right)\right]$          (4)

$ft_{diff}(x, y)=\left\|ft_{ext-2}-ft_{ext-1}\right\|+\exp \left(-\left|dis_{ft}(x+j, y+j)-ft_{filter}(x, y)\right|\right)$          (5)

The obtained features are filtered as $ft_{filter}$ based on the feature extraction combined with the non-linear terms. Here, the term $\left(nl_{inter}-\widehat{nl}_{inter}\right)$ identifies the difference between the actual non-linear interaction and the change in non-linear interaction, denoted $\widehat{nl}_{inter}$. The term $\left(1-nl_{transform}(x, y)\right)$ analyzes the impact of the non-linear transformation. Together, this process helps to filter features that cause significant changes. The term $\left\|ft_{ext-2}-ft_{ext-1}\right\|$ evaluates the difference between two different extracted features for accurate anomaly detection. The feature filtering and differentiation in Eqs. (4) and (5) are used to define the uniform and non-uniform mapping between the features.
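A minimal sketch of Eqs. (4) and (5), under the assumptions that $dis_{ft}$ is available as a precomputed 2D dissimilarity map and that the norm in Eq. (5) reduces to an absolute value for scalar features (all names below are hypothetical):

```python
import numpy as np

def ft_filter(ft_ext, nl_inter, nl_inter_hat, nlt):
    """Eq. (4): retain the part of the extracted feature whose interaction
    deviates from its expected value, scaled by the transformation impact."""
    return ft_ext * (nl_inter - nl_inter_hat) * (1.0 - nlt)

def ft_diff(ft_prev2, ft_prev1, dis_ft, ft_filt, x, y, j):
    """Eq. (5): feature-change magnitude plus an exponential closeness term
    between the dissimilarity at the shifted pixel (x+j, y+j) and the
    filtered feature at (x, y)."""
    change = abs(ft_prev2 - ft_prev1)
    closeness = np.exp(-abs(dis_ft[x + j, y + j] - ft_filt))
    return change + closeness
```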

This filtering is performed to reduce the number of transform outputs influencing the true rate of the region. The computation also identifies differentiable outputs by separating erroneous and positive features that describe the pixels (independent and shared). These equations are therefore distinct in identifying independent features and their corresponding differences, if any. The feature filtering process is illustrated in Figure 2.

Figure 2. Feature filtering process

The $ft_{filter}$ process is presented diagrammatically in Figure 2. The extracted features are used to identify $tx \in sz$. First, $dis_{ft}$ is computed for $cb$ under distinguishable differentiation and linear (assumed) distributions. However, if $\left[\frac{1}{ft_{ext-1}-ft_{ext-2}}\right]>0$, then the normalization process is pursued. If the normalization does not fit $nl_{transform}$, then the linearity is disturbed, for which the $ft_{diff}(x, y)$ estimation is required. Depending on the major difference between $(x+j, y+j)$, the $\widehat{nl}_{inter}$ is detected, and therefore $px_{flex}(x, y)$ is only applicable for those $ft_{ext}$. Therefore, the detection of the $k \in cb$ and $dr$ regions is eased through filtering. These features are required to concatenate pixel distributions based on intensity. Such grouping is useful in identifying invariant features even after normalization. The pixel variation between specific regions is computed based on dissimilarities in features and is expressed as $dis_{ft}(x+j, y+j)$. It helps to identify regions with high variations based on pixel coordinates, which is crucial for identifying DR features. In DR detection, the differentiation process filters out features that have low variance and contribute less to detecting DR infection. This enhances both the accuracy and efficiency of the detection process. The following equation defines $px_{flex}(x, y)$, which computes how flexible the detection is for pixel distributions with high false rates.

$px_{flex}(x, y)=\sum_{j=1}^k\left(dr_j(x, y) \times ft_{ext}\right)+\left[dis_{ft}-\frac{1}{px_{inten}-ft_{diff}(x, y)}\right]$          (6)

The direction of boundaries is combined with the extracted features to monitor changes that depend on the reliability of the pixel. The term $\left[dis_{ft}-\frac{1}{px_{inten}-ft_{diff}(x, y)}\right]$ incorporates the pixel intensity, denoted $px_{inten}$, to analyze how quickly variations occur based on intensity. It determines how the features of pixel $(x, y)$ contribute to the overall detection. This identifies whether the pixels are reliable or unreliable in a particular region with a high false positive rate, and helps to filter the regions that cause high false rates during detection. The system can adjust the regions of feature extraction based on the characteristics of the pixel distribution, highlighting high variability and changes in pattern. It improves the detection of DR even in the presence of highly varying feature regions. The stability of the pixel distribution based on pixel intensity is expressed as $f\left(px_{flex}\right)$ in the following.

$f\left(px_{flex}\right)= \begin{cases}\left(dr_j(x, y)+dis_{ft}\right) & \forall px_{inten} \geq x_{\max } \\ \left(dr_j(x, y)+dis_{ft}-ft_{filter}\right) & \forall px_{inten}<x_{\min }\end{cases}$          (7)
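A sketch of the piecewise stability rule in Eq. (7), together with the flexibility term of Eq. (6); the thresholds $x_{\min}$ and $x_{\max}$ and all names are placeholders, and the behavior for intermediate intensities, which Eq. (7) leaves undefined, is returned as None:

```python
def px_flex(dr_map, ft_ext, dis_ft, px_inten, ftd, k, x, y, eps=1e-8):
    """Eq. (6): flexibility of detection at (x, y) for high-false-rate
    pixel distributions (eps guards the denominator)."""
    directional = sum(dr_map[j, x, y] * ft_ext for j in range(k))
    return directional + (dis_ft - 1.0 / (px_inten - ftd + eps))

def stability(dr_j, dis_ft, ft_filt, px_inten, x_min, x_max):
    """Eq. (7): intensities at or above x_max count as reliable DR-like
    evidence; intensities below x_min are unreliable, so the filtered
    feature is subtracted."""
    if px_inten >= x_max:
        return dr_j + dis_ft
    if px_inten < x_min:
        return dr_j + dis_ft - ft_filt
    return None  # intermediate band: Eq. (7) gives no stability decision
```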

The pixel distribution is considered reliable for DR detection when $px_{inten} \geq x_{\max }$, based on $\left(dr_j(x, y)+dis_{ft}\right)$, which indicates that the features are similar to DR. A high variation in pixel intensity produces a significant change in pixel distribution, which helps indicate abnormalities in the fundus image. An unreliable feature is identified when $px_{inten}<x_{\min }$, with $\left(dr_j(x, y)+dis_{ft}-ft_{filter}\right)$ indicating false positive regions with uniform pixel distribution. Such unreliable regions are filtered out of the DR detection. The reliable and unreliable regions are analyzed based on the maximum and minimum threshold values, termed $x_{\max }$ and $x_{\min }$. This helps to minimize irrelevant features and regions that do not contain any DR infection. The GNN trains the features at layer $l$ based on the pixel data, which is computed in the equation below.

$ft_{l+1}(x, y)=ft_l(x, y)+\sum_{k \in(x, y)}\left(ft_{filter}+ft_{diff}\right) \times\left(px_{flex}(x, y)-dis_{ft}\right)$          (8)

The extracted feature information from layer $l$ is trained using the pixels in each region. The new layer is updated from the previous information and is denoted as $ft_{l+1}(x, y)$. The continuous iteration of the GNN updates each layer based on its existing feature values and the neighboring feature values. This allows the system to capture the relationships between features of different regions, which is important for the accurate detection of DR-related abnormalities in the retina. The following equation, $E_{decision}(x, y)$, analyzes how the feature set for detecting DR through comparative decisions is enlarged by identifying sufficient distributions.

$E_{decision}(x, y)= \begin{cases}\left(ft_{diff}+ft_l(x, y)\right) \times\left|nl_{inter}-nl_{transform}\right|+ft_{filter} \\ ft_{ext}\left(px_{flex}(x, y)+dis_{ft}\right) \quad \forall(x, y) \in(1,2, \ldots, k)\end{cases}$          (9)

The above equation compares the distribution of various extracted features in the fundus image with their pixels. A feature is considered sufficient with $E_{decision}=1 \forall(x, y) \in(1,2, \ldots, k)$, ensuring it is reliable for detecting DR-related abnormalities. The decision prioritizes features based on their importance, which is learned during training. This enables the system to adjust based on the features that meet the requirement, and its performance improves as sufficient discriminative features increase. This decision helps classify DR and non-DR regions from various features. The decision model in Eq. (9) considers the filtering and distribution flexibility across various pixels. The one-to-one and one-to-many mapping using the GNN requires the precise classification of differentiated and extracted features through pixel-to-region detection. For an overlapping pixel, region differentiation is mandatory, whereas for an independent feature, only one-to-one mapping is required. These differences are highlighted in this equation, which is required to train the learning model. From the above decision, the invariant features are evaluated as $inv_{ft}(x, y)$ in the equation below.

$inv_{ft}(x, y)=px_{flex}(x, y)+\left(\frac{px_{inten}}{ft_{diff}-dis_{ft}}\right) \times\left(ft_{ext-2}-ft_{ext-1}\right) \quad \forall E_{decision}=0$          (10)

$DR_{infection}(x, y)=ft_{ext}+\left(nl_{inter}+px_{flex}(x, y)\right) \times E_{decision}+inv_{ft}(x, y)$          (11)

Eq. (10) focuses on patterns that indicate the DR regions rather than the irrelevant regions. It combines features that are consistent and reliable, even in varying input image conditions. The GNN model for invariable feature detection is presented in Figure 3.

In Figure 3, the $inv_{ft}$ detection process is described using the $(x, y, j)$ variants. The $px_{inten}$ at $(x, y)$ is the first assumed intensity for categorizing $x_{\min }$ and $x_{\max }$. This categorization is required to perform $E_{decision}(x, y)$ under the multiple $dis_{ft}$ identified. The first mapping using $k \forall\left(ft_{diff}, dis_{ft}\right)$ is useful in identifying $x_{\max }$ based on $\left(ft_{ext-1}-ft_{ext-2}\right)$, provided $f\left(px_{flex}\right)$ is defined as either high or low. These output cases are matched with $i$ and $j$ until $(x+j)=(y+j)$ is true, after which the mapping is repeated without $\left(dis_{ft}>0\right)$. The $(i=j)$ cases are then used to update $px_{inten}$ for the $x_{\min }$ or $x_{\max }$ classification. Using the consecutive mapping, the graph neural network layers for $inv_{ft}$ are defined. Features with small variations are considered stable, while large variations may lower the value of an invariable feature, leading to instability. In the graph representation, the features, pixels, and $x_{\min}$ constitute the structure. These elements form the nodes, and the mapping (one-to-one or one-to-many) forms the edges based on the different regions occupied. The final edges are formulated based on a similarity index, which represents the intensity and invariable detection identified between the nodes. It enhances the system's ability to detect DR based on stable features. Eq. (11) identifies the infected regions by integrating the previous factors. A high value of $DR_{infection}(x, y)$ indicates an infected region, and a lower value indicates a healthy, non-infected region.
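A compact sketch tying Eqs. (8)-(11) together for a single pixel (a simplified reading under assumed data structures: neighbors is a list of region keys and the per-region quantities are stored in dictionaries; none of these names come from the paper):

```python
def gnn_layer_update(ft_l, neighbors, ft_filt, ftd, pxf, dis_ft):
    """Eq. (8): message passing -- add an aggregate of the neighbors'
    filtered + differentiated features, weighted by the gap between
    flexibility and dissimilarity."""
    msg = sum((ft_filt[n] + ftd[n]) * (pxf[n] - dis_ft[n]) for n in neighbors)
    return ft_l + msg

def decision(ftd, ft_l, nl_inter, nlt, ft_filt):
    """Eq. (9), first branch: comparative decision value for a pixel."""
    return (ftd + ft_l) * abs(nl_inter - nlt) + ft_filt

def invariant_feature(pxf, px_inten, ftd, dis_ft, ft_prev2, ft_prev1, eps=1e-8):
    """Eq. (10): invariant feature, evaluated when the decision is 0."""
    return pxf + (px_inten / (ftd - dis_ft + eps)) * (ft_prev2 - ft_prev1)

def dr_infection(ft_ext, nl_inter, pxf, e_dec, inv_ft):
    """Eq. (11): final score; high values flag infected regions."""
    return ft_ext + (nl_inter + pxf) * e_dec + inv_ft
```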

The confusion matrix for differences in feature detection monitors the performance of the considered features. It describes how the proposed method classifies pixels or regions in fundus images as infected or non-infected. The proposed DRFS ensures that only discriminative features are utilized to achieve higher true positives and true negatives, minimizing false positives and false negatives. The existing methods often rely on linear distributions, whereas the proposed method reduces false negatives by accurately identifying subtle abnormalities in early detection (Figure 4).

Figure 3. GNN model for invariable feature detection

Figure 4. Confusion matrix for difference detection

Figure 5. ROC for invariable feature detection

The ROC plots the true positive rate against the false positive rate to evaluate the specificity of invariable feature detection. The ROC indicates high performance by the proposed method in distinguishing DR-infected regions from healthy areas. The invariable features extracted through differentiation, $inv_{ft}(x, y) \forall E_{decision} \in(1,2, \ldots, k)$, allow the proposed method to maintain a consistent true positive rate even at low false positive rates. The existing methods often suffer from high false rates due to their inability to adapt to non-linear distributions, leading to an inconsistent ROC curve. The proposed approach shows significant improvements in detecting invariable features across varied image conditions compared to the existing methods (Figure 5).
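Confusion-matrix and ROC summaries of the kind shown in Figures 4 and 5 can be reproduced for any scored pixel map with standard scikit-learn utilities. The snippet below is an illustrative evaluation harness on synthetic labels and scores, not the authors' experimental pipeline; the 0.5 decision threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import auc, confusion_matrix, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)                 # 1 = infected, 0 = healthy
y_score = y_true * 0.6 + rng.random(1000) * 0.4   # stand-in DR_infection values

tn, fp, fn, tp = confusion_matrix(y_true, (y_score > 0.5).astype(int)).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
specificity = tn / (tn + fp)
false_positive_rate = fp / (fp + tn)

fpr, tpr, _ = roc_curve(y_true, y_score)          # ROC as in Figure 5
print(f"acc={accuracy:.3f} prec={precision:.3f} spec={specificity:.3f} "
      f"FPR={false_positive_rate:.3f} AUC={auc(fpr, tpr):.3f}")
```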

4. Performance Assessment

4.1 Dataset and experimental results

The performance assessment is presented using MATLAB outputs extracted using specific dataset inputs. The input fundus images are fetched from the APTOS 2019 Blindness Detection dataset [25], which supports early diagnosis of the diabetic retinopathy that results in vision blindness. The images are collected from different clinics and devices, constituting 13K+ images totaling about 20 GB. The subset used in this article provides 3662 images for training and 180 images for testing. DR is categorized as no infection (1805 images), mild (370 images), moderate (999 images), severe (193 images), and proliferative (295 images). The image size is 224 $\times$ 224 pixels, a resolution for which the maximum number of extractable features is 14. The proposed GNN is modeled for a maximum of 1300 iterations and 3 complete epochs, wherein the learning rate is varied from 0.8 to 1.0 within a single epoch. The experimental setup is deployed on a standalone computer with a 2.1 GHz processor, 8 GB of physical memory, and a standard 15.6-inch display. The sample outputs obtained are presented in Table 1 for the inputs considered from the dataset.
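As a quick sanity check, the class distribution above can be verified against a local copy of the dataset [25]; the folder names and path below are assumptions about the Kaggle layout, and the expected counts come from the description above.

```python
from pathlib import Path

root = Path("diabetic-retinopathy-224x224")  # hypothetical local dataset root
expected = {"No_DR": 1805, "Mild": 370, "Moderate": 999,
            "Severe": 193, "Proliferate_DR": 295}

for name, count in expected.items():
    found = len(list((root / name).glob("*.png")))
    print(f"{name}: found {found}, expected {count}")
```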

Table 1. Sample input and output

Input | $tx$ | $px_{inten}$ | $dis_{ft}$ | $DR_{infection}$

4.2 Comparative assessment

The comparative performance assessments are discussed using accuracy, precision, specificity, false positive rate, and computing time metrics. The number of features (1-14) and the different methods are the variables used in this comparative assessment. The proposed DRFS method is compared with the existing EDCGAN [20], MaFCM [13], and WE-DNN [15] methods. These baselines were selected for their heterogeneous approaches to DR detection and segmentation. The first method employs adversarial learning and clustering to identify pattern variations. The second uses a deep learning-based clustering approach to locate maximum feature differences while avoiding false rates. The third employs a deep neural network to maximize detection and segmentation accuracy by training the network using a differentiation factor. The functions suggested in these proposals carry the individual flaws mentioned earlier; the proposed method adopts these functions deliberately, with the modifications described in the proposed section. Some functions, such as differentiation and grouping, are similar to parts of the proposed method. Therefore, the purpose of selecting these methods is to mitigate the identified errors and to maximize infected-region detection precision. Besides, the problem of pattern differentiation and feature representation is mitigated through pre-representation using the GNN. Such representations are useful in classifying feature-based pixels and their regions regardless of their overlapping nature.

4.2.1 Accuracy

Accuracy measures the overall correct detection of the infected region as the ratio of correctly classified true positives and true negatives to the total number of DR cases. A higher accuracy demonstrates how well the DRFS method identifies DR features from the fundus images. The pixel relationships based on continuous training, $ft_{l+1}(x, y)=ft_l(x, y)+\sum_{k \in(x, y)}\left(ft_{filter}+ft_{diff}\right)$, are evaluated using the GNN to ensure reliable decisions. The method refines its decision-making process through precise pixel selection from non-linear distributions. The analysis of invariable features contributes to high accuracy even in varying complex scenarios, which helps detect subtle abnormalities in the fundus image. The abnormalities detected with these steps improve infected-region detection with high accuracy (Figure 6).

Figure 6. Accuracy

4.2.2 Precision

Precision is the proportion of true positive detections, i.e., correctly identified DR regions, among all positive detections. A high precision score indicates how well the DRFS method minimizes false positives by filtering irrelevant pixel distributions. The differentiation function $ft_{diff}(x, y)$ built on $\left\|ft_{ext-2}-ft_{ext-1}\right\|$ within the graph neural network identifies features indicating pixels that contain DR-related abnormalities. The method focuses on relevant features and avoids non-infected regions, enhancing system performance and reducing the misclassification of healthy regions as infected. The misclassification reduction is pursued in all iterations until a maximum is achieved. Such a trial using the training iterations retains the accuracy, achieving high precision (Figure 7).

Figure 7. Precision

4.2.3 Specificity

Specificity is the model's ability to correctly identify non-infected regions as the proportion of correctly classified healthy regions out of all features from the fundus image. The high specificity of the proposed DRFS method reflects its capability to ignore non-DR features and accurately classify healthy regions, maximizing the correctly classified non-DR regions. This is achieved through $DR_{infection}(x, y)$ in Eq. (11) via the feature filtering process in the graph neural network, which reduces false detections in areas with high variability. High specificity ensures the reliability of the proposed method in screening a large number of images accurately. This enhances accurate classification and reduces the misclassification of infected regions in complex images. As the classification increases, the mapping and feature representation factors reliably uphold the specificity by identifying true positives (Figure 8).

Figure 8. Specificity

4.2.4 False positive rate

The false positive rate is analyzed as the proportion of healthy regions that are incorrectly classified as infected. The proposed method ensures a lower false positive rate, reliably avoiding the false alarms $\left(dr_j(x, y)+dis_{ft}-ft_{filter}\right) \forall px_{inten}<x_{\min }$ that lead to unnecessary deviations in detection. The incorporation of invariant features with dynamic pixel selection from non-linear distributions helps ensure that ambiguous regions are not mistakenly flagged as infected. The features are compared and trained using the graph neural network to reduce the inclusion of irrelevant features, which leads to fewer false positives (Figure 9).

Figure 9. False positive rate

4.2.5 Computing time

The time taken by the system to process the input and deliver predictions during DR detection is its computing requirement. The proposed method ensures a low computing time, indicating its computational efficiency. The DRFS method achieves this by optimizing feature selection and reducing redundant computations. It combines the pixels by filtering non-essential distributions from the neighboring regions. The graph neural network can train on high-pixel feature distributions while minimizing unnecessary computations. This makes the detection faster and maintains robustness with a limited time requirement, which is kept as a target across increasing iterations even after a new feature is extracted (Figure 10). The performance assessment results are summarized in Table 2 for the varying features.
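Per-image computing time of the kind reported in Figure 10 can be measured with a simple wall-clock harness; detect_dr below is a placeholder for the full DRFS pipeline, and the warm-up count is arbitrary.

```python
import time

def mean_latency(detect_dr, images, warmup=3):
    """Average wall-clock seconds per image for a detection callable."""
    for img in images[:warmup]:     # warm-up runs, excluded from timing
        detect_dr(img)
    start = time.perf_counter()
    for img in images:
        detect_dr(img)
    return (time.perf_counter() - start) / len(images)
```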

Figure 10. Computing time

Table 2. Performance assessment results for features

Metrics | EDCGAN | MaFCM | WE-DNN | DRFS
Accuracy | 0.898 | 0.911 | 0.932 | 0.9634
Precision | 0.901 | 0.923 | 0.948 | 0.9696
Specificity | 0.902 | 0.924 | 0.941 | 0.9551
False Positive Rate | 0.106 | 0.088 | 0.076 | 0.0657
Computing Time (s) | 2.65 | 2.04 | 1.239 | 0.9059

The proposed method improves accuracy by 14.92%, precision by 13.68%, and specificity by 13.11%, and reduces the false positive rate by 9.72% and the computing time by 9.01%.

5. Conclusion

Using a graph neural network with a differentiation function, the DRFS method enhances the detection of Diabetic Retinopathy (DR). It significantly improves detection precision and minimizes false positives, even for images with high distribution variability. This approach not only improves the detection process but also provides reliable diagnostic support for healthcare with high accuracy and precision. The DRFS method improves the detection of infected and non-infected regions, leading to better patient outcomes. The proposed method achieves 15.7% higher accuracy, 16.7% higher precision, and 11.2% fewer false positives compared to the other existing methods.

The textural pattern differentiation relies on multiple color and contour features, for which pre-processing using differentiated pixels is required. This requirement is a limitation of the proposed DRFS method; therefore, the pre-processing will be augmented with contour classification in future work. The trials of limited feature and pattern differentiation required more training iterations, and the target epochs vary with the iterations; as a result, the trials incurred more training time and memory consumption.

References

[1] Ikram, A., Imran, A., Li, J., Alzubaidi, A., Fahim, S., Yasin, A., Fathi, H. (2024). A systematic review on fundus image-based diabetic retinopathy detection and grading: Current status and future directions. IEEE Access, 12: 96273-96303. https://doi.org/10.1109/ACCESS.2024.3427394

[2] Xiao, Y., Dan, H., Du, X., Michaelide, M., Nie, X., Wang, W., Zheng, M., Wang, D., Huang, Z., Song, Z. (2023). Assessment of early diabetic retinopathy severity using ultra-widefield clarus versus conventional five-field and ultra-widefield optos fundus imaging. Scientific Reports, 13(1): 17131. https://doi.org/10.1038/s41598-023-43947-5

[3] Midena, E., Zennaro, L., Lapo, C., Torresin, T., Midena, G., Frizziero, L. (2023). Comparison of 50° handheld fundus camera versus ultra-widefield table-top fundus camera for diabetic retinopathy detection and grading. Eye, 37(14): 2994-2999. https://doi.org/10.1038/s41433-023-02458-3

[4] Yang, Y., Li, F., Liu, T., Jiao, W., Zhao, B. (2023). Comparison of widefield swept-source optical coherence tomographic angiography and fluorescein fundus angiography for detection of retinal neovascularization with diabetic retinopathy. BMC Ophthalmology, 23(1): 315. https://doi.org/10.1186/s12886-023-03073-2

[5] Haq, N.U., Waheed, T., Ishaq, K., Hassan, M.A., Safie, N., Elias, N.F., Shoaib, M. (2024). Computationally efficient deep learning models for diabetic retinopathy detection: A systematic literature review. Artificial Intelligence Review, 57(11): 309. https://doi.org/10.1007/s10462-024-10942-9

[6] Liu, Z., Han, X., Gao, L., Chen, S., Huang, W., Li, P., Wu, Z., Wang, M., Zheng, Y. (2024). Cost-effectiveness of incorporating self-imaging optical coherence tomography into fundus photography-based diabetic retinopathy screening. NPJ Digital Medicine, 7(1): 225. https://doi.org/10.1038/s41746-024-01222-5

[7] Batool, S., Gilani, S.O., Waris, A., Iqbal, K.F., Khan, N.B., Khan, M.I., Eldin, S.M., Awwad, F.A. (2023). Deploying efficient net batch normalizations (BNs) for grading diabetic retinopathy severity levels from fundus images. Scientific Reports, 13(1): 14462. https://doi.org/10.1038/s41598-023-41797-9

[8] Kallel, F., Echtioui, A. (2024). Retinal fundus image classification for diabetic retinopathy using transfer learning technique. Signal, Image and Video Processing, 18(2): 1143-1153. https://doi.org/10.1007/s11760-023-02820-8

[9] Fang, L., Qiao, H. (2023). A novel DAG network based on multi-feature fusion of fundus images for multi-classification of diabetic retinopathy. Multimedia Tools and Applications, 82(30): 47669-47693. https://doi.org/10.1007/s11042-023-15296-y

[10] Hu, C.L., Wang, Y.C., Wu, W.F., Xi, Y. (2024). Evaluation of AI-enhanced non-mydriatic fundus photography for diabetic retinopathy screening. Photodiagnosis and Photodynamic Therapy, 49: 104331. https://doi.org/10.1016/j.pdpdt.2024.104331

[11] Rom, Y., Aviv, R., Cohen, G.Y., Friedman, Y.E., Ianchulev, T., Dvey-Aharon, Z. (2024). Diabetes detection from non-diabetic retinopathy fundus images using deep learning methodology. Heliyon, 10(16): e36592. https://doi.org/10.1016/j.heliyon.2024.e36592

[12] Ravichandran, M., Saravanan, S., Mathivanan, S.K., Rajadurai, H., Malar, M.B.A., Mallik, S., Qin, H. (2024). Adamic–Adar similarity indexed Wald boost data classification for diabetic disease diagnosis with big data. Systems and Soft Computing, 6: 200175. https://doi.org/10.1016/j.sasc.2024.200175

[13] Mohanty, C., Mahapatra, S., Acharya, B., Kokkoras, F., Gerogiannis, V.C., Karamitsos, I., Kanavos, A. (2023). Using deep learning architectures for detection and classification of diabetic retinopathy. Sensors, 23(12): 5726. https://doi.org/10.3390/s23125726

[14] Wang, Z.J., Wang, Y., Ma, C., Bao, X., Li, Y. (2025). Diabetic retinopathy classification using a multi-attention residual refinement architecture. Scientific Reports, 15(1): 29266. https://doi.org/10.1038/s41598-025-15269-1

[15] Nazir, K., Kim, J., Byun, Y.C. (2024). Enhancing early-stage diabetic retinopathy detection using a weighted ensemble of deep neural networks. IEEE Access, 12: 113565-113579. https://doi.org/10.1109/ACCESS.2024.3432867

[16] Shamrat, F.J.M., Shakil, R., Akter, B., Ahmed, M.Z., Ahmed, K., Bui, F.M., Moni, M.A. (2024). An advanced deep neural network for fundus image analysis and enhancing diabetic retinopathy detection. Healthcare Analytics, 5: 100303. https://doi.org/10.1016/j.health.2024.100303

[17] Atta, R., Mustafa, Y., Ghaida, A., Abrar, S., Joury, A., Manar, A., Mona, A., Noor, A. (2024). Diabetic retinopathy detection: A hybrid intelligent approach. Computers, Materials & Continua, 80(3): 4561. https://doi.org/10.32604/cmc.2024.055106

[18] Özbay, E. (2023). An active deep learning method for diabetic retinopathy detection in segmented fundus images using artificial bee colony algorithm. Artificial Intelligence Review, 56(4): 3291-3318. https://doi.org/10.1007/s10462-022-10231-3

[19] Liu, K., Si, T., Huang, C., Wang, Y., Feng, H., Si, J. (2024). Diagnosis and detection of diabetic retinopathy based on transfer learning. Multimedia Tools and Applications, 83(35): 82945-82961. https://doi.org/10.1007/s11042-024-18792-x

[20] Naz, H., Nijhawan, R., Ahuja, N.J., Al-Otaibi, S., Saba, T., Bahaj, S.A., Rehman, A. (2023). Ensembled deep convolutional generative adversarial network for grading imbalanced diabetic retinopathy recognition. IEEE Access, 11: 120554-120568. https://doi.org/10.1109/ACCESS.2023.3327900

[21] Mahmood, M.A.I., Aktar, N., Kader, M.F. (2023). A hybrid approach for diagnosing diabetic retinopathy from fundus image exploiting deep features. Heliyon, 9(9): e19625. https://doi.org/10.1016/j.heliyon.2023.e19625

[22] Jian, M., Chen, H., Tao, C., Li, X., Wang, G. (2023). Triple-DRNet: A triple-cascade convolution neural network for diabetic retinopathy grading using fundus images. Computers in Biology and Medicine, 155: 106631. https://doi.org/10.1016/j.compbiomed.2023.106631

[23] Jabbar, A., Naseem, S., Li, J., Mahmood, T., Jabbar, M. K., Rehman, A., Saba, T. (2024). Deep transfer learning-based automated diabetic retinopathy detection using retinal fundus images in remote areas. International Journal of Computational Intelligence Systems, 17(1): 135. https://doi.org/10.1007/s44196-024-00520-w

[24] Liu, X., Chi, W. (2023). A cross-lesion attention network for accurate diabetic retinopathy grading with fundus images. IEEE Transactions on Instrumentation and Measurement, 72: 1-12. https://doi.org/10.1109/TIM.2023.3322497

[25] Diabetic Retinopathy 224×224 (2019 Data). https://www.kaggle.com/datasets/sovitrath/diabetic-retinopathy-224x224-2019-data, accessed on Dec. 15, 2025.