Heart disease is among the leading causes of death in the modern world, and predicting cardiovascular disease from clinical data remains extremely difficult. The goal of this research is to use deep learning to improve the accuracy of early-stage cardiovascular disease prognosis. Initially, the SCalable Range-Based Adaptive Bilateral (SCRAB) filter is used to eliminate noise artifacts. The relevant features are then extracted from the pre-processed images using the ResNet-101 and EfficientNet algorithms and selected by the Mayfly Optimization Algorithm (MFO). Finally, the combined features are fed into a Deep Belief Network (DBN) for detecting cardiovascular disease. According to the experimental findings, the suggested deep features with the DBN classifier achieve an accuracy of 99.43%. Furthermore, the proposed technique outperforms the existing ones in terms of accuracy. It has the potential to be a very useful tool for doctors in their clinical work.
deep learning, DBN, multi-modality images, cardiovascular disease, feature extraction, classification
The heart is the most important organ, as it supplies blood to all other organs. If the heart stops functioning properly, the brain and other organs also fail, and the victim can die within minutes. Proper heart function is therefore crucial.
Every year, coronary disease, also known as cardiovascular disease, puts the health of countless people worldwide at risk and is one of the principal causes of death [1, 2]. Several disorders can affect the heart and blood vessels, causing serious health problems. Cardiovascular disease is one of the prime causes of illness and death globally, claiming a substantial number of lives each year. Cardiovascular disease (CVD) encompasses multiple illnesses, including hypertension, coronary artery disease, heart failure, stroke, and peripheral artery disease.
The artery-clogging accumulation of plaque is one of the main causes of cardiovascular disease. This accumulation can limit blood circulation to vital organs, including the heart and brain, leading to heart attacks and strokes. A number of lifestyle factors, such as smoking, lack of exercise, a poor diet, excessive alcohol consumption, and obesity, increase the chance of developing cardiovascular disease [3]. In addition, age, gender, genetics, and underlying medical conditions such as diabetes and high cholesterol all play important roles [4, 5]. Since cardiovascular diseases have several underlying causes, it can be challenging to identify the condition correctly and promptly [6]. Quick and precise prediction of CVDs is essential in healthcare, since early prediction and treatment of these conditions can dramatically lower the risk of sickness and death. Rapid identification and appropriate treatment are critical for improving outcomes and lowering the risk of complications associated with cardiovascular disease.
Artificial neural networks (ANNs), multi-layered algorithmic structures inspired by the networks of the human brain, form the basis of deep learning, which performs automatic feature learning. Research in cardiovascular medicine has shown that deep learning can accurately predict outcomes. Because cutting-edge deep learning approaches rely on a variety of algorithms, these algorithms are now crucial for precisely predicting the presence or absence of cardiac disorders.
This study collected images from multiple modalities, including MRI, CT, PET, and chest X-rays, from publicly available datasets. To remove noise artifacts, the acquired images are pre-processed using the SCalable Range-Based Adaptive Bilateral (SCRAB) filter. The relevant features are retrieved from the pre-processed images using transfer-learning-based neural networks, ResNet-101 and EfficientNet. The structural features extracted by these deep neural networks are acquired as feature sets. These features are selected by the MFO algorithm, and the fused sets are fed to a DBN as input for detecting CVD. The benefit of optimization algorithms is their ability to handle intricate non-linear situations with a high degree of adaptation and flexibility. When compared to current deep neural networks, the suggested approach produces the most accurate and optimal results. This technique has the potential to increase cardiovascular disease detection accuracy and advance artificial intelligence in hospitals.
By combining information from several imaging modalities, medical personnel can obtain a deeper understanding of a patient's condition. Multimodality images are very supportive in areas such as cardiology, neurology, oncology and orthopaedics for better diagnosis and effective treatment. Hence, the present study adopted multimodality images. Figure 1 displays the overall architecture of the proposed approach.
Figure 1. Complete architecture of the proposed approach
The study makes the following primary contributions:
• The SCRAB filter is employed as a pre-processing tool to sharpen and smooth the acquired images.
• Transfer-learning-based ResNet-101 and EfficientNet approaches are used to extract suitable features from the pre-processed images.
• The MFO algorithm is used for feature selection to improve efficiency.
• The fused feature sets are fed into a DBN to detect CVD.
AI can detect early signs of cardiac disease by analyzing data such as body weight, blood pressure, sugar level, cholesterol, and heart rate. While Deep Learning (DL) and Machine Learning (ML) methods are altering the current healthcare system, accurately and consistently predicting cardiac disease remains challenging. Heart disease detection has been done using a variety of categorization techniques.
For the purpose of classifying and predicting CVD patients, Khan et al. [7] used various ML approaches, including logistic regression (LR), random forest (RF), decision tree (DT), Naïve Bayes (NB), and support vector machine (SVM); of these, random forest proved the most effective. Kavitha et al. [8] combined random forest and decision tree algorithms to build a hybrid model whose operation is based on random forest probabilities. After identifying the most crucial features using the SelectKBest function and the chi-squared statistical approach, Senan et al. [9] employed feature engineering to construct new features that are highly correlated with the target, training machine learning approaches that provide encouraging outcomes. Two distinct datasets were trained using classification methods with optimised hyperparameters, including KNN, SVM, random forest, logistic regression and decision tree.
Singh and Kumar [10] used the KNN algorithm, decision tree, SVM and linear regression as classification and regression methods to predict heart disease using UCI repository data. To improve accuracy, Hashi and Zaman [11] used a grid search method to tune the hyperparameters of five classification algorithms. Following the use of feature selection techniques such as the Matthews correlation coefficient and the Fisher score, Saqlain et al. [12] classified the data using an RBF-kernel SVM. The study led by Dimopoulos et al. [13] used RF, K-Nearest Neighbour (KNN), and decision tree for diagnosing cardiovascular disease; the RF classifier showed the best accuracy compared to the others, with the HellenicSCORE technique used as a reference.
A deep CNN-based method for identifying cardiovascular disorders from raw 1D and 2D PCG data was presented by Jamil and Roy [14]. Feature selection directly from the raw PCG signal was automated and made effective using Particle Swarm Optimisation (PSO) and Genetic Algorithms (GA). A Vision Transformer (ViT), which leverages the self-attention mechanism, was used to enhance the classifier's performance. A multi-modal stacking ensemble was proposed by Yoon and Kang [15] that incorporates data from two different modalities: scalograms and grayscale ECGs.
XGBoost was employed as the meta-learner, aggregating the predictions of the base learners, while SVM, logistic regression, random forest, and ResNet-50 schemes were used as the individual base learners of the stacking ensemble.
A multi-modal approach for detecting CVDs based on PCG and ECG characteristics was proposed by Li et al. [16]; it uses conventional neural networks to extract deep-coding characteristics from the PCG and ECG. A genetic algorithm selects the best feature subset from the combined collection of features, and a support vector machine then performs the classification. Ammar et al. [17] developed a group of classifiers to categorise cardiac disorders, as well as a deep-learning-based network to segment the three major cardiac structures in short-axis cine MRI images. The DL segmentation network is an FCNN-based U-Net with a smaller number of trainable parameters.
Figure 2 illustrates the complete workflow of the proposed work. The suggested work has four stages: (1) pre-processing, (2) feature extraction, (3) feature selection and (4) classification.
The multi-modal images used in this study, including CT scans, MRIs, chest X-rays and PET scans, were collected from publicly accessible databases. The SCRAB filter is employed as a pre-processing tool to sharpen and smooth the acquired images. The relevant features are derived from the pre-processed images using transfer-learning-based neural networks, ResNet-101 and EfficientNet. These features are selected by the MFO algorithm, and the fused sets are fed to a DBN as input for detecting CVD. The benefit of optimisation algorithms is their ability to handle intricate non-linear situations with a high degree of adaptation and flexibility. The proposed method outperforms existing deep neural networks in terms of accuracy and efficiency. This method may be implemented in hospitals to facilitate the advancement of AI in the health domain and enhance the accurate identification of cardiovascular ailments. The schematic architecture of the suggested method is shown in Figure 2.
Figure 2. Schematic representation of the proposed model
3.1 Pre-processing
Preparing the data for feature extraction requires pre-processing to guarantee that the input data are consistent, clean, and indicative of the underlying patterns, all of which contribute to the creation of features that are more reliable and accurate. The present study employed an Adaptive Bilateral Filter (ABF) for sharpening and smoothing the input images.
The suggested ABF's shift-variant filtering process and its impulse response are depicted in Eqs. (1) and (2), respectively.
$\hat{f}[m, n]=\sum_k \sum_l h[m, n ; k, l] g[k, l]$ (1)
where $\hat{f}[m, n]$ represents the restored image, $g[m, n]$ represents the degraded image, and $h[m, n ; k, l]$ represents the response at $[m, n]$ to an impulse at $[k, l]$.
$h\left[m, n ; m_0, n_0\right]=I\left(\Omega_{m_0, n_0}\right) r_{m_0, n_0}^{-1} \exp \left(-\frac{\left(m-m_0\right)^2+\left(n-n_0\right)^2}{2 \sigma_d^2}\right) \exp \left(-\frac{1}{2}\left(\frac{g[m, n]-g\left[m_0, n_0\right]-\zeta\left[m_0, n_0\right]}{\sigma_r\left[m_0, n_0\right]}\right)^2\right)$ (2)
where the centre pixel of the window is $\left[m_0, n_0\right]$ and $\zeta\left[m_0, n_0\right]$ is the offset introduced to the range filter. The indicator function is represented by $I(\cdot)$, and $r_{m_0, n_0}$ normalises the volume beneath the filter to unity. The window $\Omega_{m_0, n_0}$ is defined in Eq. (3).
$\Omega_{m_0, n_0}=\left\{[m, n]:[m, n] \in\left[m_0-N, m_0+N\right] \times\left[n_0-N, n_0+N\right]\right\}$ (3)
Two significant changes are present in the ABF. First, an offset $\zeta$ is added to the range filter. Second, both $\zeta$ and the range filter width $\sigma_r$ are locally adaptive in the ABF. The ABF degenerates into a traditional bilateral filter if $\zeta=0$ and $\sigma_r$ is fixed. In the ABF, a fixed low-pass Gaussian filter with $\sigma_d=1.0$ is used as the domain filter.
The bilateral filter becomes a considerably more potent filter that can both smooth and sharpen when a locally adaptive $\zeta$ and $\sigma_r$ are combined; in particular, it sharpens an image by making the edges more pronounced. The range filter is essentially a one-dimensional filter that operates on the image's histogram, and its width is set by the parameter $\sigma_r$, which determines how selective the range filter is in admitting pixels with comparable grey values into the averaging process. If $\sigma_r$ is large relative to the data range within the window, the range filter gives each pixel in the window a comparable weight.
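For illustration, a minimal NumPy sketch of the shift-variant filtering in Eqs. (1)-(2) is given below. It is not the authors' MATLAB/SCRAB implementation: the per-pixel adaptation rules for $\zeta$ and $\sigma_r$ are not reproduced here, so both are passed as scalars, and the function name and default parameter values are assumptions made only for this sketch.

```python
import numpy as np

def adaptive_bilateral_filter(g, N=2, sigma_d=1.0, sigma_r=20.0, zeta=0.0):
    """Shift-variant filtering of Eqs. (1)-(2) on a 2-D grayscale image g.

    N       : half window size, so the window is (2N+1) x (2N+1)
    sigma_d : width of the fixed Gaussian domain filter
    sigma_r : width of the range filter (scalar here; locally adaptive in the ABF)
    zeta    : offset added to the range filter (scalar here; locally adaptive in the ABF)
    """
    H, W = g.shape
    pad = np.pad(g.astype(np.float64), N, mode='reflect')
    out = np.zeros((H, W), dtype=np.float64)

    # Domain (spatial) kernel: exp(-((m - m0)^2 + (n - n0)^2) / (2 * sigma_d^2))
    yy, xx = np.mgrid[-N:N + 1, -N:N + 1]
    domain = np.exp(-(yy ** 2 + xx ** 2) / (2.0 * sigma_d ** 2))

    for m0 in range(H):
        for n0 in range(W):
            window = pad[m0:m0 + 2 * N + 1, n0:n0 + 2 * N + 1]
            centre = g[m0, n0]
            # Range kernel with the ABF offset zeta: exp(-0.5*((g - g0 - zeta)/sigma_r)^2)
            range_kernel = np.exp(-0.5 * ((window - centre - zeta) / sigma_r) ** 2)
            weights = domain * range_kernel
            out[m0, n0] = np.sum(weights * window) / np.sum(weights)  # r^{-1} normalisation
    return out
```

Making zeta and sigma_r functions of the local window statistics, rather than scalars, recovers the adaptive smoothing-and-sharpening behaviour described above.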
3.2 Feature extraction
Feature extraction is used to extract crucial details about the CVD images from the pre-processed images, which facilitates and improves the accuracy of classification. In this study, deep-learning-based ResNet-101 and EfficientNet approaches have been utilized to obtain efficient and robust features. Finally, the fused features are used to classify the CVD.
3.2.1 ResNet 101
A particular kind of residual neural network called ResNet was first presented by He et al. [18]. The ResNet network employs residual connections that allow gradients to pass through directly, preventing them from vanishing after the application of the chain rule. ResNet-101 is composed of 104 convolutional layers overall, organised into 33 blocks; 29 of these blocks use the output of the preceding block directly, through the residual connections mentioned above. At the end of each block, these residuals serve as the first operand of the summing operator that generates the input for the following block.
Figure 3. Basic architecture of ResNet 101
The remaining four blocks take the output of the previous block through a convolution layer with a 1×1 filter and a stride of 1, followed by a batch normalisation layer; the output of that layer is then delivered to the summing operator at the block's output. Figure 3 shows the architecture of the ResNet-101 network.
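As an illustration of how such a backbone can serve as a feature extractor, the following PyTorch/torchvision sketch loads an ImageNet-pretrained ResNet-101 and returns its 2048-dimensional pooled feature vector. The paper's experiments were run in MATLAB, so this is only an analogous sketch; the preprocessing choices (224×224 RGB inputs, ImageNet normalisation) are assumptions, not the authors' settings.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Load an ImageNet-pretrained ResNet-101 and drop its classification head,
# keeping the 2048-D global-average-pooled activations as the feature vector.
resnet101 = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
resnet101.fc = torch.nn.Identity()
resnet101.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def resnet_features(pil_image):
    x = preprocess(pil_image).unsqueeze(0)  # shape (1, 3, 224, 224)
    return resnet101(x).squeeze(0)          # shape (2048,)
```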
3.2.2 EfficientNet
Tan and Le [19] presented the convolutional neural network architecture called EfficientNet. By scaling the network's depth, width, and resolution in a principled manner, EfficientNet aims to offer state-of-the-art performance at a low computational cost. Its design is comparable to that of other convolutional neural networks (CNNs), consisting of a stack of convolutional layers followed by pooling layers, fully connected layers, and nonlinear activations. Compound scaling, which uniformly adjusts the network's depth, width, and resolution by a set of predefined scaling coefficients, is one of the main advances of EfficientNet.
Depthwise separable convolutions: EfficientNet divides the basic convolution operation into two distinct operations, a depthwise convolution and a pointwise convolution.
EfficientNet introduces a compound scaling coefficient (phi) together with per-dimension factors that regulate the network's depth, width, and resolution. These coefficients are found empirically on a validation set using a grid search, ensuring that the scaled models perform as well as possible under the given computational constraints. With its compound scaling technique, EfficientNet generally strikes a reasonable compromise between model efficiency and performance.
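A companion sketch, reusing the preprocess transform and resnet101 extractor from the ResNet-101 example above, shows how EfficientNet features can be extracted and concatenated with the ResNet-101 features to form a fused feature vector. The choice of the B0 variant is an assumption made here for illustration; the paper does not state which EfficientNet variant was used.

```python
import torch
import torchvision.models as models

# EfficientNet-B0 backbone used as a feature extractor; replacing the classifier
# head with Identity makes the network return the 1280-D pooled feature vector.
effnet = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
effnet.classifier = torch.nn.Identity()
effnet.eval()

@torch.no_grad()
def fused_features(pil_image):
    x = preprocess(pil_image).unsqueeze(0)   # same preprocessing as the ResNet sketch
    f_res = resnet101(x).squeeze(0)          # 2048-D ResNet-101 features
    f_eff = effnet(x).squeeze(0)             # 1280-D EfficientNet features
    return torch.cat([f_res, f_eff])         # 3328-D combined feature vector
```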
3.3 Feature selection
Feature selection is a crucial step in machine learning that involves selecting a subset of relevant attributes. This study utilized the MFO algorithm for feature selection. MFO can improve image processing tasks such as object detection, enhancement, and segmentation, and it can help raise the precision and effectiveness of image processing methods. A simplified sketch of how such a selection step can be applied to the fused deep features is given below.
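The following is a highly simplified, wrapper-style sketch inspired by the mayfly optimization algorithm, included only to make the selection step concrete. It assumes the fused deep features are available as a matrix X with labels y, uses cross-validated logistic regression as a cheap surrogate fitness (an assumption not stated in the paper), and omits the female swarm, mating, and nuptial-dance components of the full algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, penalty=0.01):
    """Wrapper fitness: CV accuracy of a light classifier on the selected columns,
    minus a small penalty on the fraction of features kept."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(LogisticRegression(max_iter=1000), X[:, mask == 1], y, cv=3).mean()
    return acc - penalty * mask.mean()

def mayfly_feature_selection(X, y, n_agents=20, n_iter=30, a1=1.0, a2=1.5, beta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.uniform(-1, 1, (n_agents, dim))   # continuous mayfly positions
    vel = np.zeros_like(pos)
    to_mask = lambda p: (1.0 / (1.0 + np.exp(-p)) > 0.5).astype(int)  # sigmoid -> binary mask

    pbest = pos.copy()
    pbest_fit = np.array([fitness(to_mask(p), X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(n_iter):
        for i in range(n_agents):
            # Attraction towards personal and global bests (simplified male update rule).
            rp = np.sum((pos[i] - pbest[i]) ** 2)
            rg = np.sum((pos[i] - gbest) ** 2)
            vel[i] = (vel[i]
                      + a1 * np.exp(-beta * rp) * (pbest[i] - pos[i])
                      + a2 * np.exp(-beta * rg) * (gbest - pos[i]))
            pos[i] = np.clip(pos[i] + vel[i], -6, 6)
            fit = fitness(to_mask(pos[i]), X, y)
            if fit > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i].copy(), fit
        gbest = pbest[pbest_fit.argmax()].copy()

    return to_mask(gbest)   # binary mask over the fused feature columns
```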
3.4 Classification
The present study utilized a DBN to classify the normal and CVD images from the features selected by the Mayfly Optimisation Algorithm.
3.4.1 DBN
The DBN is based on the Restricted Boltzmann Machine (RBM) and employs an ANN with an RBM topology consisting of visible and hidden layers. The RBM calculates unit values in a probabilistic manner using an energy function, following the Hopfield network. The RBM, which has no connections within a layer, is shown in Figure 4. Figure 5 illustrates the DBN, which connects RBM structures in a sequential fashion.
Within this structure, each hidden unit layer serves as the visible unit layer of the next RBM. DBN learning is accomplished by first treating the visible layer and hidden layer 1 as a distinct RBM. Once this learning is finished, the values of hidden layer 1 are given as fresh input to the RBM formed by hidden layers 1 and 2 for training. Learning thus proceeds sequentially up to the final layer. The DBN then uses the back-propagation algorithm, a supervised learning classification technique, which is set up at the top layer of the DBN. In this work, a back-propagation DBN classification prediction model was developed.
In Figure 4, the visible layer is represented by $v\left(v_1, v_2, \ldots, v_n\right)$, the hidden layer is represented by $h\left(h_1, h_2, \ldots, h_m\right)$, and the weight matrix connecting the hidden layer to the visible layer is represented by W. The feature sets are input at the visible layer, and the weight matrix w and the state of every neuron are randomly initialised to produce the hidden layer data. Because there are no connections between neurons of the same layer, the neuron states have the following property: the hidden units are activated independently of one another once the visible unit states are determined, and the visible units are activated independently of one another once the hidden unit states are determined.
Figure 4. Restricted Boltzmann machine
In the case of a given set of states (v, h), the energy function of the RBM model may be represented as follows:
$\begin{gathered}E(v, h)=-\sum_{i=1}^n a_i v_i-\sum_{j=1}^m b_j h_j-\sum_{i=1}^n \sum_{j=1}^m v_i w_{i j} h_j=-a^T v-b^T h-h^T w v\end{gathered}$ (4)
where the visible units' bias vector is symbolized by $a=\left(a_1, a_2, \ldots, a_n\right)$, the hidden units' bias vector is represented by $b=\left(b_1, b_2, \ldots, b_m\right)$, the visible layer's state vector is represented by $v=\left(v_1, v_2, \ldots, v_n\right)$, the hidden layer's state vector is symbolized by $h=\left(h_1, h_2, \ldots, h_m\right)$, and the connection weight matrix is denoted by $w=\left(w_{i j}\right)$, where $w_{i j}$ represents the weight between the $i$-th visible unit and the $j$-th hidden unit. The construction of the DBN is illustrated in Figure 5.
Figure 5. DBN
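To make the energy function and the layer-wise RBM training concrete, the following NumPy sketch evaluates Eq. (4) and performs one contrastive-divergence (CD-1) update for a binary RBM. CD-1 is the standard way an RBM inside a DBN is pretrained, but the learning rate, binary units, and function names here are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbm_energy(v, h, a, b, W):
    """Energy of a joint state (v, h) as in Eq. (4): E = -a^T v - b^T h - h^T W v."""
    return -(a @ v) - (b @ h) - h @ W @ v

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, a, b, W, lr=0.01):
    """One CD-1 step for a binary RBM with W of shape (hidden, visible).
    Uses the conditional independence of hidden units given v (and of visible
    units given h) described in the text."""
    ph0 = sigmoid(b + W @ v0)                        # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float) # sampled hidden states
    pv1 = sigmoid(a + W.T @ h0)                      # reconstruction P(v = 1 | h0)
    ph1 = sigmoid(b + W @ pv1)                       # P(h = 1 | reconstruction)
    W += lr * (np.outer(ph0, v0) - np.outer(ph1, pv1))
    a += lr * (v0 - pv1)
    b += lr * (ph0 - ph1)
    return W, a, b
```

Stacking such RBMs, feeding each trained layer's hidden activations to the next, and finishing with supervised back-propagation at the top layer gives the DBN training scheme described above.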
The experiments for the proposed work were executed using the MATLAB deep learning toolbox. The multi-modal images used in this study, including MRIs, chest X-rays, PET scans and CT scans, were obtained from Kaggle datasets. Additionally, a comparison of the suggested network with traditional DL models is presented.
4.1 Performance measures
Several performance measures can be employed to evaluate the proposed model's effectiveness. The following subsections cover these measures: accuracy, precision, recall, and F1-score.
Accuracy: Accuracy is the proportion of correct predictions among all predictions, as given in Eq. (5); it measures how frequently the deep learning algorithm classifies a data point correctly.
$Accuracy=\frac{T P+T N}{T P+F P+T N+F N}$ (5)
Precision: Precision, sometimes referred to as the positive predictive value, is the ratio of true positive predictions to all positive predictions. Eq. (6) may be utilised for the computation of precision.
$Precision=\frac{T P}{T P+F P}$ (6)
Recall: Recall, also known as sensitivity, is the proportion of accurately predicted class members to the total number of members in that class. It serves as a gauge for the classifier's completeness. Eq. (7) may be utilised for the computation of recall.
$Recall=\frac{T P}{T P+F N}$ (7)
F1-Score: The F1-score is the harmonic mean of precision and recall. Eq. (8) may be utilised to quantify it.
$F1\text{-}Score=2 \times \frac{Precision \times Recall}{Precision + Recall}$ (8)
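For completeness, a short sketch shows Eqs. (5)-(8) computed from confusion-matrix counts; the counts in the usage line are hypothetical and are not taken from the paper's experiments.

```python
def classification_metrics(tp, tn, fp, fn):
    """Eqs. (5)-(8) computed from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1_score

# Hypothetical counts, purely for illustration:
print(classification_metrics(tp=90, tn=95, fp=5, fn=10))
```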
4.2 Discussions
This section presents the experimental results of the proposed method for classifying cardiovascular disease. The recommended solution was implemented using MATLAB. The classification results were assessed using the accuracy, precision, recall, and F1-score measures. The classification results obtained with the proposed method are displayed in Table 1. The proposed technique attains 99.43% accuracy, 94.62% precision, 90.27% recall and 92.39% F1-score on multimodality images.
Table 1. Classification results of the proposed approach
| Performance Measures | Combined Features with DBN (%) |
| Accuracy | 99.43 |
| Precision | 94.62 |
| Recall | 90.27 |
| F1-score | 92.39 |
From the results, it is observed that classification with pre-processed images achieves a much better result than directly using the original images. Moreover, the deep-learning-based feature extraction methods give effective and robust features, and the feature selection method generates more discriminative features for better recognition. This work incorporates pre-processing, feature extraction, feature selection, and classification procedures, and the recognition process is completely automated without human interaction. The proposed model's findings are illustrated in Figure 6.
Figure 6. Performance of the proposed model
The proposed model was analyzed against existing machine learning and deep learning methodologies. Deep features with DBN provide the highest accuracy compared to other approaches (refer to Table 2).
Table 2. Analysis of classification accuracy of the proposed approach with the previous works
| Ref. | Method | Accuracy (%) |
| [20] | HRFLM | 88.7 |
| [21] | Deep learning | 82 |
| [22] | Neuro-Fuzzy Model | 91 |
| [23] | RFE-GB | 89.78 |
| [8] | DT+RF | 88 |
| [24] | XGBH | 80.6 |
| [25] | Self-Attention based Transformer Model | 95.2 |
| Proposed model | DBN | 99.43 |
Figure 7 compares the accuracy of the suggested DBN model with the existing HRFLM, Deep Learning, NFM, RFE-GB, DT+RF and XGBH methods. The suggested DBN method performs better than these existing approaches: the proposed deep features (ResNet-101, EfficientNet) with the DBN model achieve 99.43% accuracy, while HRFLM achieves 88.7%, the Deep Learning model 82%, the NFM model 91%, the RFE-GB model 89.78%, DT+RF 88% and XGBH 80.6%. The proposed DBN thus improves accuracy by 10.73%, 17.43%, 8.43%, 9.65%, 11.43% and 18.83% over the HRFLM, Deep Learning, NFM, RFE-GB, DT+RF and XGBH models, respectively.
Figure 7. Analysis of the proposed DBN model and the existing models
Figure 8 illustrates the pre-processing, feature extraction and classification results of the proposed DBN method [26].
Figure 8. The results of the proposed work
The multimodality input images are shown in the first column. The pre-processed images obtained with the ABF are displayed in the second column. The outcome of the feature extraction process utilising the combined ResNet-101 and EfficientNet features is displayed in the third column, and the DBN classification result is shown in the final column. The proposed approach achieves 99.43% classification accuracy. Among the five test images, the proposed work identifies three images as CVD and two images as normal.
Heart disease is one of the most serious diseases afflicting people worldwide. Lack of physical activity and changing lifestyles can raise the risk of such illnesses, and early identification of heart disease can prevent many catastrophes. Using an effective algorithm to predict probable heart disease could help doctors detect cardiac illness early. Various diagnostic methods are available in medicine, and deep learning is considered the most accurate solution. This research focused on the early detection of cardiovascular disease using multi-modality images. Initially, the SCRAB filter is employed for removing noise artifacts. The relevant features are then derived from the pre-processed images using transfer-learning-based neural networks, the ResNet-101 and EfficientNet algorithms. These features are selected by the MFO algorithm, and the fused feature sets are fed to a DBN as input for detecting CVD. The combination of the proposed pre-processing, feature extraction, feature selection and classification steps attains 99.43% accuracy, and it can assist physicians in predicting cardiovascular disease at an early stage.
[1] Ammari, A., Mahmoudi, R., Hmida, B., Saouli, R., Bedoui, M.H. (2021). A review of approaches investigated for right ventricular segmentation using short-axis cardiac MRI. IET Image Processing, 15(9): 1845-1868. https://doi.org/10.1049/ipr2.12165
[2] Chen, C., Qin, C., Qiu, H., Tarroni, G., Duan, J., Bai, W., Rueckert, D. (2020). Deep learning for cardiac image segmentation: A review. Frontiers in Cardiovascular Medicine, 7: 25. https://doi.org/10.3389/fcvm.2020.00025
[3] Ng, R., Sutradhar, R., Yao, Z., Wodchis, W.P., Rosella, L.C. (2020). Smoking, drinking, diet and physical activity—modifiable lifestyle risk factors and their associations with age to first chronic disease. International Journal of Epidemiology, 49(1): 113-130. https://doi.org/10.1093/ije/dyz078
[4] Mythili, T., Mukherji, D., Padalia, N., Naidu, A. (2013). A heart disease prediction model using SVM-decision trees-logistic regression (SDL). International Journal of Computer Applications, 68(16): 11-15. https://doi.org/10.5120/11662-7250
[5] Frieden, T.R., Jaffe, M.G. (2018). Saving 100 million lives by improving global treatment of hypertension and reducing cardiovascular disease risk factors. The Journal of Clinical Hypertension, 20(2): 208-211. https://doi.org/10.1111/jch.13195
[6] Flores, N., Reyna, M.A., Avitia, R.L., Cardenas-Haro, J.A., Garcia-Gonzalez, C. (2022). Non-invasive systems and methods patents review based on electrocardiogram for diagnosis of cardiovascular diseases. Algorithms, 15(3): 82. https://doi.org/10.3390/a15030082
[7] Khan, A., Qureshi, M., Daniyal, M., Tawiah, K. (2023). A novel study on machine learning algorithm-based cardiovascular disease prediction. Health & Social Care in the Community, 2023(1): 1406060. https://doi.org/10.1155/2023/1406060
[8] Kavitha, M., Gnaneswar, G., Dinesh, R., Sai, Y.R., Suraj, R.S. (2021). Heart disease prediction using hybrid machine learning model. In 2021 6th International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, pp. 1329-1333. https://doi.org/10.1109/ICICT50816.2021.9358597
[9] Senan, E.M., Abunadi, I., Jadhav, M.E., Fati, S.M. (2021). Score and correlation coefficient-based feature selection for predicting heart failure diagnosis by using machine learning algorithms. Computational and Mathematical Methods in Medicine, 2021(1): 8500314. https://doi.org/10.1155/2021/8500314
[10] Singh, A., Kumar, R. (2020). Heart disease prediction using machine learning algorithms. In 2020 International Conference on Electrical and Electronics Engineering (ICE3), Gorakhpur, India, pp. 452-457. https://doi.org/10.1109/ICE348803.2020.9122958
[11] Hashi, E.K., Zaman, M.S.U. (2020). Developing a hyperparameter tuning based machine learning approach of heart disease prediction. Journal of Applied Science & Process Engineering, 7(2): 631-647. https://doi.org/10.33736/jaspe.2639.2020
[12] Saqlain, S.M., Sher, M., Shah, F.A., Khan, I., Ashraf, M.U., Awais, M., Ghani, A. (2019). Fisher score and Matthews correlation coefficient-based feature subset selection for heart disease diagnosis using support vector machines. Knowledge and Information Systems, 58: 139-167. https://doi.org/10.1007/s10115-018-1185-y
[13] Dimopoulos, A.C., Nikolaidou, M., Caballero, F.F., Engchuan, W., Sanchez-Niubo, A., Arndt, H., Ayuso-Mateos, J.L., Haro, J.M., Chatterji, S., Georgousopoulou, E.N., Pitsavos, C., Panagiotakos, D.B. (2018). Machine learning methodologies versus cardiovascular risk scores, in predicting disease risk. BMC Medical Research Methodology, 18: 179. https://doi.org/10.1186/s12874-018-0644-1
[14] Jamil, S., Roy, A.M. (2023). An efficient and robust phonocardiography (PCG)-based valvular heart diseases (VHD) detection framework using vision transformer (vit). Computers in Biology and Medicine, 158: 106734. https://doi.org/10.1016/j.compbiomed.2023.106734
[15] Yoon, T., Kang, D. (2023). Multi-modal stacking ensemble for the diagnosis of cardiovascular diseases. Journal of Personalized Medicine, 13(2): 373. https://doi.org/10.3390/jpm13020373
[16] Li, P., Hu, Y., Liu, Z.P. (2021). Prediction of cardiovascular diseases by integrating multi-modal features with machine learning methods. Biomedical Signal Processing and Control, 66: 102474. https://doi.org/10.1016/j.bspc.2021.102474
[17] Ammar, A., Bouattane, O., Youssfi, M. (2021). Automatic cardiac cine MRI segmentation and heart disease classification. Computerized Medical Imaging and Graphics, 88: 101864. https://doi.org/10.1016/j.compmedimag.2021.101864
[18] He, K., Zhang, X., Ren, S., Sun, J. (2015). Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385. https://doi.org/10.48550/arXiv.1512.03385
[19] Tan, M., Le, Q.V. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946. https://doi.org/10.48550/arXiv.1905.11946
[20] Mohan, S., Thirumalai, C., Srivastava, G. (2019). Effective heart disease prediction using hybrid machine learning techniques. IEEE Access, 7: 81542-81554. https://doi.org/10.1109/ACCESS.2019.2923707
[21] Matsumoto, T., Kodera, S., Shinohara, H., Ieki, H., Yamaguchi, T., Higashikuni, Y., Kiyosue, A., Ito, K., Ando, J., Takimoto, E., Akazawa, H., Morita, H., Komuro, I. (2020). Diagnosing heart failure from chest X-ray images using deep learning. International Heart Journal, 61(4): 781-786. https://doi.org/10.1536/ihj.19-714
[22] Casalino, G., Castellano, G., Kaymak, U., Zaza, G. (2021). Balancing accuracy and interpretability through neuro-fuzzy models for cardiovascular risk assessment. In 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Orlando, USA, pp. 1-8. https://doi.org/10.1109/SSCI50451.2021.9660104
[23] Theerthagiri, P., Vidya, J. (2022). Cardiovascular disease prediction using recursive feature elimination and gradient boosting classification techniques. Expert Systems, 39(9): e13064. https://doi.org/10.1111/exsy.13064
[24] Peng, M., Hou, F., Cheng, Z., Shen, T., Liu, K., Zhao, C., Zheng, W. (2023). Prediction of cardiovascular disease risk based on major contributing features. Scientific Reports, 13(1): 4778. https://doi.org/10.1038/s41598-023-31870-8
[25] Rahman, A.U., Alsenani, Y., Zafar, A., Ullah, K., Rabie, K., Shongwe, T. (2024). Enhancing heart disease prediction using a self-attention-based transformer model. Scientific Reports, 14(1): 514. https://doi.org/10.1038/s41598-024-51184-7
[26] Thankappan, B.C., Krishnammal, T.K. (2025). Advanced cardiovascular disease classification using multi-modal imaging and deep learning. IAES International Journal of Robotics and Automation (IJRA), 14(1): 58-66. https://doi.org/10.11591/ijra.v14i1.pp58-66