In recent years, breast cancer has become one of the deadliest diseases among women. When it goes undetected, the cancer spreads to nearby tissues and can lead to death, so early detection is the only reliable remedy. Mammography imaging is widely used to predict whether a tumor is benign or malignant, and efficient training and learning on mammogram data supports accurate early prediction. Accordingly, the proposed approach is divided into three phases: feature extraction, feature selection, and classification. i) Feature extraction, a crucial step for training, is performed with a novel cascade feature pyramid U-Net (FPUNet) technique, in which several FPUNet models are cascaded to provide effective feature extraction; FPUNet itself is a hybrid of two well-known techniques, the feature pyramid network and U-Net. ii) A metaheuristic Lemur Optimization (LO) algorithm then performs feature selection by removing redundant features and optimally selecting features for classification. iii) Finally, a hybrid AdaBoost-Back Propagation Neural Network (BPNN) technique classifies the breast cancer; this method provides accurate classification with superior performance in terms of precision (97.15%), recall (98.77%), accuracy (97.08%), F1 value (94.14%), RMSE (1.7386), and MAE (2.119). The proposed results are validated and contrasted with well-established existing methods, and the proposed approach produces more effective results than the current approaches.
Keywords: breast cancer prediction, feature extraction, cascade feature pyramid U-Net, AdaBoost-BPNN, Lemur Optimization, performance evaluation
1. Introduction

In recent times, breast cancer has emerged as one of the deadliest diseases for women; it arises from the formation of abnormal tissue in the breast [1]. This abnormal growth originates from cancer cells in the breast milk glands. According to World Health Organization (WHO) estimates, breast cancer is the second most frequent disease among women worldwide, affecting an estimated 2.1 million women. Unfortunately, 15% of breast cancer fatalities occur in women who receive therapy only after the illness has progressed [2].
Radiologists use modern technologies, such as computer-aided detection (CAD) systems, to diagnose breast cancer, and cancer in the breast region is predicted from mammography images [3]. Mammography is a screening method that typically handles the task of cancer prediction with ease, and a mammography pipeline involves several tasks, including pre-processing, feature extraction, and classification [4]. These tasks are used to delineate abnormal and normal regions of the breast in order to identify the tissue affected by cancer. However, with conventional malignant and benign tumor feature extraction and classification over a mass dataset, mammography imaging alone is not very accurate.
Many Deep Learning (DL) and Machine Learning (ML) methods are used to deliver efficient mammography analysis, and numerous well-known models have been implemented with both [5]. Digital medical imaging is handled by a number of techniques, including Support Vector Machines (SVM), Decision Trees, AdaBoost, Artificial Neural Networks (ANN), K-Nearest Neighbors (KNN), Naive Bayes, Convolutional Neural Networks (CNN), AlexNet, ResNet, UNet, InceptionNet, MobileNet, and Feature Pyramid Networks (FPN) [6, 7].
Many existing methods rely on handcrafted features or traditional deep learning architectures, but these models fail to capture the complex multi-scale features present in mammography images. This results in reduced sensitivity when classifying abnormalities such as microcalcifications and irregular masses. Additionally, the feature selection processes used in conventional CAD systems are computationally expensive.
To address these challenges, this work introduces a novel cascade framework comprising the Feature Pyramid U-Net (FPUNet), the Lemur Optimization (LO) algorithm, and the AdaBoost-Backpropagation Neural Network (AdaBoost-BPNN). The novel cascade FPUNet technique is presented for feature extraction, the metaheuristic LO algorithm performs optimal feature selection, and the hybrid AdaBoost-BPNN conducts the classification task. All tasks yield useful results, and together the proposed techniques beat the current methods on the considered performance measures.
The remainder of this work is organized as follows: Section 2 discusses the literature on breast cancer prediction, and Section 3 presents the preliminaries. Section 4 covers the materials and methods of the proposed work, and Section 5 covers the experimental findings and performance comparison. Section 6 concludes the paper and is followed by the references.
2. Literature survey

A variety of modern methodologies are used to perform breast cancer prediction with greater performance, and several DL approaches have been proposed to obtain effective breast cancer evaluation.
A 3D CNN based on contrast-enhanced ultrasound (CEUS) videos is offered for the diagnosis of breast cancer, using the Breast-CEUS dataset of 221 cases; the 3D CNN demonstrated improved performance with an accuracy of 86.3% [8]. In reference [9], a CMOS-integrated Lab-on-Chip (LoC) system coupled with variant-specific isothermal amplification chemistries demonstrated the tracking of ESR1 gene mutations in cancer tissue DNA, along with patient classification and metastatic monitoring. In reference [10], static breast thermograms enable the detection of breast cancer in the BIRADS V category, and the performance of this thermogram-based diagnosis was compared against benign thermograms in the BIRADS II category.
To predict the progression of breast cancer, a survival analysis of XGBoost for tied data based on the gradient boosting technique (EXSA) is presented [11]; the experiments confirmed that the XGBoost model was optimized and improved in comparison with conventional approaches. An ensemble model was employed in reference [12] to extract various features from B-mode and strain elastography (SE) images; using semantic features, the AlexNet and ResNet models achieved 90% accuracy in the DL classification of benign versus malignant tumors.
A convex optimization-based field-focusing technique is used to identify cancer: the field level is raised within a breast-mimicking layer, and the millimeter-wave frequency fields are focused using electromagnetic power even within a lossy medium [13]; by processing mm-wave data, this method imaged breast cancer more effectively than prior models. Various ensemble classifiers and ANNs have also been employed for the diagnosis and prognosis of breast cancer, with prognosis-based balanced class weights analyzed and contrasted; this ensemble technique achieved an accuracy of 98.83%, exceeding conventional methods [14].
A deep autoencoder for one-class, semi-supervised anomaly detection is implemented in reference [15], trained on roughly 50,000 images drawn from 11,000 negative cases. Based on the detection of calcification lesions, the experimental results show that, of 1,883 testing images, 1,238 were negative and 645 were malignant. A mammography-based Conditional Generative Adversarial Network (CGAN) is introduced in reference [16], where a CNN model is trained on the Radon Cumulative Distribution Transform (RCDT) of the images to detect breast cancer; compared with previous approaches, the method performed better and was more accurate.
3. Preliminaries

This section covers the preliminaries of the proposed procedure. The feature extraction stage makes use of a cascaded arrangement of FPUNet models, referred to as the cascade FPUNet approach. An overview of the FPUNet technique [17] is given in the following subsection.
3.1 Feature pyramid U-Net technique
The feature pyramid U-Net (FPUNet) model is a novel CNN-based DL technique that can be regarded as a multiscale neural network. The image processing system uses the deep CNN technique to perform feature extraction on a pixel-by-pixel basis. The CNN is combined with two methods to improve its quality: U-Net and the feature pyramid network (FPN). The U-Net method is a DL architecture whose layer arrangement resembles a U shape, while the FPN approach sits inside a mainstream backbone architecture. The FPUNet architecture, which employs a multi-layered feature fusion method, is shown in Figure 1. The low-level features carry high-resolution spatial detail, while the high-level features encode semantic information [18].
Figure 1. FPUNet architecture
The FPUNet technique is based on the encoder-decoder structure of the U-Net procedure. The encoder produces feature maps at varying spatial resolutions using pooling layers and stacked convolutions, while the decoder reconstructs the feature maps using upsampling layers and stacked convolutions. The feature maps of the encoder are copied directly to the corresponding decoder stages via skip connections.
The FPUNet model builds two feature pyramid paths to generate a multilevel representation. The implementation of the feature pyramid paths rests on the fundamental distinction between an encoder and a decoder: in the CNN paradigm, feature extraction proceeds as a layer-by-layer abstraction. The encoder extracts low-level characteristics such as texture and object edges, and this kind of raw feature offers accurate object location information. The decoder, on the other hand, retrieves the high-level semantic information associated with object classes. The method therefore implements one feature pyramid path in the encoder and one in the decoder to extract multilevel representations, and the outputs of the two paths are combined to form the final extraction result.
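To make the multi-layered fusion concrete, the following is a minimal PyTorch sketch of an FPUNet-style network with a three-level encoder-decoder; the channel widths, 1×1 lateral projections, and fusion head are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal FPUNet-style sketch: U-Net encoder-decoder whose multi-scale maps
# are fused FPN-style. Sizes are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class FPUNet(nn.Module):
    def __init__(self, in_ch=1, base=32, out_ch=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.dec2 = conv_block(base * 4 + base * 2, base * 2)
        self.dec1 = conv_block(base * 2 + base, base)
        # FPN-style 1x1 lateral projections onto a common channel width
        self.lat = nn.ModuleList([nn.Conv2d(c, base, 1) for c in (base, base * 2, base * 4)])
        self.head = nn.Conv2d(base * 3, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                         # low level: edges/texture, precise location
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))             # high level: semantic context
        d2 = self.dec2(torch.cat([F.interpolate(e3, scale_factor=2), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), e1], dim=1))  # skip connection
        size = x.shape[-2:]
        # Fuse the two pyramid paths: project, upsample to input size, concatenate
        pyramid = [F.interpolate(lat(f), size=size, mode="bilinear", align_corners=False)
                   for lat, f in zip(self.lat, (d1, d2, e3))]
        return self.head(torch.cat(pyramid, dim=1))
```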
4. Materials and methods

Figure 2 shows the block diagram of the proposed technique: input dataset images, pre-processing, feature extraction, feature selection, and classification, in that order. The proposed method satisfies the previously described goals for breast cancer identification.
Figure 2. Proposed block diagram
The Cascade FPUNet consists of two sequentially connected FPUNet models where the output of the first FPUNet acts as an enhanced input for the second FPUNet. This cascading mechanism allows the architecture to progressively refine feature representations and predictions.
The input image is processed by the first FPUNet, which performs coarse-level feature extraction and segmentation; its outputs are generated at an intermediate resolution and capture the essential spatial and semantic features. The second stage refines the outputs of the first FPUNet through more detailed feature extraction and enhancement, focusing on correcting errors and inconsistencies from the first stage and improving the overall prediction accuracy. The input image is resized to 512×640 pixels to ensure uniformity and compatibility with the architecture, and padding is applied to maintain spatial dimensions through the convolutions and reduce boundary effects. In the first FPUNet, the convolutional encoder extracts hierarchical features at multiple levels and the bottleneck layer captures global context while reducing dimensionality; likewise, in the second FPUNet, the encoder-decoder structure refines the features using both global and local spatial information.
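A minimal sketch of the cascading mechanism, reusing the FPUNet sketch above, might look as follows; feeding the stage-1 prediction back alongside the original image is one plausible realization of the "enhanced input" described here, not necessarily the paper's exact wiring.

```python
# Cascade sketch: the first FPUNet's output becomes an enhanced input
# (image + coarse map) for the second FPUNet, which refines it.
import torch
import torch.nn as nn

class CascadeFPUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = FPUNet(in_ch=1, out_ch=1)   # coarse feature extraction/segmentation
        self.stage2 = FPUNet(in_ch=2, out_ch=1)   # refinement on image + stage-1 map

    def forward(self, x):                          # x: (N, 1, 512, 640)
        coarse = torch.sigmoid(self.stage1(x))     # stage-1 prediction
        refined = self.stage2(torch.cat([x, coarse], dim=1))  # stage-2 corrects stage-1 errors
        return coarse, refined

# Usage: both outputs can be supervised during training (deep supervision).
model = CascadeFPUNet()
coarse, refined = model(torch.randn(1, 1, 512, 640))
```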
4.1 Dataset description
This work makes use of the public Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), whose images are stored in DICOM format. It provides 1,546 cases categorized as BENIGN, MALIGNANT, and BENIGN_WITHOUT_CALLBACK: 528 BENIGN cases, 544 MALIGNANT cases, and 474 BENIGN_WITHOUT_CALLBACK cases. These data are divided into 70% for training and 30% for testing.
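A stratified 70/30 split that preserves the 528/544/474 class proportions can be sketched as follows; the file names and label listing are hypothetical stand-ins for the local dataset layout.

```python
# Hedged sketch of the 70/30 train/test split over the three CBIS-DDSM labels.
from sklearn.model_selection import train_test_split

LABELS = ["BENIGN", "MALIGNANT", "BENIGN_WITHOUT_CALLBACK"]
# Hypothetical (path, label) pairs standing in for the converted PNG files.
samples = [(f"case_{i:04d}.png", LABELS[i % 3]) for i in range(1546)]
paths, labels = map(list, zip(*samples))

# Stratification keeps the class proportions identical in both splits.
train_x, test_x, train_y, test_y = train_test_split(
    paths, labels, test_size=0.30, stratify=labels, random_state=42)
print(len(train_x), len(test_x))  # approximately 1082 train / 464 test
```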
4.2 Pre-processing
The dataset is first converted from DICOM format to PNG format. The RGB values of the images are then equalized using a grayscale conversion. The foreground and background are then separated as part of contrast sharpening, and the histogram equalization function is applied to sharpen the contrast.
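The chain can be sketched as follows with pydicom and OpenCV; the Otsu thresholding used here for foreground/background separation and the intensity normalization details are assumptions, since the text does not specify them.

```python
# Hedged pre-processing sketch: DICOM -> grayscale PNG with foreground
# masking and histogram equalization.
import cv2
import numpy as np
import pydicom

def preprocess(dicom_path, png_path):
    ds = pydicom.dcmread(dicom_path)                 # read DICOM pixel data
    img = ds.pixel_array.astype(np.float32)
    # Normalize to 8-bit grayscale
    img = (255 * (img - img.min()) / (img.max() - img.min() + 1e-8)).astype(np.uint8)
    # Foreground/background separation via Otsu thresholding (assumed choice)
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    img = cv2.bitwise_and(img, mask)
    img = cv2.equalizeHist(img)                      # histogram equalization sharpens contrast
    cv2.imwrite(png_path, img)                       # store as PNG
```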
4.3 Proposed methodology
4.3.1 Cascade FPUNet-based feature extraction
To perform feature extraction, the novel cascade FPUNet architecture is proposed. The FPUNet technique, discussed in the previous section, is a combination of the UNet model and the FPN model.
The proposed cascade FPUNet architecture, which consists of two FPUNet techniques, two encoders, and two decoders, is shown in Figure 3. The resized pre-processed images are fed into the FPUNet-1 architecture [19], and the feature extraction output of FPUNet-1 is passed to FPUNet-2 by the cascade function. FPUNet-2 then preserves the original resolution while enhancing the feature extraction performance.
Initially, FPUNet-1 is trained on 512×512 dataset images. However, retaining the full original resolution requires more memory and consumes more time, so the 512×640 size is preferred in order to restore the functionality [19].
A padding operation is performed if the original image is smaller than 512×640, and if the extracted feature output of FPUNet-1 is larger than 512×640, it is resized down; the image resolution is thus adjusted for effectiveness. With this cascade method, the FPUNet-2 architecture retains 90% of the original image resolution.
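A sketch of this pad-or-resize rule for grayscale images is given below, under the reading that smaller images are zero-padded and larger stage-1 outputs are downscaled; the padding placement is an assumption.

```python
# Hedged sketch of the 512x640 pad-or-resize rule between the two FPUNets.
import cv2
import numpy as np

TARGET_H, TARGET_W = 512, 640

def fit_to_target(img):
    h, w = img.shape[:2]
    if h <= TARGET_H and w <= TARGET_W:
        # Pad to preserve spatial dimensions and reduce boundary effects
        return np.pad(img, ((0, TARGET_H - h), (0, TARGET_W - w)), mode="constant")
    # Otherwise resize down to the working resolution
    return cv2.resize(img, (TARGET_W, TARGET_H), interpolation=cv2.INTER_AREA)
```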
Figure 3. Novel Cascade FPUNet architecture
The output of FPUNet-1 is used as input to FPUNet-2, as FPUNet-1 provides both an approximate position and size information. For FPUNet-2 training, the original images are paired with an immediate replica of the FPUNet-1 results, and elliptical fitting, cutoff, and geometric modification are used to refine the FPUNet-1 outputs. The geometry is translated, micro-scaled, and randomly rotated by up to 180 degrees, as sketched below.
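These geometric augmentations might be expressed with torchvision as follows; the translation and scaling magnitudes are assumed, since the text specifies only the rotation range.

```python
# Hedged sketch of the geometric modifications used for FPUNet-2 training.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomAffine(
        degrees=180,             # random rotation in [-180, 180] degrees
        translate=(0.05, 0.05),  # small random translation (assumed magnitude)
        scale=(0.95, 1.05),      # micro-scaling (assumed range)
    ),
])
# Applied to PIL images or tensors before they are fed to FPUNet-2.
```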
In this way, extracted feature outputs are achieved effectively by the novel cascade FPUNet technique.
4.3.2 LO-based feature selection
Feature selection is carried out after feature extraction in order to choose the particular features for the classification process. The feature selection operation reduces the processing cost of the features and removes redundant ones, and a metaheuristic optimization model is employed to select the crucial and necessary features. In this work, LO is employed for feature selection [19].
Lemur optimizer
The LO model is founded on two lemur locomotor behaviors, namely leap up and dance hub, which map onto the two phases of exploration and exploitation. During the exploration stage, the leap-up behavior is used to identify the best lemur location inside the search space, while during the exploitation stage the dance-hub behavior selects the best adjacent location in one direction [20].
The LO model is derived mathematically from these ideas. Each lemur solution is a vector with its own coordinates, and each lemur's quality is determined by its fitness function; the lemurs adjust their position vectors in accordance with these values. The dance hub moves a lemur toward the best nearest lemur, while the leap up moves it toward the global best lemur. Algorithm 1 presents the pseudocode of the LO model.
Algorithm 1: Pseudocode of the LO model
Input parameters: Number of iterations, Number of dimensions (Dim), Number of solutions, Lower Bound (LB), Upper Bound (UB)
Initialize the population randomly in the search space
while the current iteration ≤ the number of iterations do
    Determine the objective function
    Evaluate the free risk rate
    Generate the Global Best Lemur
    for each lemur indexed by i do
        Evaluate the Best Nearest Lemur
        for every decision variable of i indexed by j do
            Set rand ← random([0, 1])
            if rand < JumpingRate then
                Update decision variable j by case 1
            else
                Update decision variable j by case 2
            end if
        end for
    end for
end while
Return the global best
The best global result is obtained by using the LO model for the feature selection task. The selected features are then passed on for classification.
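A hedged Python sketch of LO-driven wrapper feature selection is given below. The update rules follow the leap-up/dance-hub structure of Algorithm 1, with a decreasing free-risk-rate schedule adapted from the LO paper [19]; the wrapper fitness (cross-validated error plus a subset-size penalty) is an assumed design, not the paper's stated objective.

```python
# Hedged sketch of LO-based wrapper feature selection.
import numpy as np
from sklearn.model_selection import cross_val_score

def lemur_optimizer(fitness, dim, n_lemurs=20, iters=50, lb=0.0, ub=1.0,
                    hrr=0.7, lrr=0.2, seed=0):
    """Minimize `fitness` over [lb, ub]^dim; return a binary feature mask."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (n_lemurs, dim))         # random initial population
    cost = np.array([fitness(x) for x in pop])
    for t in range(iters):
        frr = hrr - t * (hrr - lrr) / iters            # decreasing free risk rate
        gbest = pop[np.argmin(cost)].copy()            # global best lemur
        order = np.argsort(cost)                       # ranking by fitness
        for i in range(n_lemurs):
            rank = int(np.where(order == i)[0][0])
            bnl = pop[order[max(rank - 1, 0)]].copy()  # best nearest lemur
            for j in range(dim):
                r = rng.random()
                if r < frr:                            # case 1: dance hub
                    pop[i, j] += abs(pop[i, j] - bnl[j]) * (r - 0.5) * 2
                else:                                  # case 2: leap up
                    pop[i, j] += abs(pop[i, j] - gbest[j]) * (r - 0.5) * 2
            pop[i] = np.clip(pop[i], lb, ub)
            cost[i] = fitness(pop[i])
    return pop[np.argmin(cost)] > 0.5                  # threshold positions at 0.5

def make_fitness(X, y, clf_factory, alpha=0.01):
    """Cross-validated error on the selected subset plus a size penalty."""
    def fitness(pos):
        mask = pos > 0.5
        if not mask.any():
            return 1.0                                 # reject empty selections
        acc = cross_val_score(clf_factory(), X[:, mask], y, cv=3).mean()
        return (1 - acc) + alpha * mask.mean()
    return fitness
```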
4.3.3 AdaBoost-BPNN-based classification
The hybrid of AdaBoost and BPNN is applied to the classification task to achieve effective prediction performance. In the AdaBoost approach, several weak classifiers are combined to obtain the best classification performance. Accordingly, the AdaBoost-BPNN model in this study employs a BPNN as the initial weak classifier and then uses the outputs of several BPNN weak classifiers in a recurrent training procedure. The BPNN weak classifiers are finally combined by the AdaBoost model into a robust strong classifier.
Figure 4. AdaBoost-BPNN classifier
The AdaBoost-BPNN classifier, which uses n BPNN models to achieve classification, is depicted in Figure 4. The BPNN model is fault-tolerant, self-learning, and sufficiently generalizable, and such models perform well on datasets intended for training and learning. A 60-19-10 network structure is assumed to classify the selected 60-dimensional feature input: the tansig function handles the network's hidden layer, while the purelin function handles the output layer. Every weak classifier uses a different subset of data attributes as training samples. In this work, 15 BPNN weak classifiers with associated weights are obtained, and all of these weak classifiers are combined to create the best classifier for the image classification problem. As a result, the classification performs better than the other traditional variants.
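The boosting loop can be sketched as follows. Since scikit-learn's MLPClassifier (used here as a stand-in for the BPNN) does not accept sample weights, the reweighting is approximated by weighted resampling; binary 0/1 labels are assumed, and the hidden-layer width of 19 only loosely mirrors the 60-19-10 structure.

```python
# Hedged sketch of an AdaBoost loop over small MLPs (BPNN stand-ins).
import numpy as np
from sklearn.neural_network import MLPClassifier

def adaboost_bpnn(X, y, n_rounds=15, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)                       # uniform initial sample weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=n, p=w)          # weighted bootstrap resample
        clf = MLPClassifier(hidden_layer_sizes=(19,), max_iter=300,
                            random_state=0).fit(X[idx], y[idx])
        pred = clf.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # weak-classifier weight
        w *= np.exp(alpha * np.where(pred != y, 1.0, -1.0))
        w /= w.sum()                              # renormalize sample weights
        learners.append(clf)
        alphas.append(alpha)

    def predict(Xq):
        # Weighted majority vote of the boosted weak classifiers
        votes = sum(a * np.where(c.predict(Xq) == 1, 1, -1)
                    for a, c in zip(alphas, learners))
        return (votes > 0).astype(int)
    return predict
```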
5. Experimental results and discussion

The performance of the proposed procedures is assessed through a discussion of the experimental data. Breast cancer predictions use the CBIS-DDSM database, with 70% of the data for training and 30% for testing. Precision, recall, accuracy, F1 value, root mean square error (RMSE), and mean absolute error (MAE) metrics are used to assess the proposed performance, and to demonstrate its efficacy, the proposed results are contrasted with those of the existing methods.
$Precision =\frac{T^{+}}{T^{+}+F^{+}}$ (1)
$Recall =\frac{T^{+}}{T^{+}+F^{-}}$ (2)
$Accuracy=\frac{T^{+}+T^{-}}{T^{+}+T^{-}+F^{+}+F^{-}}$ (3)
$F1\ value =\frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision}+\text{Recall}}$ (4)
$RMSE =\sqrt{\frac{1}{n} \sum_{a=1}^n(\text { predicted cases }- \text { Actual cases })^2}$ (5)
$MAE =\frac{1}{n} \sum_{a=1}^n|\text{predicted cases}-\text{Actual cases}|$ (6)
where $T^{+}$ and $T^{-}$ represent true positives and true negatives, $F^{+}$ and $F^{-}$ denote false positives and false negatives, and $n$ denotes the number of patients.
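For reference, Eqs. (1)-(6) can be computed as follows; the label and case-count vectors are toy values for illustration only.

```python
# Minimal sketch computing Eqs. (1)-(6) with scikit-learn and NumPy.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, mean_absolute_error,
                             precision_score, recall_score)

y_true = np.array([1, 0, 1, 1, 0, 1])   # toy labels (1 = malignant)
y_pred = np.array([1, 0, 1, 0, 0, 1])

print("Precision:", precision_score(y_true, y_pred))   # Eq. (1)
print("Recall:   ", recall_score(y_true, y_pred))      # Eq. (2)
print("Accuracy: ", accuracy_score(y_true, y_pred))    # Eq. (3)
print("F1 value: ", f1_score(y_true, y_pred))          # Eq. (4)

# RMSE and MAE over predicted vs. actual case counts (Eqs. 5-6)
predicted_cases = np.array([120.0, 98.0, 110.0])
actual_cases = np.array([118.0, 101.0, 112.0])
print("RMSE:", np.sqrt(np.mean((predicted_cases - actual_cases) ** 2)))
print("MAE: ", mean_absolute_error(actual_cases, predicted_cases))
```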
Table 1 and Figure 5 present the performance metrics of the proposed technique and the existing techniques, namely FPN, UNet, AdaBoost-BPNN, Auto Encoder, DenseNet, ShuffleNet, and CNN. From the experimental evaluation, the proposed method achieves superior results over all the existing techniques, with a precision of 97.15%, recall of 98.77%, accuracy of 97.08%, F1 value of 94.14%, RMSE of 1.7386, and MAE of 2.119.
Table 1. Performance metrics result of the proposed technique
Techniques | Precision (%) | Recall (%) | Accuracy (%) | F1 Value (%) | RMSE | MAE
Proposed | 97.15 | 98.77 | 97.08 | 94.14 | 1.7386 | 2.119
FPN | 95.27 | 95.52 | 96.76 | 91.02 | 3.121 | 3.742
UNet | 94.87 | 94.32 | 94.95 | 90.54 | 3.304 | 4.720
AdaBoost-BPNN | 94.34 | 93.30 | 94.10 | 89.51 | 3.690 | 5.674
Auto Encoder | 93.54 | 93.19 | 93.89 | 89.12 | 4.923 | 5.918
DenseNet | 92.89 | 92.84 | 93.34 | 88.86 | 5.212 | 6.103
ShuffleNet | 91.43 | 93.32 | 92.01 | 88.45 | 5.876 | 6.432
CNN | 90.98 | 91.52 | 91.89 | 87.12 | 6.021 | 6.891
Figure 5. Overall result for proposed and existing techniques
Figure 6. Precision results for proposed and existing
Figure 7. Recall results for proposed and existing
Figure 6 displays the precision performance of the proposed and existing approaches. The proposed approach clearly attains the maximum precision of 97.15%, whereas the individual results for FPN, UNet, AdaBoost-BPNN, Auto Encoder, DenseNet, ShuffleNet, and CNN are 95.27%, 94.87%, 94.34%, 93.54%, 92.89%, 91.43%, and 90.98%.
Figure 7 displays the recall performance metrics for the proposed and existing methods. The proposed approach clearly attains a higher recall value of 98.77%, while the individual recall values for FPN, UNet, AdaBoost-BPNN, Auto Encoder, DenseNet, ShuffleNet, and CNN are 95.52%, 94.32%, 93.30%, 93.19%, 92.84%, 93.32%, and 91.52%.
Figure 8. Accuracy metrics of proposed and existing techniques
Figure 9. F1 value result for proposed and existing techniques
Figure 10. RMSE result for proposed and existing techniques
Figure 11. MAE result for proposed and existing techniques
Table 2. Ablation analysis of the proposed model
Configuration | Precision (%) | Recall (%) | Accuracy (%) | F1 Score (%) | RMSE | MAE
Baseline (Single FPN) | 91.45 | 91.78 | 91.22 | 87.92 | 6.002 | 6.751
Single FPN | 93.52 | 93.89 | 93.31 | 89.21 | 4.910 | 5.870
Double FPN | 95.68 | 96.11 | 95.22 | 92.10 | 3.312 | 3.890
Double FPN + Optimizer | 96.54 | 97.10 | 96.32 | 93.45 | 2.119 | 2.742
Double FPN + Optimizer + Classifier | 97.15 | 98.77 | 97.08 | 94.14 | 1.7386 | 2.119
Table 3. Statistical analysis of the proposed model
Metric | Mean (Proposed) | Mean (FPN) | T-Statistic | P-Value (T-Test) | W-Statistic | P-Value (Wilcoxon) | Interpretation (α = 0.05) | Cohen's d
Precision (%) | 97.15 | 95.27 | 5.87 | < 0.001 | 0 | < 0.001 | Significant improvement | 1.47
Recall (%) | 98.77 | 95.52 | 6.42 | < 0.001 | 0 | < 0.001 | Significant improvement | 1.61
Accuracy (%) | 97.08 | 96.76 | 4.35 | 0.002 | 1 | 0.004 | Significant improvement | 1.09
F1-score (%) | 94.14 | 91.02 | 5.20 | 0.001 | 0 | < 0.001 | Significant improvement | 1.30
The accuracy metrics of the proposed and existing procedures are displayed in Figure 8. The proposed approach evidently achieves the highest accuracy of 97.08%, while the respective accuracies for FPN, UNet, AdaBoost-BPNN, Auto Encoder, DenseNet, ShuffleNet, and CNN are 96.76%, 94.95%, 94.10%, 93.89%, 93.34%, 92.01%, and 91.89%.
Figure 9 displays the F1 value outcomes for the proposed and existing methods. The proposed technique evidently produces the higher F1 value of 94.14%, while FPN, UNet, AdaBoost-BPNN, Auto Encoder, DenseNet, ShuffleNet, and CNN achieve 91.02%, 90.54%, 89.51%, 89.12%, 88.86%, 88.45%, and 87.12%, respectively.
The RMSE result for the suggested and current approaches is displayed in Figure 10. It is evident that the suggested approach produced a minimum error value of 1.7386 for the RMSE rate, while the corresponding values for FPN, UNet, AdaBoost-BPNN, Auto Encoder, DenseNet, ShuffleNet, and CNN were 3.121, 3.304, 3.690, 4.923, 5.212, 5.876, and 6.021.
The MAE result for both suggested and current strategies is displayed in Figure 11. The suggested approach has clearly achieved a minimum error value of 2.119 for the MAE rate, while the equivalent values for FPN, UNet, AdaBoost-BPNN, Auto Encoder, DenseNet, ShuffleNet, and CNN are 3.742, 4.720, 5.674, 5.918, 6.103, 6.432, and 6.891.
To evaluate the individual contributions of its components, ablation studies are carried out. Table 2 presents the resulting breakdown analysis of the proposed method.
The table shows that cascading two FPNs supports hierarchical feature extraction, helping the model capture intricate patterns in the image. The improvement from Single FPN to Double FPN indicates that deeper feature integration is essential for performance enhancement. The optimizer refines the training process by minimizing errors more effectively, and the classifier further improves the model's decision-making ability, resulting in a significant boost in F1-score and overall accuracy.
To validate the improvements achieved by the proposed method, statistical tests are conducted on the key performance metrics; the results are given in Table 3. The mean values denote the average metric values for the proposed method and for FPN, the best existing method, across multiple runs. The t-statistic and p-value (t-test) are the results of the paired t-test, which assesses whether the mean differences between the proposed method and FPN are statistically significant; a p-value < 0.05 indicates significance. The W-statistic and p-value (Wilcoxon) result from the Wilcoxon signed-rank test, a non-parametric alternative for validating the significance of paired data; again, a p-value < 0.05 confirms statistical significance. The table shows that the proposed method achieves a statistically significant improvement over FPN at a significance level of α = 0.05. Cohen's d measures the effect size, and the proposed model achieves values greater than 0.8, indicating large and practically meaningful improvements.
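These tests can be reproduced as follows with SciPy, assuming per-run metric values are available for both methods; the listed values are hypothetical illustrations, not the paper's run-level data.

```python
# Hedged sketch of the paired significance tests and effect size in Table 3.
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

proposed = np.array([97.1, 97.3, 96.9, 97.2, 97.0])  # hypothetical per-run accuracies
fpn      = np.array([96.7, 96.9, 96.5, 96.8, 96.6])

t_stat, p_t = ttest_rel(proposed, fpn)               # paired t-test
w_stat, p_w = wilcoxon(proposed, fpn)                # Wilcoxon signed-rank test
diff = proposed - fpn
cohens_d = diff.mean() / diff.std(ddof=1)            # effect size for paired data
print(t_stat, p_t, w_stat, p_w, cohens_d)
```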
6. Conclusion

In recent decades, the death rate from breast cancer has increased by 15% due to delayed diagnosis and treatment. Efficient feature extraction, feature selection, and classification are carried out for mammography-based breast cancer analysis in order to achieve an accurate and earlier prognosis. In this work, the novel cascade FPUNet technique is used for feature extraction, achieving a more successful outcome through the hybridization and cascading of two well-known models. Subsequently, the LO technique yields the best outcome for feature selection, reducing the number of redundant features and choosing the important characteristics for classification. AdaBoost-BPNN then completes the classification task on the reduced feature set, yielding the best classifier. The experimental results demonstrate effective performance in the prediction of breast cancer, with an F1 value of 94.14%, RMSE of 1.7386, accuracy of 97.08%, precision of 97.15%, and recall of 98.77%. Consequently, the proposed strategy outperforms every previously used method considered here.
The proposed method requires significant computational resources due to the cascading architecture and complex optimization steps, which may limit its scalability for real-time applications. Future efforts will focus on optimizing the architecture for real-time deployment by reducing the computational overhead.
References

[1] Mohamed, A.A., Berg, W.A., Peng, H., Luo, Y., Jankowitz, R.C., Wu, S. (2018). A deep learning method for classifying mammographic breast density categories. Medical Physics, 45(1): 314-321. https://doi.org/10.1002/mp.12683
[2] Mohamed, A.A., Luo, Y., Peng, H., Jankowitz, R.C., Wu, S. (2018). Understanding clinical mammographic breast density assessment: A deep learning perspective. Journal of Digital Imaging, 31: 387-392. https://doi.org/10.1007/s10278-017-0022-2
[3] Becker, A.S., Marcon, M., Ghafoor, S., Wurnig, M.C., Frauenfelder, T., Boss, A. (2017). Deep learning in mammography: Diagnostic accuracy of a multipurpose image analysis software in the detection of breast cancer. Investigative Radiology, 52(7): 434-440. https://doi.org/10.1097/RLI.0000000000000358
[4] Cheng, J.Z., Ni, D., Chou, Y.H., Qin, J. et al. (2016). Computer-aided diagnosis with deep learning architecture: Applications to breast lesions in US images and pulmonary nodules in CT scans. Scientific Reports, 6(1): 24454. https://doi.org/10.1038/srep24454
[5] Gao, F., Wu, T., Li, J., Zheng, B., Ruan, L., Shang, D., Patel, B. (2018). SD-CNN: A shallow-deep CNN for improved breast cancer diagnosis. Computerized Medical Imaging and Graphics, 70: 53-62. https://doi.org/10.1016/j.compmedimag.2018.09.004
[6] Hamidinekoo, A., Denton, E., Rampun, A., Honnor, K., Zwiggelaar, R. (2018). Deep learning in mammography and breast histology, an overview and future trends. Medical Image Analysis, 47: 45-67. https://doi.org/10.1016/j.media.2018.03.006
[7] Suk, H.I., Lee, S.W., Shen, D., Alzheimer’s Disease Neuroimaging Initiative. (2015). Latent feature representation with stacked auto-encoder for AD/MCI diagnosis. Brain Structure and Function, 220: 841-859. https://doi.org/10.1007/s00429-013-0687-3
[8] Chen, C., Wang, Y., Niu, J., Liu, X., Li, Q., Gong, X. (2021). Domain knowledge powered deep learning for breast cancer diagnosis based on contrast-enhanced ultrasound videos. IEEE Transactions on Medical Imaging, 40(9): 2439-2451. https://doi.org/10.1109/TMI.2021.3078370
[9] Alexandrou, G., Moser, N., Mantikas, K.T., Rodriguez-Manzano, J., Ali, S., Coombes, R.C., Shaw, J., Georgiou, P., Toumazou, C., Kalofonou, M. (2021). Detection of multiple breast cancer ESR1 mutations on an ISFET based lab-on-chip platform. IEEE Transactions on Biomedical Circuits and Systems, 15(3): 380-389. https://doi.org/10.1109/TBCAS.2021.3094464
[10] Singh, D., Singh, A.K., Tiwari, S. (2021). Breast thermography as an adjunct tool to monitor the chemotherapy response in a triple negative BIRADS V cancer patient: A case study. IEEE Transactions on Medical Imaging, 41(3): 737-745. https://doi.org/10.1109/TMI.2021.3122565
[11] Liu, P., Fu, B., Yang, S.X., Deng, L., Zhong, X., Zheng, H. (2020). Optimizing survival analysis of XGBoost for ties to predict disease progression of breast cancer. IEEE Transactions on Biomedical Engineering, 68(1): 148-160. https://doi.org/10.1109/TBME.2020.2993278
[12] Misra, S., Jeon, S., Managuli, R., Lee, S. et al. (2021). Bi-modal transfer learning for classifying breast cancers via combined b-mode and ultrasound strain imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 69(1): 222-232. https://doi.org/10.1109/TUFFC.2021.3119251
[13] Iliopoulos, I., Di Meo, S., Pasian, M., Zhadobov, M. et al. (2020). Enhancement of penetration of millimeter waves by field focusing applied to breast cancer detection. IEEE Transactions on Biomedical Engineering, 68(3): 959-966. https://doi.org/10.1109/TBME.2020.3014277
[14] Naseem, U., Rashid, J., Ali, L., Kim, J., Haq, Q.E.U., Awan, M.J., Imran, M. (2022). An automatic detection of breast cancer diagnosis and prognosis based on machine learning using ensemble of classifiers. IEEE Access, 10: 78242-78252. https://doi.org/10.1109/ACCESS.2022.3174599
[15] Hou, R., Peng, Y., Grimm, L.J., Ren, Y., Mazurowski, M.A., Marks, J.R. (2021). Anomaly detection of calcifications in mammography based on 11,000 negative cases. IEEE Transactions on Biomedical Engineering, 69(5): 1639-1650. https://doi.org/10.1109/TBME.2021.3126281
[16] Lee, J., Nishikawa, R.M. (2021). Identifying women with mammographically-occult breast cancer leveraging GAN-simulated mammograms. IEEE Transactions on Medical Imaging, 41(1): 225-236. https://doi.org/10.1109/TMI.2021.3108949
[17] Liu, Y.P., Rui, X., Li, Z., Zeng, D., Li, J., Chen, P., Liang, R. (2021). Feature pyramid U-Net for retinal vessel segmentation. IET Image Processing, 15(8): 1733-1744. https://doi.org/10.1049/ipr2.12142
[18] Zhang, Y., Lai, H., Yang, W. (2021). Cascade UNet and CH-UNet for thyroid nodule segmentation and benign and malignant classification. In Segmentation, Classification, and Registration of Multi-modality Medical Imaging Data, pp. 129-134. https://doi.org/10.1007/978-3-030-71827-5_17
[19] Abasi, A.K., Makhadmeh, S.N., Al-Betar, M.A., Alomari, O.A. et al. (2022). Lemurs optimizer: A new metaheuristic algorithm for global optimization. Applied Sciences, 12(19): 10057. https://doi.org/10.3390/app121910057
[20] Cao, J., Chen, J., Li, H. (2014). An AdaBoost-backpropagation neural network for automated image sentiment classification. The Scientific World Journal, 2014(1): 364649. https://doi.org/10.1155/2014/364649