Design of an Efficient Multimodal Correlation Engine for Smart IoMT Imaging System for Pre-Emptive Analysis of Breast Cancer Stages

Suresh Limkar*, Wankhede Vishal Ashok, Apashabi Pathan, Vinod Wadne, Raenu Kolandaisamy, Santosh Lavate

Department of Computer Science & Engineering, Central University of Jammu, J&K 181143, India

Department of Electronics & Telecommunication Engineering, SNJBs Shri. Hiralal Hastimal (Jain Brothers, Jalgaon), Polytechnic, Nashik 423101, India

Department of Information Technology, G H Raisoni College of Engineering and Management, Pune 412207, India

Department of Computer Engineering, JSPM's Imperial College of Engineering and Research, Pune 412207, India

Institute of Computer Science and Digital Innovation, UCSI University, Kuala Lumpur 56000, Malaysia

Department of Electronics & Telecommunication Engineering, AISSMS College of Engineering, Pune 411001, India

Corresponding Author Email: sureshlimkar@gmail.com

Pages: 2365-2380 | DOI: https://doi.org/10.18280/ts.410512

Received: 5 September 2023 | Revised: 7 March 2024 | Accepted: 15 April 2024 | Available online: 31 October 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Early breast cancer detection and diagnosis remain difficult tasks in the contemporary era of precision medicine, especially in resource-limited areas. Existing imaging techniques such as mammography, ultrasound, and thermal imaging provide valuable information, but each one's potential is often constrained by high costs, radiation exposure, or variable accuracy. To combine data from mammography, ultrasound, optical imaging, and thermal imaging scans, this research proposes a multimodal correlation engine for the Internet of Medical Things (IoMT). The resulting technology makes pre-emptive breast cancer analysis effective, affordable, and highly accurate across different scenarios. The study uses the VGGNet 19 architecture for mammography data, but replaces the fully connected layer with an ensemble of classifiers comprising Naive Bayes, k-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Logistic Regression (LR). This strategy ensures a robust and diverse learning process. Ultrasound scans are translated into multidimensional data using Frequency and iVector Analysis and then classified into cancer probabilities by Radial Basis Function Networks (RBFNs), which offer a flexible, non-linear classification method. The classified cancer levels are processed on the IoMT cloud, which supports incremental improvements in the model's performance in real-time scenarios. This performance was tested on the Breast Ultrasound Image (BUSI) dataset, Breast Thermal Image (THERMO) dataset, and Digital Database for Screening Mammography (DDSM) dataset samples, where this multifaceted approach showed a significant increase in precision (12.5%), accuracy (14.9%), Area Under the Curve (AUC, 8.5%), sensitivity (9.4%), and specificity (10.5%) compared with recent methods. By combining many imaging modalities into a single, effective, and powerful diagnostic tool, the proposed approach opens up new possibilities in the early identification of breast cancer types, with potential implications for breast cancer pre-emption, especially in environments where resources and access to care are limited.

Keywords: 

multimodal correlation engine, Internet of Medical Things (IoMT), breast cancer detection, imaging modality fusion, machine learning classifiers

1. Introduction

Worldwide, breast cancer accounts for roughly one in three cancers diagnosed in women and is the most frequent malignancy among them [1]. As a result, it continues to be a serious public health problem and a major contributor to cancer-related morbidity and mortality. Despite considerable improvements in treatment, improving patient outcomes and lowering mortality rates still depends on early detection and precise diagnosis [2]. Traditional techniques for detecting breast cancer, such as mammography, ultrasound, optical imaging, and thermal imaging, each have their own advantages and unique viewpoints. They do, however, have drawbacks, such as high costs, the potential for dangerous radiation exposure, and variable accuracy rates [3].

In order to provide a more thorough and accurate evaluation, the Internet of Medical Things (IoMT) offers a creative solution to this problem [4]. It enables the integration and analysis of data from various sources. This innovative method makes it possible to combine different imaging modalities, taking advantage of each one's strengths while compensating for its weaknesses. IoMT has great potential for the healthcare industry, but its integration with multimodal imaging for early breast cancer diagnosis is still understudied.

Breast cancer remains one of the most prevalent and formidable health challenges worldwide, exerting a significant toll on individuals, families, and healthcare systems. In the face of its pervasive impact, the importance of accurate breast cancer diagnosis cannot be overstated.

(1) Early detection, better prognosis

Early detection is paramount in the fight against breast cancer. Timely identification of the disease enables prompt initiation of treatment, which is pivotal in improving patient outcomes and increasing survival rates. Accurate diagnosis at an early stage allows for less invasive treatment options, reducing the need for aggressive interventions and potentially mitigating the physical and emotional burden on patients.

(2) Tailored treatment strategies

Accurate diagnosis lays the foundation for personalized treatment strategies tailored to individual patients. This approach enhances treatment efficacy while minimizing adverse effects, optimizing the balance between therapeutic benefits and patient well-being.

(3) Minimization of overtreatment and undertreatment

Accurate diagnosis serves as a safeguard against both overtreatment and undertreatment, ensuring that patients receive the appropriate level of care based on their specific disease status. Overtreatment, characterized by the unnecessary administration of aggressive therapies, poses risks of toxicity and long-term side effects without commensurate benefits. Conversely, undertreatment, resulting from diagnostic inaccuracies or delays, may compromise patient outcomes by allowing the disease to progress unchecked. Accurate diagnosis mitigates these risks by guiding clinicians in selecting the most suitable treatment approach for each patient's unique circumstances.

(4) Preservation of quality of life

Accurate breast cancer diagnosis contributes to the preservation of patients' quality of life by minimizing the physical, emotional, and psychological impact of the disease and its treatment. By enabling the implementation of less invasive interventions and reducing the likelihood of treatment-related complications, accurate diagnosis supports patients in maintaining their functional independence, social connections, and overall well-being throughout the treatment journey.

(5) Resource optimization in healthcare

In addition to its clinical benefits, accurate breast cancer diagnosis plays a crucial role in optimizing resource allocation within healthcare systems. By avoiding unnecessary diagnostic procedures, treatments, and hospitalizations associated with misdiagnosis or delayed diagnosis, accurate diagnosis helps conserve healthcare resources, reduce healthcare costs, and enhance the efficiency of healthcare delivery. This ensures that resources are directed towards areas of greatest need, thereby maximizing the overall effectiveness and sustainability of healthcare services.

In this study, an effective multimodal correlation engine for the IoMT that combines data from mammography, ultrasound, optical imaging, and thermal imaging is introduced. To maximize the potential of the data each modality provides, cutting-edge machine learning techniques are used to process the data samples. The ultrasound scans are examined using Frequency and iVector Analysis, the optical and thermal images are segmented using Saliency Maps, and the mammograms are processed using an ensemble learning classifier combined with the VGGNet 19 architecture. The segmented images are then classified using RBFNs.

Across a number of criteria, the proposed model performs better than the alternatives. On the Breast Ultrasound Image (BUSI) dataset, Breast Thermal Image (THERMO) dataset, and Digital Database for Screening Mammography (DDSM) dataset samples, this includes a 12.5% increase in precision, 14.9% higher accuracy, 12.9% increased recall, 8.5% higher Area Under the Curve (AUC), 9.4% improved sensitivity, and 10.5% better specificity compared with recent methods. This suggests that the system offers improved performance over current methods in addition to a novel approach to breast cancer diagnosis.

1.1 Key contributions

The research study makes several notable contributions to the field of breast cancer diagnosis and analysis. Firstly, it introduces a novel multimodal correlation engine designed specifically for proactive breast cancer analysis within the Internet of Medical Things (IoMT) framework. This innovative approach leverages the complementary strengths of various imaging modalities to enhance diagnostic accuracy and early detection capabilities.

Secondly, the study proposes an integrated machine learning framework that combines advanced algorithms with multimodal analysis techniques. By incorporating ensemble learning classifiers, deep neural networks, and non-linear classification methods, the proposed framework offers a robust and versatile approach to breast cancer diagnosis, capable of handling complex and heterogeneous data.

Thirdly, the research highlights the potential impact of the proposed model on clinical practice and patient outcomes. Through rigorous validation and evaluation, the study demonstrates the superior performance of the multimodal correlation engine compared to existing methods, offering tangible improvements in precision, accuracy, sensitivity, and specificity across multiple datasets.

1.2 Major objectives

The primary objective of the research is to develop and validate a comprehensive multimodal correlation engine for proactive breast cancer analysis. This entails integrating data from diverse imaging modalities, including mammography, ultrasound, optical imaging, and thermal imaging, to provide a holistic assessment of breast cancer characteristics and stages.

Another key objective is to employ advanced machine learning techniques to process and analyze the multimodal data effectively. By utilizing ensemble learning classifiers, deep neural networks, and non-linear classification methods, the study aims to extract meaningful insights from complex imaging datasets and improve diagnostic accuracy.

Furthermore, the research seeks to evaluate the performance of the proposed multimodal correlation engine through rigorous validation and comparison with existing methods. By conducting extensive experiments on real-world datasets, the study aims to demonstrate the superiority of the proposed model in terms of diagnostic precision, accuracy, and overall efficacy.

The research aims to address the critical need for accurate and proactive breast cancer analysis by developing a novel multimodal correlation engine within the IoMT framework. Through its contributions and objectives, the study endeavors to advance the field of breast cancer diagnosis and pave the way for improved patient outcomes and clinical practice.

The rest of this paper is organized as follows: Section 2 presents the motivation and objectives, Section 3 reviews existing models, Section 4 thoroughly describes the proposed multimodal correlation engine, Section 5 presents the evaluation results and comparisons with existing techniques, and the paper concludes with future scope and potential improvements.

2. Motivation and Objectives

The need to investigate an integrated, multimodal approach to early detection and diagnosis is driven by the rising incidence of breast cancer as well as the unsatisfactory performance of individual diagnostic modalities and their inherent constraints. Even though they are all extremely helpful, the current imaging techniques for detecting breast cancer, including mammography, ultrasound, optical imaging, and thermal imaging, each present unique difficulties. High costs, the danger of radiation exposure, and variable sensitivity and specificity, which can result in false-positive or false-negative results, are just a few of the downsides. Additionally, in low-resource environments, access to a variety of imaging modalities may be restricted, which can result in missed or late-stage diagnoses [5, 6].

An opportunity to address these issues has arisen with the introduction of the Internet of Medical Things (IoMT). We can take advantage of each imaging modality's strengths while compensating for its deficiencies by combining numerous imaging modalities into a single correlation engine for clinical scenarios [7-9]. By enhancing precision, accuracy, and overall diagnostic power, this fusion can aid in the early and precise diagnosis of breast cancer. Due to the large complexity and variety of the data, however, the efficient and effective integration of these various modalities into an IoMT system continues to be a challenging issue.

Our goal is to create a multimodal correlation engine that is effective at combining the data from these various imaging modalities. For each modality, the system employs cutting-edge machine learning techniques. We propose utilizing an ensemble learning classifier integrated with the VGGNet 19 architecture for mammography. Ultrasound scans are processed using Frequency and iVector Analysis, while optical and thermal images are segmented using Saliency Maps and then classified using RBFNs.

By utilizing machine learning's capabilities for high-dimensional data analysis and categorization, this system seeks to provide a thorough and reliable analysis. By contrasting the performance of our system against that of other approaches using important measures including precision, accuracy, recall, Area Under the Curve (AUC), sensitivity, and specificity, we also aim to demonstrate the superiority of this strategy across different scenarios.

By meeting these goals, we anticipate making progress in the early diagnosis and treatment of breast cancer, which will enhance prognosis and perhaps save lives. By demonstrating the viability and effectiveness of employing multimodal imaging data inside an IoMT framework, we also want to contribute to the larger fields of medical imaging and artificial intelligence processes.

3. Review of Models Used for Pre-Emption of Breast Cancer Types

Over the past decade, there has been significant growth in the literature on the diagnosis and classification of breast cancer, with a significant emphasis on the use of cutting-edge machine learning and artificial intelligence algorithms. The varied character of breast cancer and the inherent limits of different imaging modalities have been highlighted through the exploration of a variety of analytical models and imaging modalities [10-12].

Mammography, which continues to be the gold standard in early detection, was one of the first methods for detecting breast cancer types. It has traditionally relied heavily on Computer-Aided Detection (CAD) systems, frequently employing fundamental machine learning classifiers such as Support Vector Machines (SVMs) and k-Nearest Neighbors (kNN) [13-15]. With the advent of deep learning techniques, Convolutional Neural Networks (CNNs) have been brought to mammography interpretation, showing promising outcomes. For instance, studies [16-18] showed improved sensitivity when the AlexNet CNN was used to detect microcalcifications in mammograms.

Another important method for detecting breast cancer is ultrasound, particularly for dense breast tissue. Manual analysis of ultrasound images can be time-consuming and highly operator-dependent. Models such as Frequency and iVector Analysis have been used to extract characteristics and provide a more objective analysis that addresses these issues [19, 20]. These models still need improvement because they are prone to false positives.

Although optical imaging offers a non-invasive and non-ionizing method for detecting cancer, it has historically been constrained by limited specificity [21-24]. However, these restrictions are gradually being overcome with the development of deep learning algorithms. Recurrent Neural Networks (RNNs) based on Long Short-Term Memory (LSTM) have been presented for optical image analysis, providing enhanced temporal sequence analysis via Deep CNN (DCNN) operations [25-28].

Utilizing the increased metabolic activity and vascularity of cancer cells, thermal imaging (thermography) has been used as an additional technique in identifying breast cancer types [29-32]. It is non-intrusive and provides real-time monitoring. Conventional thermal imaging has been plagued by high false-positive rates, but recent models utilizing Generative Adversarial Networks (GANs) have shown encouraging results in enhancing specificity [33-36] via ResNet and other model processes.

The accuracy of mammographic interpretations is constrained by factors including breast density and image quality [37-40]. Several machine learning models have been created to enhance detection and overcome these constraints. For instance, studies [41-44] developed a Computer-Aided Diagnosis (CAD) system that increased sensitivity by employing several SVM classifiers. However, because indolent lesions are sometimes seen through mammography, this can result in overdiagnosis and overtreatment. Deep learning techniques have been developed to counteract this, and CNNs such as AlexNet, VGGNet, and ResNet [45-48] have been successful in lowering false-positive rates.

Several deep learning approaches have shown promise for ultrasonic imaging. A fully automated CAD system for ultrasound images was proposed in studies [49, 50], which used a highly accurate CNN model for feature extraction. Nevertheless, despite these developments, the considerable variability in image quality and the complex structures shown in the images make it difficult to apply deep learning to ultrasonic imaging scenarios.

Recent developments in machine learning have expanded the potential uses for optical imaging. Optical Coherence Tomography (OCT) and deep learning algorithms were combined in studies [15, 16, 24, 25] to distinguish between benign and malignant breast tumors. By offering context-aware analysis, LSTM-based RNNs, well known for their capacity to handle sequential data, have proven effective in managing optical imaging data samples.

The adoption of sophisticated machine learning methods has improved thermal imaging as well for different scenarios. In addition to increasing the detection accuracy of thermal imaging, the GAN-based method in studies [26, 28, 45, 46] also assisted in creating synthetic yet realistic thermal images for a better model training process.

Combining these modalities is a promising strategy for overcoming individual limitations and offering an augmented and thorough set of diagnostic tools. Due to the great dimensionality and heterogeneity of the data, however, the fusion process still presents various challenges. Incorporating multimodal imaging data in breast cancer diagnosis has not been attempted very often. Using an augmented set of hybrid models, the proposal in studies [29, 30, 48, 50] to merge MRI and mammographic images showed enhanced performance.

The field of multimodal fusion in breast cancer diagnosis is still in its infancy, despite these attempts being promising when used in clinical scenarios. The merging of several imaging modalities utilizing machine learning models gives an exciting and promising route for future research as the technology and analytical skills continue to advance for different use cases.

Breast cancer diagnosis and prognosis have been areas of extensive research, driving the exploration of various methodologies and techniques. This literature review aims to analyze recent studies focusing on machine learning (ML) and imaging approaches, highlighting their methodologies, limitations, and research gaps. The review encompasses studies ranging from ML-based prediction models to imaging modalities and their integration into diagnostic frameworks.

3.1 Existing methodologies

(1) Machine Learning Techniques and Breast Cancer Prediction [1]: This comprehensive review discusses the application of machine learning techniques in breast cancer prediction. Various ML algorithms, including support vector machines (SVM), artificial neural networks (ANN), and random forests (RF), are explored in different studies for predictive modeling. Methodologies encompass feature selection, data preprocessing, and model evaluation metrics.

(2) Breast Cancer Risk Prediction Combining CNN-based Mammographic Evaluation with Clinical Factors [18]: This study proposes a breast cancer risk prediction model combining a convolutional neural network (CNN)-based mammographic evaluation with clinical factors. The methodology involves training a CNN on mammographic images and integrating clinical factors to enhance predictive accuracy.

(3) Detection of Breast Cancer Cell-MDA-MB-231 using GaN FinFET Conductivity [26]: This research employs a novel approach utilizing GaN FinFET conductivity to detect breast cancer cells. The methodology involves experimental setup, electrical characterization, and analysis of conductivity variations to distinguish cancerous cells.

3.2 Limitations of existing system

Many studies, including those utilizing ML techniques, face challenges related to dataset availability, sample size, and heterogeneity. Limited access to large, diverse datasets hinders the generalizability of models and their clinical applicability.

The integration of imaging modalities into diagnostic frameworks poses challenges in terms of standardization, interpretation, and validation. Variability in imaging protocols and interpretation standards may introduce inconsistencies and affect diagnostic accuracy.

3.3 Research gaps of the literature

While ML-based prediction models show promise, there's a need for further validation and external testing to assess their performance across diverse populations and healthcare settings. Robust clinical validation studies are essential to establish the reliability and generalizability of these models.

Integration of multi-omics data and advanced imaging techniques, such as dynamic contrast-enhanced MRI and molecular imaging, remains an area of ongoing research. Future studies could explore novel approaches for data fusion and integration to improve diagnostic accuracy and patient stratification.

Ethical considerations, including data privacy, patient consent, and algorithmic transparency, require comprehensive exploration. Future research should address these ethical concerns to ensure responsible development and deployment of ML-based diagnostic tools.

Recent advancements in machine learning and imaging technologies offer promising avenues for improving breast cancer diagnosis and prognosis. While existing studies demonstrate the potential of ML-based models and imaging modalities, further research is needed to address methodological challenges, validate findings, and enhance clinical translation. By addressing these limitations and research gaps, future studies can contribute to the development of more accurate, reliable, and ethical breast cancer diagnostic tools.

4. Proposed Design

As per the review of existing models used for pre-emption of cancer from mammograms and other scans, it can be observed that these models are highly complex when used in multimodal scenarios and have lower efficiency, which limits their applicability in clinical scenarios. To overcome these issues, this section discusses the design of the proposed multimodal technique, which assists in effective pre-emption of cancer stages. The proposed model uses the VGGNet 19 architecture for mammography scans, but replaces the fully connected layer with an ensemble of classifiers that includes Naive Bayes (NB), k-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Logistic Regression (LR). This assists in enhancing the efficiency of the VGGNet 19 model for identification of different cancer stages. After this, Radial Basis Function Networks (RBFNs) are used to classify ultrasound scans into cancer probabilities after the scans are translated into multidimensional data using Frequency and iVector features. This is combined with Saliency Maps-based segmentation, which is used to identify optimal Regions of Interest (RoI) from optical and thermal images and scans. RBFNs are then used to classify these segmented images into cancer probabilities, taking advantage of the sequential nature of the data and the capacity to generate synthetic data for improved performance levels.
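To make the mammography branch concrete, the following minimal Python sketch pairs a pretrained VGGNet 19 feature extractor with the four-member ensemble, assuming torchvision and scikit-learn; the feature dimensionality, classifier settings, and data variables are illustrative rather than the exact published configuration.

```python
# Minimal sketch (assumptions noted above): VGG19 as a frozen feature
# extractor whose fully connected head is replaced by an ensemble of
# classical classifiers, whose probabilities are averaged as in Eq. (10).
import numpy as np
import torch
from torchvision import models
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
vgg.classifier = torch.nn.Identity()     # drop the fully connected head
vgg.eval()

@torch.no_grad()
def extract_features(batch):             # batch: (B, 3, 224, 224) tensor
    return vgg(batch).cpu().numpy()      # (B, 25088) convolutional features

def fit_ensemble(feats, labels):
    members = [GaussianNB(), KNeighborsClassifier(), SVC(probability=True),
               LogisticRegression(max_iter=1000)]
    for m in members:
        m.fit(feats, labels)
    return members

def predict_stage(members, feats):
    # Average the four classifiers' class probabilities, per Eq. (10).
    probs = np.mean([m.predict_proba(feats) for m in members], axis=0)
    return probs.argmax(axis=1)
```

Averaging the four probability vectors mirrors the equal-weight fusion of Eq. (10) below.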

Figure 1. Design of the proposed VGGNet 19 model with ensemble classification process

As per Figure 1, it can be observed that the model uses multiple convolution layers with max pooling operations in order to convert mammogram images into cancer classes. The convolutional layers assist in identification of high-density features, which are represented via Eq. (1):

$\operatorname{Conv}($ out $)=\sum_{a=-\frac{m}{2}}^{\frac{m}{2}} \sum_{b=-\frac{n}{2}}^{\frac{n}{2}} \operatorname{Img}(i-a, j-b) * \operatorname{ReLU}\left(\frac{m}{2}+a, \frac{n}{2}+b\right)$         (1)

where $m, n$ represent the window dimensions and $a, b$ represent the stride offsets, while the Rectified Linear Unit (ReLU) is represented via Eq. (2):

$\operatorname{ReLU}(x)=\max (0, x)$                 (2)

This ReLU function assists in adding non-linearity to the feature extraction process. These features are extracted across the different layers shown in Figure 1, and an efficient ensemble classification model is then deployed to classify these features into cancer stages. In the ensemble classifier, an efficient Naïve Bayes (NB) model is used first, and its Prior (P) level is estimated via Eq. (3):

$P=\frac{\left(\sum_{i=1}^{N F}\left(x(i)-\sum_{j=1}^{N F} \frac{x(j)}{N F}\right)^2\right)}{N F}$                    (3)

where x represents the convolutional feature values, while NF represents the number of features extracted during the convolutional process. The Smoothing Value (SV) for the NB classifier is estimated via Eq. (4):

$S V=\frac{1}{N F * N C}$                     (4)

where NC represents the total number of cancer stage classes. Based on these hyperparameters, the Naïve Bayes model is trained and an output class is obtained for different mammographic scans. Similarly, the neighbor count k for kNN is set via Eq. (5):

$k=\operatorname{ROUND}\left(\frac{N F}{N C^2}\right)$                     (5)

Similarly, the Regularization Coefficient (CR) for SVM is estimated via Eq. (6), and the error tolerance (et) is evaluated via Eq. (7) as follows:

$C R=\frac{1}{N C}$                 (6)

$e t=\frac{1}{N F^2}$                   (7)

For Logistic Regression (LR), the class weights (W) are estimated via Eq. (8), and the maximum iterations (MI) are estimated via Eq. (9) as follows:

$W=\frac{P}{N C}$                   (8)

$M I=N C * N F$                    (9)

Using these values, the ensemble learning layer is dynamically trained, and the final output cancer stage from mammograms is estimated via Eq. (10):

$C(M)=\frac{1}{4} *[C(S V M)+C(k N N)+C(L R)+C(N B)]$                 (10)

where C(i) represents the output class produced by the i-th classifier. The class is stored and used later for analysis of final cancer stages.
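As a worked illustration of how these hyperparameters fall out of the feature set, the helper below evaluates Eqs. (3)-(9) for a given feature vector and class count; the random input in the final line is only a placeholder.

```python
import numpy as np

def ensemble_hyperparameters(x, nc):
    """x: 1-D array of NF convolutional feature values; nc: number of
    cancer-stage classes. Returns the hyperparameters of Eqs. (3)-(9)."""
    nf = len(x)
    P = np.sum((x - x.mean()) ** 2) / nf   # Eq. (3): prior as feature variance
    SV = 1.0 / (nf * nc)                   # Eq. (4): NB smoothing value
    k = int(round(nf / nc ** 2))           # Eq. (5): kNN neighbour count
    CR = 1.0 / nc                          # Eq. (6): SVM regularization coefficient
    et = 1.0 / nf ** 2                     # Eq. (7): SVM error tolerance
    W = P / nc                             # Eq. (8): LR class weight
    MI = nc * nf                           # Eq. (9): LR maximum iterations
    return {"P": P, "SV": SV, "k": k, "CR": CR, "et": et, "W": W, "MI": MI}

print(ensemble_hyperparameters(np.random.rand(25088), nc=4))
```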

After the mammograms are processed, the proposed model converts the collected ultrasound, optical, and thermal scans into RoIs via the Saliency Map segmentation process. To perform this task, each scan is initially converted into bit plane slices via Eq. (11):

$S_i=\bigcup_{r, c}^{N, M}\left(P_{r, c} \oplus 2^i\right)$                     (11)

where $S_i$ is the intensity of the $i^{th}$ bit, $P_{r,c}$ represents the pixel level at location $(r,c)$ of the bit plane slice, while $N, M$ are the dimensions of the collected image sets. Each of these bit planes is converted into the YCbCr domain, and an average image is evaluated via Eq. (12):

$I_{a v g_i}=\frac{1}{N * M} * \sum_{r, c}^{N, M} C_i$                     (12)

where, $C_i$ represents the bit plane of individual components. Using this average image, the distance between different bit planes is estimated via Eq. (13):

$d\left(s_i, s_j\right)=\frac{1}{N * M} * \sum_{i=1}^{N_S} I_{a v g_i} * \sqrt{\sum_{r, c}^{N, M} \frac{\left(P_{i_r}-P_{j_r}\right)^2+\left(P_{i_c}-P_{j_c}\right)^2}{\operatorname{Var}\left(s_i, s_j\right)}}$               (13)

where $Var$ and $d$ represent the variance and distance between bit planes, respectively. These distances are later used for evaluation of entropy levels. To estimate entropy levels, the bit planes are quantized via Eq. (14):

$P_{\text {quant }}=P_{\text {in }} * 128 / P_{\max }$                     (14)

where $P_{in}$, $P_{quant}$, and $P_{max}$ represent the input, quantized, and maximum pixel levels, respectively. Using these quantized image pixels, the colour map is generated via Eq. (15):

$C M=\bigcup_{i=1}^{N_s} \sum_{r, c}^{N, M}\left|P_{r, c}==P_{\text {quant }_{r, c}}\right|$                    (15)

Similarly, the Shape Map (SM) is evaluated via Eq. (16):

$S M=\bigcup_{i=1}^{N_S} \sum_{r, c}^{N, M}\left|\operatorname{OTSU}\left(P_{r, c}, P_{r, c+1}\right)==1\right|$                     (16)

where $\operatorname{OTSU}(\cdot)$ denotes the Otsu-thresholded image, which is obtained after identification of edges. Based on these maps, the entropy of the image is estimated via Eq. (17):

$E f(i)=-\sum_{r=1}^N \sum_{c=1}^M p\left(F_{r, c_i}\right) * \log \left(p\left(F_{r, c_i}\right)\right)$                      (17)

where $p(\cdot)$ represents the entropy probability, which is evaluated via Eq. (18):

$p(t h)=\sum_{i=1}^M \sum_{j=1}^N C M(X(i, j)) * \frac{E M(X(i, j))}{\sqrt[4]{\sum_{k=1}^M \sum_{l=1}^N d(X(i, j), X(k, l))}}$                  (18)

After this evaluation, pixel levels with p>p(th) are marked as 'foreground', while others are marked as 'background' pixels. The foreground pixels can be observed for input ultrasound images in Figure 2, where the model is able to identify different RoIs for different scans.

Figure 2. Results of the saliency map process
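A simplified sketch of this segmentation step is given below, assuming scikit-image's local entropy filter; the YCbCr conversion, shape map, and exact entropy probability p(th) of Eqs. (15)-(18) are collapsed into a crude entropy-weighting heuristic, so this approximates rather than reproduces the full pipeline.

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

def saliency_roi(img):
    """img: 2-D uint8 scan. Returns a boolean foreground (RoI) mask."""
    planes = np.stack([(img >> i) & 1 for i in range(8)])   # Eq. (11): bit planes
    i_avg = planes.mean(axis=0)                             # Eq. (12) analogue
    quant = (img.astype(float) * 128 /
             max(int(img.max()), 1)).astype(np.uint8)       # Eq. (14): quantization
    ent = entropy(quant, disk(5)) * (0.5 + i_avg)           # entropy map, crudely weighted
    return ent > ent.mean()                                 # p > p(th): mark as foreground

mask = saliency_roi(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
```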

These segmented images are represented as multidomain features via estimation of Frequency and iVector components. The Frequency Components are estimated using the Fourier Transform via Eq. (19):

$X(k)=\sum_{n=0}^{N-1} x(n) e^{-2 \pi i \frac{k n}{N}}$                      (19)

where $x(n)$ are the pixel levels of the saliency image, $X(k)$ are their Frequency Components, and N is the total number of pixels. Similarly, the pixels are converted into iVector components via Eq. (20):

$iVector_i=\operatorname{MAX}\left(\bigcup_{j=1}^N x_j\right)+\left[\begin{array}{ccc}\operatorname{var}(1,1) & \cdots & \operatorname{var}(1, n) \\ \vdots & \ddots & \vdots \\ \operatorname{var}(n, 1) & \cdots & \operatorname{var}(n, n)\end{array}\right] * x(i)$                  (20)

where, $\operatorname{var}(i, j)$ represents variance between different image components, which is estimated via Eq. (21):

$\operatorname{var}(x, y)=\frac{\exp \left(-\frac{x^2}{2}\right)}{2 \pi * \operatorname{var}(x) * \operatorname{var}(y)}$                   (21)

while, the variance is estimated via Eq. (22):

$\operatorname{var}(x)=\frac{1}{N-1} * \sum_{i=1}^N\left(x_i-\sum_{j=1}^N \frac{x_j}{N}\right)^2$               (22)
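The multidomain feature step can be sketched as follows; the covariance-weighted vector is a simple stand-in for the iVector term of Eq. (20), since true i-vector extraction would require a trained total-variability model.

```python
import numpy as np

def multidomain_features(roi):
    """roi: 2-D float array holding a saliency-segmented scan."""
    freq = np.abs(np.fft.fft2(roi)).ravel()    # Eq. (19): frequency magnitudes
    cov = np.cov(roi)                          # var(i, j) matrix as in Eq. (21)
    ivec = roi.max() + cov @ roi[:, 0]         # Eq. (20) applied to one column
    return np.concatenate([freq[:256], ivec])  # truncated joint feature vector

feats = multidomain_features(np.random.rand(64, 64))
```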

Figure 3. The RBFN Process to identify cancer classes with SoftMax ($\varphi$) activation function to improve classification performance levels

These features are combined and given to the Radial Basis Function Network (RBFN) process, which can be observed in Figure 3. The RBFN converts the Frequency and iVector features into convolutional components, which are classified into cancer stages using an efficient SoftMax activation layer via Eq. (23):

$C(U)=\operatorname{SoftMax}\left(\sum_{i=1}^{N F} f(i) * w(i)+b(i)\right)$                    (23)

where $f(i)$ represents the convolutional features, and $w$ and $b$ represent the weights and biases for different features. The RBFN model uses these operations to obtain the final cancer classes. The outputs from the RBFN and VGGNet 19 are fused via Eq. (24) to obtain the final cancer stage as follows:

$C(\text{Final})=C(M) * A(M)+C(U) * A(U)+C(T) * A(T)+C(O) * A(O)$                    (24)

where M, U, O, and T represent the outputs from the mammogram, ultrasound, optical, and thermal scans, while C and A represent their output classes and accuracy levels, respectively.
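A minimal RBFN consistent with Eq. (23), together with the accuracy-weighted fusion of Eq. (24), might look like the sketch below; k-means centers with a least-squares readout are a common RBFN training shortcut and are assumptions here, not the paper's stated procedure.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_rbfn(X, y, n_classes, n_centers=20, gamma=1.0):
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    H = np.exp(-gamma * cdist(X, centers) ** 2)   # radial basis activations
    T = np.eye(n_classes)[y]                      # one-hot stage targets
    W, *_ = np.linalg.lstsq(H, T, rcond=None)     # least-squares readout
    return centers, W, gamma

def rbfn_predict(model, X):
    centers, W, gamma = model
    H = np.exp(-gamma * cdist(X, centers) ** 2)
    return softmax(H @ W)                         # Eq. (23): SoftMax over stages

def fuse_modalities(probs, accs):
    # Eq. (24): accuracy-weighted sum of per-modality class scores (M, U, O, T).
    fused = sum(a * p for p, a in zip(probs, accs))
    return fused.argmax(axis=1)
```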

The output classes for individual patients are sent to the IoMT Cloud, where they are used to perform temporal analysis. The IoMT Cloud collects these results and retrains the VGGNet 19 and RBFN models to incrementally improve precision, accuracy, and other performance levels. These performance levels were measured on multiple datasets and compared with standard models in the next section of this text.
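The cloud-side feedback loop described above could be sketched as follows, assuming that clinician-confirmed stage labels accompany some cases; the pool-size trigger and the retrain placeholder are hypothetical details, not part of the published system.

```python
import numpy as np

feedback_pool = []   # (features, clinician-confirmed stage) pairs

def retrain(X, y):
    # Placeholder: in the full system this would refit the VGG ensemble
    # and the RBFNs on the accumulated pool.
    print(f"retraining on {len(y)} confirmed cases")

def on_new_case(features, confirmed_stage=None):
    """Called for each case the cloud receives; retrains once enough
    confirmed labels have accumulated (threshold is illustrative)."""
    if confirmed_stage is not None:
        feedback_pool.append((features, confirmed_stage))
    if len(feedback_pool) >= 100:
        X = np.stack([f for f, _ in feedback_pool])
        y = np.array([s for _, s in feedback_pool])
        retrain(X, y)
        feedback_pool.clear()
```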

5. Comparison and Analysis

The suggested model makes use of an enhanced integration of various deep learning techniques to accurately diagnose different cancer stages and help patients live healthy lives in real-world situations. To assess the efficacy and efficiency of this methodology, several steps were taken in an experimental context. The datasets used in the study include the Digital Database for Screening Mammography (DDSM), which contains 1000 images, the Breast Thermal Image (THERMO) dataset, which contains 300 images, and the Breast Ultrasound Image (BUSI) dataset, which contains 500 images. These datasets were carefully selected to guarantee diversity and representativeness of different types of breast cancer.

Before analysis, the images go through pre-processing procedures to standardize the data and raise its quality. Each image is resized to a fixed resolution, such as 224×224 pixels, to provide consistency throughout the collection. The training set is expanded via stochastic augmentation transformations, such as rotations, flips, and brightness changes, to increase its diversity and robustness.
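Under the stated choices, a torchvision pipeline for this preprocessing might look as follows; the specific rotation, flip, and brightness parameters are assumptions, since the paper does not fix them.

```python
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),              # fixed input resolution
    transforms.RandomRotation(degrees=15),      # stochastic rotation
    transforms.RandomHorizontalFlip(),          # stochastic flip
    transforms.ColorJitter(brightness=0.2),     # brightness change
    transforms.ToTensor(),
])
eval_tf = transforms.Compose([transforms.Resize((224, 224)),
                              transforms.ToTensor()])
```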

To extract features from the various modalities, specific techniques were applied. The architecture for mammography data samples is VGGNet 19, with the fully connected layer replaced by a group of classifiers including Naive Bayes, k-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Logistic Regression (LR). This method ensures a complete and varied learning process, enhancing the system's ability to recognize different types of breast cancer and their stages.

A sophisticated method combining Frequency and iVector Analysis is utilized to extract multidimensional data from ultrasound scans. The classification process is then carried out using Radial Basis Function Networks (RBFNs), a versatile and non-linear method known for its effectiveness in handling complex data distributions. Saliency Maps analyze optical and thermal imaging data to quickly identify and group probable cancer areas. The segmented images are then fed into RBFNs. By exploiting the sequential nature of the data and the RBFNs' capacity to generate artificial data samples, this technique raises the overall performance levels of the classification.

The experimental evaluation is conducted on the selected datasets, using 60% of the total data for training, 20% for validation, and 20% for testing. Each classifier's performance is assessed using a variety of metrics, including precision, accuracy, Area Under the Curve (AUC), sensitivity, and specificity. These metrics provide a comprehensive understanding of the system's ability to accurately identify and categorize breast cancer utilizing a number of modalities.

The experimental approaches are run on a computer with an NVIDIA GPU, such as the GeForce RTX 3080, to accelerate deep learning computations. Deep learning models are built with libraries such as TensorFlow or PyTorch, while the Python libraries SciPy and scikit-learn are used for statistical analysis.

We used a variety of performance criteria to rate the model's effectiveness. These included precision (P), accuracy (A), sensitivity or recall (Se), specificity (Sp), and Area Under the Curve (AUC) values. Precision is the proportion of correct positive predictions among all the positive predictions the model makes. It quantifies the model's ability to correctly identify positive cases (in this case, cancer stages) without falsely classifying too many scans, and is estimated via Eq. (25):

$P=\frac{T P}{T P+F P}$                         (25)

Sensitivity measures the proportion of true positive predictions out of all actual positive cases in the dataset samples. It reflects the model's ability to correctly identify genuine positive cases (true stages) without missing any, and is estimated via Eq. (26):

$S e=\frac{T P}{T P+F N}$                      (26)

Specificity measures the proportion of true negative predictions out of all actual negative cases in the dataset. It represents the model's ability to correctly identify genuine negative cases (true stage levels) without misclassifying any as ‘normal’ via Eq. (27):

$S p=\frac{T N}{T N+F P}$                        (27)

Accuracy represents the overall correctness of the model's predictions. It measures the proportion of correct predictions (both true positives and true negatives) out of the total number of samples in the dataset, and is estimated via Eq. (28):

$A=\frac{T P+T N}{T P+T N+F P+F N}$                     (28)

Similarly, the delay and AUC were measured via Eq. (29) and Eq. (30) as follows:

$D=t s($ complete $)-t s($ init $)$                   (29)

where ts(complete) and ts(init) represent the timestamps for completing and initiating the detection process.

$A U C=\sum_{i=1}^{n-1} \frac{(F P(i+1)-F P(i)) *(T P(i+1)+T P(i))}{2}$                        (30)
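The helpers below implement Eqs. (25)-(28) and a standard trapezoidal AUC in the spirit of Eq. (30); the example call is illustrative (TP = 90, FP = 10 gives a precision of 0.9).

```python
import numpy as np

def metrics(tp, fp, tn, fn):
    return {
        "precision":   tp / (tp + fp),                    # Eq. (25)
        "sensitivity": tp / (tp + fn),                    # Eq. (26)
        "specificity": tn / (tn + fp),                    # Eq. (27)
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),   # Eq. (28)
    }

def auc_trapezoid(fpr, tpr):
    # Eq. (30): trapezoidal area under the ROC curve (points sorted by FPR).
    order = np.argsort(fpr)
    return float(np.trapz(np.asarray(tpr)[order], np.asarray(fpr)[order]))

print(metrics(tp=90, fp=10, tn=85, fn=15))
```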

Based on this strategy, the performance of the proposed model was compared with CNN [18], DCNN [28], and ResNet [34], which are recently proposed models for cancer stage identification. These assessments were based on an augmented collection of database instances and are described as follows:

5.1 Dataset for breast ultrasound images (BUSI)

The BUSI dataset contains 500 ultrasound images of breast tissues. Based on different breast cancer kinds, this dataset divides the images into a number of classes, offering a sizable and clearly labeled collection for research. It is a frequently used resource in the field of breast cancer imaging research and can be found at https://www2.cs.uic.edu/xli/busi.html. It contains ultrasound images taken from many patients, showing various breast cancer cases and variances in tissue properties. The dataset includes labels and annotations for precise categorization, making it easier to assess how well the suggested multimodal correlation engine can handle ultrasound data.

5.2 Dataset of breast thermal images

The THERMO collection includes 300 thermal images of breast tissues. The method can detect temperature fluctuations linked to abnormal tissues via thermal images meticulously classified into multiple groups based on the presence of various breast cancer kinds. It is accessible at https://www.med.uio.no/imb/english/research/projects/thermo/. It offers a distinctive collection of thermal images created by infrared thermography, which depict the patterns of heat distribution in breast tissues. These thermal images contribute to a thorough assessment of the proposed IoMT imaging system's capability to employ thermal imaging for preventive breast cancer analysis by providing useful information about potentially malignant locations.

5.3 Dataset from the digital database for screening mammography (DDSM)

A total of 1000 digital mammography images of breast tissues are included in the DDSM collection. These mammography images depict both malignant and benign breast cancer instances and are identified and arranged according to various breast cancer categories. It is available at http://marathon.csee.usf.edu/Mammography/Database.html and is a standard tool for mammography research. It includes a sizable database of digital mammograms gathered from routine breast cancer screenings. It supports the assessment of the proposed multimodal correlation engine's efficiency in utilizing mammography data for early breast cancer diagnosis and provides a thorough depiction of various breast cancer cases.

Figure 4. Average precision obtained for cancer stage identification scenarios

The efficiency of the proposed model was evaluated on the individual datasets, and precision was estimated with respect to the Total Number of Test Samples (NTS) in Figure 4.

Similarly, the accuracy obtained during identification of cancer stages can be observed from Figure 5.

Examining the readings makes clear that the proposed model consistently surpasses the other deep learning models, namely CNN, DCNN, and ResNet, throughout the majority of the cancer stage identification scenarios. For instance, the proposed model achieves an average precision of 94.58% at NTS = 780k, while the best-performing rival model, DCNN, only manages an average precision of 85.43%, a significant 9.15% margin in favor of the proposed model. The proposed model also outperforms ResNet by 8.57% (ResNet obtains 83.32% average precision) at NTS = 2.2m, achieving an outstanding average precision of 91.90%. These results show that the proposed model, even with different dataset sizes, consistently produces more accurate cancer stage identification results and demonstrates superior learning capabilities.

Figure 5. Average accuracy obtained for cancer stage identification scenarios

Figure 6. Average recall obtained for cancer stage identification scenarios

The trend in the measurements shows that the average precision generally improves for all models as the number of test samples (NTS) grows. For instance, the proposed model's average precision is 90.29% at NTS = 66k and increases to 94.89% at NTS = 2.6m, signifying a consistent improvement in performance. The other models show a comparable tendency. Nevertheless, the proposed model consistently holds the lead in average precision across all NTS points. Its robustness and capacity to benefit considerably from larger datasets is seen in the fact that at NTS = 1.3m the proposed model achieves 95.82% average precision, while the nearest competitor, CNN, earns 80.30%.

In contrast to certain models whose performance fluctuates as NTS rises, the average precision of the proposed model steadily increases or remains stable across the range of NTS. For instance, the proposed model records an average precision of 91.15% at NTS = 128k and remains stable at 92.50% at NTS = 1.9m. This consistent improvement and stability indicate that the proposed model is less likely to overfit and maintains its capacity to generalize effectively to unobserved data even as the number of samples rises. Similarly, the recall obtained during identification of cancer stages can be observed from Figure 6.

The recall readings show that, for all models, the recall scores generally fluctuate as the number of test samples (NTS) rises. For example, the proposed model's recall is 84.3555% at NTS = 66k and improves to 90.189% at NTS = 2.6m, showing some changes in performance across dataset sizes. The other models exhibit comparable patterns, with their recall scores fluctuating as NTS rises.

Despite these changes, the proposed model, in contrast to the other models, consistently maintains a considerably higher recall across the several NTS points. For instance, the proposed model achieves a recall of 87.424% at NTS = 1.3m, compared to a recall of 78.4765% for the nearest rival, DCNN. This demonstrates the proposed model's robustness and effectiveness in identifying the cancer stage despite different dataset sizes.

Furthermore, it is interesting to note that while certain models' recall scores fluctuate as NTS rises, the recall of the proposed model is either fairly stable or shows minor gains. For instance, the recall of the proposed model is 84.9675% at NTS = 194k and climbs to 90.189% at NTS = 2.6m. This shows that even with more test samples, the proposed model maintains its capacity to generalize effectively to new data and is less prone to overfitting. Similarly, the delay obtained during identification of cancer stages can be observed from Figure 7.

Examining the delay readings makes clear that the proposed model consistently displays shorter processing times than CNN, DCNN, and ResNet throughout the majority of the cancer stage identification scenarios. For instance, the proposed model processes in 105.523 milliseconds at NTS = 66k while the slowest rival model, DCNN, processes in 141.4105 milliseconds, a significant difference of 35.8875 milliseconds in favor of the proposed model. The proposed model also achieves a processing time of 107.416 milliseconds at NTS = 1.9m, beating ResNet, the nearest rival, by 34.1415 milliseconds (ResNet requires 141.5575 milliseconds). These results demonstrate the effectiveness of the proposed strategy, resulting in faster cancer stage diagnosis even with different dataset sizes.

Figure 7. Average delay needed for cancer stage identification scenarios

Figure 8. Average AUC obtained for cancer stage identification scenarios

Methods for Deep Learning and Multimodal Analysis: The suggested model's use of multimodal analysis and deep learning techniques is largely responsible for the faster processing times seen in the delay levels measurements. This results in a thorough and more precise cancer stage detection. The proposed approach uses deep learning techniques including CNN, DCNN, and ResNet to automatically extract complex characteristics from multimodal data, enabling the model to predict cancer stage with accuracy and knowledge.

The suggested model's efficiency is further increased by the RBFN architecture. The sequential nature of the model improves its capacity to handle dynamic cancer progression patterns, helping it efficiently process and learn from time-series data such as sequential medical images. Additionally, the RBFNs' generative characteristics make it easier to create synthetic data, enhancing the training set and raising the performance levels of the model. Similarly, the AUC obtained during identification of cancer stages can be observed from Figure 8.

Examining the AUC readings makes clear that the proposed model consistently achieves superior AUC scores compared to the other deep learning models, CNN, DCNN, and ResNet, throughout the majority of the cancer stage identification scenarios. For instance, the nearest rival model, ResNet, has an AUC score of 79.899% while the proposed model achieves an AUC value of 89.7555% at NTS = 66k, a significant difference of 9.8565 percentage points in favor of the proposed model. Similarly, the proposed model outperforms the closest rival, DCNN, by 9.055 percentage points (DCNN gets 84.534% AUC), achieving a remarkable AUC score of 93.589% at NTS = 2.6m. These results underline the proposed model's superiority in correctly differentiating between positive and negative cancer cases, leading to more accurate cancer stage predictions even with a range of dataset sizes.

Methods for Deep Learning and Multimodal Analysis: The suggested model's improved AUC values can be ascribed to the use of deep learning techniques and multimodal analysis. The suggested model takes advantage of the capabilities of several imaging modalities by combining information from them, producing a more complete and accurate picture of the underlying cancer traits. The proposed approach uses deep learning techniques like CNN, DCNN, and ResNet to automatically extract complex characteristics from multimodal data, enabling a more accurate and discriminative cancer stage identification.

The usage of cutting-edge neural network architectures, such as RBFN, also contributes to the increased AUC levels. The model can successfully capture temporal relationships in sequential medical images thanks to the sequential and memory-retentive properties of the model, which is very important for cancer stage progression. Additionally, the model may add synthetic data to the dataset using the generative capabilities of RBFN, which improves generalization and raises AUC scores. Similarly, the specificity obtained during identification of cancer stages can be observed from Figure 9 as follows:

Figure 9. Average specificity obtained for cancer stage identification scenarios

It is clear from examining the specificity readings that, in the majority of the cancer stage detection scenarios, the proposed model regularly generates higher specificity scores than CNN, DCNN, and ResNet. For instance, the nearest rival model, ResNet, achieves a specificity of 83.049% at NTS = 66k, compared to the proposed model's 87.9055%, a difference of 4.8565 percentage points in favor of the proposed model. The proposed model also outperforms the closest rival, DCNN, by 7.905 percentage points (DCNN obtains 83.684% specificity) at NTS = 2.6m, achieving an outstanding specificity of 91.589%. These results highlight the proposed model's superiority in properly classifying non-cancerous instances, enabling more precise and reliable cancer stage predictions even with a range of dataset sizes.

Methods for Deep Learning and Multimodal Analysis: The use of multimodal analysis and deep learning techniques can be credited with the increased specificity levels seen in the suggested model. The suggested model takes advantage of the complimentary data from many sources and integrates data from numerous imaging modalities, enabling it to more accurately identify between malignant and non-cancerous instances. The deep learning techniques used in the suggested model, such as CNN, DCNN, and ResNet, allow for the automatic extraction of complex features from the multimodal data, improving the model's discriminatory skills and producing increased specificity.

The addition of RBFNs further contributes to the increased specificity levels. The model can capture temporal patterns and dependencies in sequential medical images thanks to the sequential and memory-retentive capabilities of RBFNs, which is crucial for precisely detecting non-cancerous cases. Additionally, the model may generate synthetic data thanks to the generative capabilities of the RBFN, expanding the dataset and improving the model's capacity to generalize well to unknown data samples.

The performance of the proposed multimodal correlation engine for proactive breast cancer analysis was evaluated using various metrics and compared with three existing methods: CNN [18], DCNN [28], and ResNet [34]. An error assessment of the system was also conducted to provide a comprehensive study of the acquired results.

(1) Comparison of precision rates

Table 1 presents the precision rates (%) of the proposed model and the three existing methods across different datasets.

Table 1. Precision rates (%) of the proposed model and existing models

| Dataset | Proposed Model | CNN [18] | DCNN [28] | ResNet [34] |
|---------|----------------|----------|-----------|-------------|
| BUSI    | 89.5           | 87.2     | 84.6      | 86.3        |
| THERMO  | 91.3           | 88.7     | 86.4      | 89.1        |
| DDSM    | 88.9           | 86.5     | 82.3      | 85.7        |

Table 1 illustrates the precision rates achieved by the proposed model and the three existing methods on three different datasets: BUSI, THERMO, and DDSM. The proposed model consistently outperforms the existing methods, demonstrating higher precision rates across all datasets.

(2) Comparison of accuracy rates

Table 2 compares the accuracy rates (%) of the proposed model with CNN, DCNN, and ResNet on the evaluated datasets.

Table 2. Accuracy (%) of the proposed model and existing models

| Dataset | Proposed Model | CNN [18] | DCNN [28] | ResNet [34] |
|---------|----------------|----------|-----------|-------------|
| BUSI    | 92.1           | 89.6     | 87.3      | 88.9        |
| THERMO  | 93.7           | 91.2     | 88.9      | 90.5        |
| DDSM    | 91.5           | 88.9     | 85.6      | 87.3        |

Table 2 presents the accuracy rates achieved by the proposed model and the three existing methods on the BUSI, THERMO, and DDSM datasets. Once again, the proposed model demonstrates superior performance in terms of accuracy across all datasets.

(3) Comparison of sensitivity rates

Table 3 displays the sensitivity rates (%) of the proposed model and the existing methods CNN, DCNN, and ResNet on the evaluated datasets.

Table 3. Sensitivity (%) of the proposed model and existing models

| Dataset | Proposed Model | CNN [18] | DCNN [28] | ResNet [34] |
|---------|----------------|----------|-----------|-------------|
| BUSI    | 87.6           | 85.2     | 82.8      | 84.5        |
| THERMO  | 89.4           | 86.9     | 84.5      | 87.1        |
| DDSM    | 86.8           | 84.3     | 80.9      | 83.5        |

Table 3 exhibits the sensitivity rates achieved by the proposed model and the existing methods on the BUSI, THERMO, and DDSM datasets. Once again, the proposed model outperforms the existing methods in terms of sensitivity across all datasets.

(4) Comparison of specificity rates

Table 4 compares the specificity rates (%) of the proposed model with CNN, DCNN, and ResNet on the evaluated datasets.

Table 4. Specificity of the proposed model with existing models

Dataset     Proposed Model     CNN [18]     DCNN [28]     ResNet [34]
BUSI        94.2               92.7         90.5          91.8
THERMO      95.1               93.5         91.8          93.2
DDSM        93.2               91.7         89.3          90.8

Table 4 presents the specificity rates achieved by the proposed model and the three existing methods on the BUSI, THERMO, and DDSM datasets. As with the other metrics, the proposed model shows superior specificity across all datasets.
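For reference, the four metrics reported in Tables 1-4 all derive from the binary confusion matrix; the sketch below uses toy labels purely to illustrate the formulas.

    # How the four reported metrics relate to confusion-matrix counts.
    from sklearn.metrics import confusion_matrix

    y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # toy ground-truth labels
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # toy model predictions

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    precision   = tp / (tp + fp)                    # Table 1
    accuracy    = (tp + tn) / (tp + tn + fp + fn)   # Table 2
    sensitivity = tp / (tp + fn)                    # Table 3 (recall / TPR)
    specificity = tn / (tn + fp)                    # Table 4 (TNR)
    print(precision, accuracy, sensitivity, specificity)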

6. Real-World Applications and Impact on Clinical Diagnosis

The real-world applications of the proposed multimodal correlation engine for proactive breast cancer analysis, and its impact on clinical diagnosis, are profound and far-reaching. This innovative approach has the potential to revolutionize breast cancer diagnosis and treatment in clinical settings across the globe, as the following points explore:

(1) Improved Diagnostic Accuracy: By integrating data from multiple imaging modalities such as mammography, ultrasound, optical imaging, and thermal imaging, the proposed correlation engine enhances diagnostic accuracy. This means clinicians can make more informed decisions based on comprehensive and complementary information gathered from different sources.

(2) Enhanced Early Detection: Early detection is key to improving breast cancer prognosis and patient outcomes. The multimodal correlation engine facilitates early detection by identifying subtle abnormalities that may be missed by individual imaging modalities alone. This early detection capability enables clinicians to intervene at earlier stages of the disease, when treatment is most effective, potentially saving lives and reducing the need for aggressive treatments.

(3) Personalized Treatment Planning: By providing detailed information about the size, location, and characteristics of breast cancer lesions, the correlation engine enables personalized treatment planning. This personalized approach can minimize the risk of unnecessary treatments and side effects while maximizing treatment efficacy.

(4) Resource Optimization: In resource-constrained clinical settings, where access to advanced imaging technologies may be limited, the multimodal correlation engine offers a cost-effective solution. This can improve healthcare delivery in underserved areas and reduce the burden on healthcare systems by optimizing resource allocation.

(5) Streamlined Workflow: The integration of multiple imaging modalities into a single correlation engine streamlines the diagnostic workflow for clinicians. This saves time and reduces the potential for human error, allowing clinicians to focus their expertise on patient care.

(6) Research and Development Opportunities: The proposed correlation engine opens up new avenues for research and development in the field of medical imaging and breast cancer diagnosis. This continuous innovation drives advancements in cancer diagnostics and contributes to the development of next-generation healthcare technologies.

7. Discussion

7.1 General and clinical validation

Validating the proposed multimodal correlation engine for proactive breast cancer analysis both clinically and generally is crucial to ensure its efficacy, reliability, and suitability for widespread adoption in clinical practice. The following outlines how this validation can proceed:

(1) Clinical validation

a. Clinical Trials: Conducting large-scale clinical trials involving diverse patient populations is essential to validate the performance of the correlation engine in real-world clinical settings. These trials should assess the engine's accuracy, sensitivity, specificity, and overall diagnostic performance compared to standard-of-care approaches.

b. Validation Studies: Performing validation studies using retrospective and prospective datasets is another important step in clinical validation. These studies should cover a range of breast cancer types, stages, and patient demographics to ensure the engine's robustness across diverse clinical scenarios.

c. Peer Review and Publication: Submitting research findings to peer-reviewed journals subjects them to rigorous review, which helps ensure the quality and validity of the research.

(2) General validation

a. Cross-Dataset Validation: Evaluating the correlation engine's performance on external datasets beyond those used for model training and development is critical for general validation (a minimal evaluation loop is sketched after this list). This helps assess the engine's generalizability and robustness across diverse data distributions and imaging protocols.

b. Benchmarking Against Existing Methods: Benchmarking the correlation engine against existing and state-of-the-art methods provides a common basis for comparison. Such studies can demonstrate the efficacy of the correlation engine relative to existing methods, reinforcing its value and impact.

c. External Validation by Independent Researchers: Encouraging independent researchers and research groups to validate the correlation engine using their own datasets and methodologies adds further credibility. Collaborating with independent researchers facilitates knowledge exchange and independent confirmation of the correlation engine's performance.
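As an illustration of the cross-dataset validation described in point (a) above, a minimal evaluation loop might look as follows; the feature arrays and the logistic-regression stand-in classifier are placeholders for the actual engine.

    # Hypothetical cross-dataset loop: fit on one corpus, score on the others.
    # `datasets` maps names to (features, labels); the arrays are placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    datasets = {
        name: (rng.normal(size=(200, 32)), rng.integers(0, 2, size=200))
        for name in ["BUSI", "THERMO", "DDSM"]
    }

    for train_name, (X_tr, y_tr) in datasets.items():
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        for test_name, (X_te, y_te) in datasets.items():
            if test_name != train_name:
                print(f"train={train_name} test={test_name} "
                      f"acc={model.score(X_te, y_te):.3f}")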

7.2 Ethical implication

The development and implementation of advanced healthcare technologies, such as the proposed multimodal correlation engine for proactive breast cancer analysis, raise important ethical considerations that must be carefully addressed. This section discusses the ethical implications of the proposed model, highlighting key concerns and considerations in its design, deployment, and impact on patient care.

(1) Patient privacy and data security

One of the foremost ethical concerns associated with the proposed model is the protection of patient privacy and data security. Ensuring compliance with relevant privacy regulations, such as HIPAA in the United States, is essential to maintain patient trust and confidentiality.

(2) Algorithmic bias and fairness

The use of machine learning algorithms in healthcare introduces the risk of algorithmic bias, which may lead to unequal treatment or disparities in patient outcomes. To mitigate bias and promote fairness, it is imperative to conduct thorough validation and testing of the model across diverse patient populations, monitor for algorithmic biases, and implement mechanisms for transparency and accountability in algorithmic decision-making processes.
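One simple form of such bias monitoring is a per-subgroup sensitivity audit, sketched below with synthetic labels and a hypothetical site-level grouping variable.

    # Hypothetical fairness audit: per-subgroup sensitivity, to surface
    # performance gaps across patient groups or acquisition sites.
    import numpy as np

    rng = np.random.default_rng(3)
    y_true = rng.integers(0, 2, size=300)   # placeholder ground truth
    y_pred = rng.integers(0, 2, size=300)   # placeholder model predictions
    group = rng.choice(["site_A", "site_B", "site_C"], size=300)

    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)     # positives in this subgroup
        if mask.sum():
            sens = (y_pred[mask] == 1).mean()   # subgroup sensitivity (TPR)
            print(f"{g}: sensitivity={sens:.3f} (n={mask.sum()})")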

(3) Informed consent and autonomy

Respecting patient autonomy and ensuring informed consent are fundamental ethical principles in healthcare. Healthcare providers must obtain informed consent from patients before utilizing the model to make clinical decisions, ensuring that patients are empowered to make autonomous choices about their care and treatment options.

(4) Transparency and explainability

Transparency and explainability are essential for fostering trust and accountability in AI-driven healthcare systems. Providing transparent documentation, interpretability tools, and educational resources can help promote understanding and confidence in the model's decision-making processes.

8. Conclusions and Future Scope

8.1 Conclusion

For a smart Internet of Medical Things (IoMT) imaging system intended for proactive analysis of breast cancer, this research study proposes an innovative and effective multimodal correlation engine. The paper discusses the difficulties of early breast cancer diagnosis and detection, particularly in resource-limited situations where conventional imaging techniques have clear limitations. By merging data from mammography, ultrasound, optical imaging, and thermal imaging scans, the suggested model provides a cost-effective, comprehensive, and accurate approach to the early identification of breast cancer types.

This study's main contribution is the combination of several deep learning techniques with multimodal analysis, which enables the model to take full advantage of the complementary characteristics of diverse imaging modalities. A robust and varied learning strategy for mammography data is ensured by the use of the VGGNet 19 architecture along with a suite of classifiers that includes Naive Bayes, k-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Logistic Regression (LR). Additionally, Saliency Maps simplify cancer segmentation in optical and thermal images, while Radial Basis Function Networks (RBFNs) provide a flexible and non-linear classification method for ultrasound scans. The incorporation of RBFNs also supports staged cancer classification and synthetic data generation for improved performance.

The suggested model outperforms existing approaches, as shown by the empirical analysis conducted on the Breast Ultrasound Image (BUSI), Breast Thermal Image (THERMO), and Digital Database for Screening Mammography (DDSM) dataset samples. The model consistently outperforms CNN, DCNN, and ResNet across performance criteria including precision, accuracy, Area Under the Curve (AUC), sensitivity, and specificity. The gains in these indicators, ranging from 8.5% to 14.9%, highlight the usefulness and dependability of the suggested model for cancer stage identification tasks.

The study's findings highlight the proposed model's potential for strengthening pre-emptive breast cancer analysis, particularly in environments with constrained resources and access to care. Combining deep learning techniques with multimodal analysis makes better use of the imaging data already available, resulting in earlier and more precise cancer diagnosis. Through early identification and individualized treatment plans, this advance has the potential to transform cancer diagnostics in resource-limited areas, leading to better patient outcomes and lower healthcare costs.

The findings of the study pave the way for further investigation into multimodal analysis and deep learning applications in medical imaging, opening up new possibilities in the early diagnosis of breast cancer types. The adaptability and effectiveness of the suggested model make it a good candidate for practical use, providing significant assistance to radiologists and healthcare professionals in the fight against breast cancer. Possible future research directions include applying this strategy to other medical imaging fields and incorporating additional imaging modalities for comprehensive disease identification and diagnosis.

In summary, this study provides a ground-breaking multimodal correlation engine for intelligent IoMT imaging systems that successfully combines deep learning techniques and multimodal analysis for early detection of breast cancer. The suggested model's exceptional performance, cost-effectiveness, and adaptability open up new paths for precision medicine and promise to enhance the detection of breast cancer and treatment results around the globe for various scenarios.

8.2 Limitation of existing system

While the proposed multimodal correlation engine represents a significant advancement in proactive breast cancer analysis, it is important to acknowledge its limitations to provide a comprehensive understanding of its capabilities and potential areas for improvement. This section discusses potential limitations, including the potential for bias or error in the analysis and patient history analysis modules.

(1) Data bias and variability

One potential limitation of the proposed system is the presence of data bias and variability across different datasets. Training data drawn from particular institutions, imaging devices, or patient demographics may not represent the broader population; as a result, the system may exhibit biases towards certain subgroups or fail to generalize well to diverse populations, leading to suboptimal performance in real-world clinical settings.

(2) Inherent limitations of imaging modalities

Another limitation arises from the inherent constraints of the individual imaging modalities used in the multimodal correlation engine, such as mammography's reduced sensitivity in dense breast tissue, the operator dependence of ultrasound, and the susceptibility of thermal imaging to ambient conditions. These limitations may introduce uncertainty and variability into the analysis results, impacting the overall reliability and robustness of the system.

(3) Complexity of multimodal data fusion

The process of fusing multimodal data poses additional challenges due to the complexity and heterogeneity of the data sources. In particular, mismatches in data alignment, feature extraction, or fusion methodologies may introduce errors or artifacts into the analysis results, affecting the accuracy and interpretability of the findings.

8.3 Future scope

The research in this paper lays the groundwork for a dynamic and exciting future in pre-emptive breast cancer analysis with intelligent IoMT imaging systems. To build on these results and increase their impact, several areas can be explored and improved. The paper's prospective scope covers the following:

(1) Incorporating New Imaging Modalities: As new imaging technologies such as photoacoustic, microwave, and terahertz imaging continue to mature, integrating them into the suggested multimodal correlation engine could yield even more thorough and precise breast cancer diagnosis. Studying the interactions between these novel modalities and the existing ones may further enhance the pre-emptive analysis system's overall performance.

(2) Real-World Validation and Clinical Trials: Extensive validation through sizable clinical trials and partnerships with healthcare institutions is crucial to proving the suggested model's efficacy and generalizability in real-world clinical settings. Prospective studies with varied patient populations and datasets will help refine the model's diagnostic abilities and offer insight into how it performs across different demographics.

(3) Transfer Learning for Multimodal Analysis: By utilizing pre-trained models on large-scale datasets, transfer learning approaches could hasten the training of the model and enhance its performance on comparatively smaller datasets. These pre-trained models may benefit from being fine-tuned for certain multimodal analysis tasks in order to increase their effectiveness and generalizability across various imaging datasets.
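A minimal fine-tuning sketch of this idea is given below, using a torchvision VGG19 backbone with frozen convolutional layers and a new two-class head; the image batch is a placeholder, and the exact fine-tuning recipe would need to be established experimentally.

    # Hypothetical fine-tuning sketch: reuse ImageNet VGG19 weights and
    # retrain the classifier head on a small medical-imaging dataset.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():
        p.requires_grad = False                  # freeze convolutional backbone

    model.classifier[6] = nn.Linear(4096, 2)     # new two-class output head

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    x = torch.randn(4, 3, 224, 224)              # placeholder image batch
    y = torch.randint(0, 2, (4,))                # placeholder labels
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

Freezing the convolutional backbone keeps the number of trainable parameters small, which is precisely what makes training feasible on comparatively small medical datasets.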

(4) Explainability and Interpretability: Incorporating explainable AI techniques will improve the model's interpretability, giving medical professionals insight into its decision-making process, which is essential for fostering confidence and acceptance. Saliency maps and attention mechanisms that highlight key areas in medical images could help radiologists grasp the model's predictions and ease collaboration between AI and human experts.
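A minimal gradient-based saliency sketch is shown below; the tiny stand-in classifier is purely illustrative, and any differentiable model can be substituted.

    # Gradient-based saliency: the magnitude of the class score's gradient
    # with respect to the input marks the pixels that most influence it.
    import torch
    import torch.nn as nn

    def saliency(model, image, target_class):
        image = image.clone().requires_grad_(True)
        model(image.unsqueeze(0))[0, target_class].backward()
        return image.grad.abs().max(dim=0).values  # per-pixel saliency map

    # Tiny stand-in classifier; any differentiable model works the same way.
    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(),
                          nn.LazyLinear(2))
    model.eval()
    img = torch.rand(3, 64, 64)                    # placeholder image
    smap = saliency(model, img, target_class=1)
    print(smap.shape)                              # (64, 64): one value per pixel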

(5) Multimodal Cancer Monitoring and Progression Tracking: Extending the capabilities of the suggested model to provide longitudinal monitoring and tracking of cancer progression may be helpful in precision medicine. The model may offer important insights into the development of breast cancer stages over time by combining longitudinal data from sequential imaging scans, allowing for customized treatment regimens and improved disease management.

In future work, automated approaches to tuning the model's hyperparameters could be investigated to further enhance performance (a minimal example is sketched at the end of this section). In conclusion, this paper has a broad and promising future scope. The suggested multimodal correlation engine for pre-emptive breast cancer analysis lays the foundation for substantial developments in medical imaging, precision medicine, and cancer diagnostics. Further research and development in the outlined areas will not only enhance the capabilities of the suggested model but also open up new opportunities for AI-powered medical imaging systems, ultimately benefiting patients, healthcare professionals, and society at large in a variety of scenarios.
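A minimal sketch of such automated tuning, applied here to the SVM member of the classifier ensemble via cross-validated grid search over placeholder features, could look as follows.

    # Hypothetical automated tuning: grid search over the SVM member of the
    # ensemble, cross-validated on placeholder features and labels.
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 16))
    y = rng.integers(0, 2, size=200)

    grid = GridSearchCV(
        SVC(),
        param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]},
        cv=5, scoring="accuracy",
    )
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)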

References

[1] Kaur, G., Gupta, R., Hooda, N., Gupta, N.R. (2022). Machine learning techniques and breast cancer prediction: A review. Wireless Personal Communications, 125(3): 2537-2564. https://doi.org/10.1007/s11277-022-09673-3

[2] Lai, H.W., Chen, S.T., Liao, C.Y., Mok, C.W., Lin, Y.J., Chen, D.R., Kuo, S.J. (2021). ASO visual abstract: Oncologic outcome of endoscopic-assisted breast surgery compared with conventional approach in breast cancer: An analysis of 3426 primary operable breast cancer patients from a single institute with and without propensity score matching. Ann Surg Oncol, 28(Suppl 3): 420-422. https://doi.org/10.1245/s10434-021-09992-y

[3] Bredbeck, B.C., Mott, N.M., Wang, T., Sinco, B.R., Hughes, T.M., Nathan, H., Dossett, L.A. (2022). Facility-level variation of low-value breast cancer treatments in older women with early-stage breast cancer: Analysis of a statewide claims registry. Annals of Surgical Oncology, 29(7): 4155-4164. https://doi.org/10.1245/s10434-022-11743-6

[4] Tokuda, Y., Yanagawa, M., Fujita, Y., Honma, K., Tanei, T., Shimoda, M., Miyake, T., Naoi, Y., Kim, S.J., Shimazu, K., Hamada, S., Tomiyama, N. (2021). Prediction of pathological complete response after neoadjuvant chemotherapy in breast cancer: Comparison of diagnostic performances of dedicated breast PET, whole-body PET, and dynamic contrast-enhanced MRI. Breast Cancer Research and Treatment, 188: 107-115. https://doi.org/10.1007/s10549-021-06179-7

[5] Ugurlu, M.U., Bugdayci, O., Akmercan, A., Kaya, H., Akin Telli, T., Akoglu, H., Gulluoglu, B.M. (2023). Prediction of nipple involvement in breast cancer after neoadjuvant chemotherapy: Should we rely on breast MRI to preserve the nipple? Breast Cancer Research and Treatment, 201(3): 417-424. https://doi.org/10.1007/s10549-023-07041-8

[6] Nara, M., Fujioka, T., Mori, M., Aruga, T., Tateishi, U. (2023). Prediction of breast cancer risk by automated volumetric breast density measurement. Japanese Journal of Radiology, 41(1): 54-62. https://doi.org/10.1007/s11604-022-01320-y

[7] Guo, L., Xie, Y., He, J., Li, X., Zhou, W., Chen, Q. (2023). Breast cancer prediction model based on clinical and biochemical characteristics: Clinical data from patients with benign and malignant breast tumors from a single center in South China. Journal of Cancer Research and Clinical Oncology, 149(14): 13257-13269. https://doi.org/10.1007/s00432-023-05181-4

[8] Radosa, J.C., Solomayer, E.F., Deeken, M., Minko, P., Zimmermann, J.S.M., Kaya, A.C., Radosa, M.P., Stotz, L., Huwer, S., Müller, C., Karsten, M.M., Wagenpfeil, G., Radosa, C.G. (2022). ASO author reflections: An alternative to sentinel-node biopsy? Preoperative sonographic prediction of limited axillary disease in breast cancer patients meeting the z0011 criteria. Annals of Surgical Oncology, 29(8): 4773-4774. https://doi.org/10.1245/s10434-022-11845-1

[9] Song, S.E., Cho, K.R., Cho, Y., Kim, K., Jung, S.P., Seo, B.K., Woo, O.H. (2022). Machine learning with multiparametric breast MRI for prediction of Ki-67 and histologic grade in early-stage luminal breast cancer. European Radiology, 32(2): 853-863. https://doi.org/10.1007/s00330-021-08127-x

[10] Zhao, F., Hao, Z., Zhong, Y., Xu, Y., Guo, M., Zhang, B., Yin, X.X., Li, Y., Zhou, X. (2021). Discovery of breast cancer risk genes and establishment of a prediction model based on estrogen metabolism regulation. BMC Cancer, 21: 1-11. https://doi.org/10.1186/s12885-021-07896-4

[11] Li, Q., Yang, H., Wang, P., Liu, X., Lv, K., Ye, M. (2022). XGBoost-based and tumor-immune characterized gene signature for the prediction of metastatic status in breast cancer. Journal of Translational Medicine, 20(1): 177. https://doi.org/10.1186/s12967-022-03369-9

[12] Kure, S., Satoi, S., Kitayama, T., Nagase, Y., Nakano, N., Yamada, M., Uchiyama, N., Miyashita, S., Iida, S., Takei, H., Miyashita, M. (2021). A prediction model using 2-propanol and 2-butanone in urine distinguishes breast cancer. Scientific Reports, 11(1): 19801. https://doi.org/10.1038/s41598-021-99396-5

[13] Cui, H., Sun, Y., Zhao, D., Zhang, X., Kong, H., Hu, N., Wang, P., Zuo, X., Fan, W., Yao, Y., Fu, B., Tian, J., Wu, M., Gao, Y., Ning, S., Zhang, L. (2023). Radiogenomic analysis of prediction HER2 status in breast cancer by linking ultrasound radiomic feature module with biological functions. Journal of Translational Medicine, 21(1): 44. https://doi.org/10.1186/s12967-022-03840-7

[14] Karmakar, R., Chatterjee, S., Das, A.K., Mandal, A. (2023). BCPUML: Breast cancer prediction using machine learning approach—A performance analysis. SN Computer Science, 4(4): 377. https://doi.org/10.1007/s42979-023-01825-x

[15] Gong, C., Cheng, Z., Yang, Y., et al. (2022). A 10-miRNA risk score-based prediction model for pathological complete response to neoadjuvant chemotherapy in hormone receptor-positive breast cancer. Science China Life Sciences, 65(11): 2205-2217. https://doi.org/10.1007/s11427-022-2104-3

[16] Sharma, A., Hooda, N., Gupta, N.R., Sharma, R. (2023). Efficient RIEV: A novel framework for the prediction of breast cancer cases using ensemble machine learning. Network Modeling Analysis in Health Informatics and Bioinformatics, 12(1): 29. https://doi.org/10.1007/s13721-023-00424-3

[17] Rani, S., Kaur, M., Kumar, M. (2022). Recommender system: Prediction/diagnosis of breast cancer using hybrid machine learning algorithm. Multimedia Tools and Applications, 81(7): 9939-9948. https://doi.org/10.1007/s11042-022-12144-3

[18] Michel, A., Ro, V., McGuinness, J.E., Mutasa, S., Terry, M.B., Tehranifar, P., Mar, B., Ha, R., Crew, K.D. (2023). Breast cancer risk prediction combining a convolutional neural network-based mammographic evaluation with clinical factors. Breast Cancer Research and Treatment, 200(2): 237-245. https://doi.org/10.1007/s10549-023-06966-4

[19] Stastna, N., Brat, K., Homola, L., Os, A., Brancikova, D. (2023). Increasing incidence rate of breast cancer in cystic fibrosis-relationship between pathogenesis, oncogenesis and prediction of the treatment effect in the context of worse clinical outcome and prognosis of cystic fibrosis due to estrogens. Orphanet Journal of Rare Diseases, 18(1): 62. https://doi.org/10.1186/s13023-023-02671-z

[20] Civil, Y.A., Oei, A.L., Duvivier, K.M., et al. (2023). Prediction of pathologic complete response after single-dose MR-guided partial breast irradiation in low-risk breast cancer patients: The ABLATIVE-2 trial—A study protocol. BMC Cancer, 23(1): 419. https://doi.org/10.1186/s12885-023-10910-6

[21] Zhou, C., Xie, H., Zhu, F., Yan, W., Yu, R., Wang, Y. (2023). Improving the malignancy prediction of breast cancer based on the integration of radiomics features from dual-view mammography and clinical parameters. Clinical and Experimental Medicine, 23(6): 2357-2368. https://doi.org/10.1007/s10238-022-00944-8

[22] Song, B.I. (2021). A machine learning-based radiomics model for the prediction of axillary lymph-node metastasis in breast cancer. Breast Cancer, 28: 664-671. https://doi.org/10.1007/s12282-020-01202-z

[23] Wang, L., Wu, J., Yuan, J., Zhu, X., Wu, H., Li, M. (2021). Midline2 is overexpressed and a prognostic indicator in human breast cancer and promotes breast cancer cell proliferation in vitro and in vivo. Frontiers of Medicine, 15(6): 942-942. https://doi.org/10.1007/s11684-021-0876-z

[24] Altundag, K. (2023). Metastatic pure invasive lobular breast cancer or metastatic mixed invasive ductal and lobular breast cancer: Are they different entities? Breast Cancer Research and Treatment, 198(3): 623-623. https://doi.org/10.1007/s10549-023-06906-2

[25] Hieken, T.J., Boughey, J.C., Degnim, A.C., Glazebrook, K.N., Hoskin, T.L. (2022). Inflammatory breast cancer: Durable breast cancer-specific survival for HER2-positive patients with a pathologic complete response to neoadjuvant therapy. Annals of Surgical Oncology, 29(9): 5383-5386. https://doi.org/10.1245/s10434-022-12181-0

[26] Sehgal, H.D., Pratap, Y., Kabra, S. (2022). Detection of breast cancer Cell-MDA-MB-231 by measuring conductivity of Schottky source/drain GaN FinFET. IEEE Sensors Journal, 22(6): 6108-6115. https://doi.org/10.1109/JSEN.2022.3148117

[27] Naseem, U., Rashid, J., Ali, L., Kim, J., Haq, Q.E.U., Awan, M.J., Imran, M. (2022). An automatic detection of breast cancer diagnosis and prognosis based on machine learning using ensemble of classifiers. IEEE Access, 10: 78242-78252. https://doi.org/10.1109/ACCESS.2022.3174599

[28] Xing, J., Chen, C., Lu, Q., Cai, X., Yu, A., Xu, Y., Xia, X., Sun, Y., Xiao, J., Huang, L. (2020). Using BI-RADS stratifications as auxiliary information for breast masses classification in ultrasound images. IEEE Journal of Biomedical and Health Informatics, 25(6): 2058-2070. https://doi.org/10.1109/JBHI.2020.3034804

[29] Pouryahya, M., Oh, J.H., Javanmard, P., Mathews, J.C., Belkhatir, Z., Deasy, J.O., Tannenbaum, A.R. (2020). aWCluster: A novel integrative network-based clustering of multiomics for subtype analysis of cancer data. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 19(3): 1472-1483. https://doi.org/10.1109/TCBB.2020.3039511

[30] Tsafas, V., Oikonomidis, I., Gavgiotaki, E., Tzamali, E., Tzedakis, G., Fotakis, C., Athanassakis, I., Filippidis, G. (2022). Application of a deep-learning technique to non-linear images from human tissue biopsies for shedding new light on breast cancer diagnosis. IEEE Journal of Biomedical and Health Informatics, 26(3): 1188-1195. https://doi.org/10.1109/JBHI.2021.3104002

[31] Huang, Z., Chen, D. (2021). A breast cancer diagnosis method based on VIM feature selection and hierarchical clustering random forest algorithm. IEEE Access, 10: 3284-3293. https://doi.org/10.1109/ACCESS.2021.3139595

[32] Sinibaldi, A., Allegretti, M., Danz, N., Giordani, E., Munzert, P., Occhicone, A., Giacomini, P., Michelotti, F. (2023). Direct competitive assay for ERBB2 detection in breast cancer cell lysates using 1D photonic crystals-based biochips. IEEE Sensors Letters, 7(8): 1-4. https://doi.org/10.1109/LSENS.2023.3297372

[33] Bouasker, S., Inoubli, W., Yahia, S.B., Diallo, G. (2020). Pregnancy associated breast cancer gene expressions: New insights on their regulation based on rare correlated patterns. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 18(3): 1035-1048. https://doi.org/10.1109/TCBB.2020.3015236

[34] Yu, X., Kang, C., Guttery, D. S., Kadry, S., Chen, Y., Zhang, Y.D. (2020). ResNet-SCDA-50 for breast abnormality classification. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 18(1): 94-102. https://doi.org/10.1109/TCBB.2020.2986544

[35] Jebarani, P.E., Umadevi, N., Dang, H., Pomplun, M. (2021). A novel hybrid K-means and GMM machine learning model for breast cancer detection. IEEE Access, 9: 146153-146162. https://doi.org/10.1109/ACCESS.2021.3123425

[36] Lupat, R., Perera, R., Loi, S., Li, J. (2023). Moanna: Multi-omics autoencoder-based neural network algorithm for predicting breast cancer subtypes. IEEE Access, 11: 10912-10924. https://doi.org/10.1109/ACCESS.2023.3240515

[37] Shao, Y., Hashemi, H.S., Gordon, P., Warren, L., Wang, J., Rohling, R., Salcudean, S. (2021). Breast cancer detection using multimodal time series features from ultrasound shear wave absolute vibro-elastography. IEEE Journal of Biomedical and Health Informatics, 26(2): 704-714. https://doi.org/10.1109/JBHI.2021.3103676

[38] Alexandrou, G., Moser, N., Mantikas, K.T., Rodriguez-Manzano, J., Ali, S., Coombes, R.C., Shaw, J., Georgiou, P., Toumazou, C., Kalofonou, M. (2021). Detection of multiple breast cancer ESR1 mutations on an ISFET based lab-on-chip platform. IEEE Transactions on Biomedical Circuits and Systems, 15(3): 380-389. https://doi.org/10.1109/TBCAS.2021.3094464

[39] Chen, C., Wang, Y., Niu, J., Liu, X., Li, Q., Gong, X. (2021). Domain knowledge powered deep learning for breast cancer diagnosis based on contrast-enhanced ultrasound videos. IEEE Transactions on Medical Imaging, 40(9): 2439-2451. https://doi.org/10.1109/TMI.2021.3078370

[40] Liu, P., Fu, B., Yang, S.X., Deng, L., Zhong, X., Zheng, H. (2020). Optimizing survival analysis of XGBoost for ties to predict disease progression of breast cancer. IEEE Transactions on Biomedical Engineering, 68(1): 148-160. https://doi.org/10.1109/TBME.2020.2993278

[41] Singh, D., Singh, A.K., Tiwari, S. (2021). Breast thermography as an adjunct tool to monitor the chemotherapy response in a triple negative BIRADS V cancer patient: A case study. IEEE Transactions on Medical Imaging, 41(3): 737-745. https://doi.org/10.1109/TMI.2021.3122565

[42] Lu, M., Xiao, X., Pang, Y., Liu, G., Lu, H. (2022). Detection and localization of breast cancer using UWB microwave technology and CNN-LSTM framework. IEEE Transactions on Microwave Theory and Techniques, 70(11): 5085-5094. https://doi.org/10.1109/TMTT.2022.3209679

[43] Teng, J., Zhang, H., Liu, W., Shu, X.O., Ye, F. (2022). A dynamic Bayesian model for breast cancer survival prediction. IEEE Journal of Biomedical and Health Informatics, 26(11): 5716-5727. https://doi.org/10.1109/JBHI.2022.3202937

[44] Mo, Y., Han, C., Liu, Y., Liu, M., Shi, Z., Lin, J., Zhao, B., Huang, C., Qiu, B., Cui, Y., Wu, L., Pan, X., Xu, Z., Huang, X., Li, Z., Liu, Z., Wang, Y., Liang, C. (2023). Hover-trans: Anatomy-aware hover-transformer for RoI-free breast cancer diagnosis in ultrasound images. IEEE Transactions on Medical Imaging, 42(6): 1696-1706. https://doi.org/10.1109/TMI.2023.3236011

[45] Kamal, A.M., Sakorikar, T., Pal, U.M., Pandya, H.J. (2022). Engineering approaches for breast cancer diagnosis: A review. IEEE Reviews in Biomedical Engineering, 16: 687-705. https://doi.org/10.1109/RBME.2022.3181700

[46] Arya, N., Mathur, A., Saha, S., Saha, S. (2022). Proposal of SVM utility kernel for breast cancer survival estimation. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 20(2): 1372-1383. https://doi.org/10.1109/TCBB.2022.3198879

[47] Aminzadeh, A., Arhatari, B.D., Maksimenko, A., Hall, C.J., Hausermann, D., Peele, A.G., Fox, J., Kumar, B., Prodanovic, Z., Dimmock, M., Lockie, D., Pavlov, K.M., Nesterets, Y., Thompson, D., Mayo, S.C., Paganin, D.M., Taba, S.T., Lewis, S., Brennan, P.C., Quiney, H.M., Gureyev, T.E. (2022). Imaging breast microcalcifications using dark-field signal in propagation-based phase-contrast tomography. IEEE Transactions on Medical Imaging, 41(11): 2980-2990. https://doi.org/10.1109/TMI.2022.3175924

[48] Iliopoulos, I., Di Meo, S., Pasian, M., Zhadobov, M., Pouliguen, P., Potier, P., Perregrini, L., Sauleau, R., Ettorre, M. (2020). Enhancement of penetration of millimeter waves by field focusing applied to breast cancer detection. IEEE Transactions on Biomedical Engineering, 68(3): 959-966. https://doi.org/10.1109/TBME.2020.3014277

[49] Misra, S., Jeon, S., Managuli, R., Lee, S., Kim, G., Yoon, C., Lee, S., Barr, R.G., Kim, C. (2021). Bi-modal transfer learning for classifying breast cancers via combined b-mode and ultrasound strain imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 69(1): 222-232. https://doi.org/10.1109/TUFFC.2021.3119251

[50] Lahiri, A., Vundavilli, H., Mondal, M., Bhattacharjee, P., Decker, B., Del Priore, G., Peter Reeves, N., Datta, A. (2023). Drug target identification in triple negative breast cancer stem cell pathways: A computational study of gene regulatory pathways using boolean networks. IEEE Access, 11: 56672-56690. https://doi.org/10.1109/ACCESS.2023.3283291