Enhanced Prediction of Fetal Heart Chamber Defects Using a Deep Belief Network-Based Transit Search Method

Shobana Nageswari Chandrasekaran*, Vimal Kumar Maanuguru Nagaraju, Vini Antony Grace Nicholas, Thiyagarajan Jayaraman

Department of Electronics and Communication Engineering, R.M.D. Engineering College, Chennai 601206, Tamilnadu, India

Department of Mechatronics Engineering, Sona College of Technology, Salem 636005, Tamilnadu, India

Corresponding Author Email: shobana.ece@rmd.ac.in

Pages: 585-597 | DOI: https://doi.org/10.18280/ts.410204

Received: 20 June 2023 | Revised: 25 November 2023 | Accepted: 29 December 2023 | Available online: 30 April 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

The detection of Fetal Heart Chamber Defects (FHCD) through ultrasound (US) imaging presents significant challenges due to issues with contrast, lighting, and image clarity. This impedes the visual detection of FHCDs, thus affecting the accuracy of computer-assisted diagnosis outcomes. Addressing this challenge, an innovative approach is proposed, employing a novel segmentation and prediction methodology using the Enhanced Deep Belief Network-based Transit Search Algorithm (EDBN-TS). The key innovation of this study is the development of an augmented deep learning model, fine-tuned with an optimization algorithm, thereby enhancing the precision in predicting FHCDs. In the initial phase, US fetal imaging data are collected and subjected to pre-processing. This involves frame extraction, label removal, and application of filtering techniques, significantly enhancing image quality. Subsequently, feature extraction is carried out on these pre-processed images using the Grey Level Co-occurrence Matrix (GLCM) method. This step crucially minimizes redundant data in the pre-processed images. The segmented representation of the extracted features is then achieved via the Otsu Thresholding method, simplifying the image representation into a format more conducive to analysis. The final prediction stage utilizes the EDBN, wherein the hidden neurons of the Deep Belief Network (DBN) are fine-tuned using the Transit Search (TS) algorithm. This is aimed at maximizing accuracy and precision, the primary objectives for enhancing the prediction of FHCD. The integration of an optimization method in tuning the deep learning model represents a significant advancement in the prediction of FHCDs, yielding superior results in comparison to conventional methods across various analytical parameters. The proposed EDBN-TS model demonstrates an improvement in precision, accuracy, sensitivity, F1 Score, and specificity by 1.76%, 0.75%, 1.76%, 2.22%, and 2.80% respectively, compared to existing models. Furthermore, the rate of successful FHCD predictions is enhanced by approximately 25%.

Keywords: 

Enhanced Deep Belief Network (EDBN), Fetal Heart Chamber Defect (FHCD) Segmentation and Prediction, Grey Level Co-occurrence Matrix (GLCM), Otsu Thresholding, Transit Search (TS) algorithm

1. Introduction

The examination of fetal heart defects (FHD) by ultrasound is challenging due to inadequate lighting, lack of clarity, and insufficient intensity [1]. The failure to visually recognize FHDs is primarily attributed to inadequate ultrasound imaging of fetal echocardiograms. The accuracy of computer-based diagnostic results is reduced by the low quality of ultrasound (US) images [2]. Currently, prenatal ultrasound screening frequently fails to detect cardiac defects in the womb, which can lead to severe illness or even death [3]. Screening programs in most high-income nations detect only 30% to 60% of heart abnormalities, depending on the type of cardiac disease and the skill level of the sonographer [4]. The accuracy of prenatal detection is significantly influenced by the sonographer's experience in performing a substantial number of routine anomaly examinations [5]. During a standard anomaly scan, approximately 49% of missed cases can be attributed to limitations in human perceptual capability [6]. The efficacy of prenatal FHD identification is notably affected by the quality of ultrasound images: cases in which FHDs were missed showed a higher occurrence of inadequate fetal cardiac ultrasound images than cases in which FHDs were detected [7]. Even when US images are of high quality, FHDs remain invisible in 20% of undiagnosed cases. Therefore, to improve the rate of FHD detection, it is necessary to enhance the accuracy of the examination [8].

Prenatal testing is predominantly conducted using US devices for several reasons, including their affordability, use of non-ionizing radiation, and user-friendly nature [9]. Despite these advantages, US imaging still encounters substantial challenges such as poor contrast, low illumination, and inadequate lighting conditions, which degrade image quality [10]. Identifying cardiac anomalies on echocardiography is difficult due to the overall darkness of the image [11]. Inadequate visibility and brightness in US images can lead to an incorrect diagnosis. To improve the identification and forecasting of FHDs, these deficiencies must be removed by transforming low-light US images into high-quality, sharp visuals [12]. Prior to establishing a prognosis, it is crucial to enhance the image quality of low-light ultrasound in fetal echocardiography. The proposed methodology therefore preprocesses the collected US fetal images using frame extraction, label removal, and filtering algorithms.

Various image processing techniques can be employed to enhance the brightness and contrast of images, although these methods sometimes involve complex quantitative calculations [13]. The efficacy of an AI-focused medical image analysis platform varies significantly with the quality of images acquired by basic techniques [14]. Deep learning (DL) models have shown promising effectiveness across many medical imaging modalities. Automatic segmentation of the cardiac chambers is crucial for the assessment and prediction of heart diseases [15]. The continuous progress of deep learning has brought significant advantages to image processing and computer vision problems, as evidenced by many techniques.

The contributions of this paper are:

  • To segment and predict FHCD utilizing a cutting-edge intelligence technology called EDBN-TS.
  • To initially gather and pre-process data from US fetal imaging using frame extraction, label removal, and filtering techniques.
  • To extract the features from the pre-processed images using the GLCM approach.
  • To segment the extracted features by the Otsu Thresholding method.
  • To perform prediction with the EDBN, in which the hidden neurons of the DBN are tuned by TS, with accuracy and precision maximization as the main objective function.
  • To utilize all reconstructed images as the input to the EDBN to produce the most accurate classifier for predicting FHD.

The paper is organized as follows. Section 2 presents the literature survey. Section 3 describes the proposed model and data collection for the introduced FHCD. Section 4 covers pre-processing and feature extraction. Section 5 details segmentation and prediction. Section 6 presents the results, and Section 7 concludes the paper.

2. Related Works

The literature survey related to the FHCD model is categorized into three groups: deep learning models, machine learning models, and other models.

About Deep learning models: In 2020, Dong et al. [16] proposed a general deep learning (DL) approach for fetal US CFP quality monitoring that operates automatically. To accomplish completely autonomous quality assurance, an overall quality score of every CFP was calculated, demonstrating the method's flexibility and generalization capabilities.

In 2020, Gong et al. [17] proposed a new method called DGACNN, which performs optimally at identifying FHD, attaining a rate of 85%. This system was created to address the shortage of annotated training datasets for building a strong classifier, since many video slices remain unannotated. DANomaly and GACNN (WGAN-GP plus CNN) are the two components that make up the DGACNN architecture. DANomaly, which is comparable to the ALOCC scheme and integrates cycle adversarial learning to screen video slices, was used to train a more reliable and accurate end-to-end one-class classification (OCC) system.

In 2022, Sutarno et al. [18] suggested a low-light fetal echocardiogram enhancement scheme using "FetalNet," a stacked deep neural network. 460 images were used to evaluate the suggested FetalNet system. The findings demonstrated that raw US images could be enhanced with adequate accuracy, achieving a PSNR of 30.85 dB, an SSIM of 0.96, and an MSE of 18.16. To create the optimal classifier for forecasting FHD, the reconstructed images were also employed as inputs to a CNN.

About Machine learning models: In 2023, Qiao et al. [19] proposed a PSFFGAN that uses FC sketch images to create high-quality views. Additionally, they put forth a new TGALF, which enhances the PSFFGAN and fully extracts the cardiac anatomical framework data. The empirical findings demonstrate the most suitable assessment values, with MS-SSIM, SSIM, and FID of 0.6224, 0.4627, and 83.92, respectively. The efficiency of PSFFGAN was further confirmed by evaluation by two expert cardiologists.

In 2022, Qiao et al. [20] developed a straightforward yet efficient RLDS to diagnose embryonic CHD and increase diagnostic accuracy. Their process utilizes CNNs to extract distinguishing information from the fetal cardiac anatomical diagrams. A comprehensive graphical description of the RLDS's diagnostic procedure is provided to increase its trustworthiness.

About Other models: In 2022, Pu et al. [21] proposed combining UNet, MobileNet, and an explicit FPN to create MobileUNet-FPN for the segmentation of 13 important cardiac components. To their knowledge, this was the first AI-oriented technique capable of segmenting various anatomical features in the fetal A4C image. The MobileNet backbone consists of four stages, whose features serve as the encoder, while the upsampling process serves as the decoder. The MobileUNet-FPN network is then trained concurrently at every edge node to significantly reduce network communication overhead. Several tests were run, and the findings demonstrate the model's higher efficiency on fetal A4C as well as femoral-length images.

In 2022, Qiao et al. [22] developed an intelligent FLDS. The FLDS incorporates the MRHAM described in their work for learning strong and robust characteristics, assisting the FLDS in precisely localizing the four chambers in fetal FC images. Thorough testing shows that the developed FLDS beats traditional methods with respect to recall, precision, mAP, F1 score, and FPS (reported values of 0.971, 0.919, 0.944, and 43). Furthermore, the PASCAL VOC dataset, which contains natural images, was used to evaluate the suggested FLDS, yielding a higher mAP of 0.878.

In 2022, Sengan et al. [23] proposed a unique ARVNet framework. FRRV, FRLV, FRRA, FRLA, and FRTV were investigated in this research; images without rhabdomyoma signify "NC." The findings show that the suggested approach has strong CRD detection accuracy even with a fairly limited dataset and demonstrates better results in discovering CRDs.

In 2020, Xu et al. [24] released CU-net. Firstly, it obtains distinct tissue borders and resolves the gradient-vanishing issue caused by extending the depth of the network. Secondly, the links between CU-net layers convey prior information from the shallow pooling layers to the deepest layer in order to provide more accurate segmentation. Finally, to maintain fine-grained structural data and establish distinct borders, the technique makes use of an SSIM loss. The suggested technique obtains a Hausdorff distance of 3.33 and a pixel accuracy of 0.929, as shown by numerous tests, demonstrating its usefulness and promise for clinical application.

In 2022, Ogenyi et al. [25] assessed and documented typical sonographic fetal heart rate (FHR) values to identify the role of FHR in predicting gestational age. A planned cross-sectional study was carried out on 2727 low-risk singleton expectant mothers. From January 2019 to December 2020, a specialist radiologist and three skilled sonographers collected FHR values using a transabdominal technique. Every participant underwent two FHR assessments. During the initial trimester, the fetal presentation and lie were also recorded. The data were examined using SPSS version 24 (IBM, Armonk, NY, USA).

Research gaps and challenges: The reviewed works report the results of screening investigations that offer insights into the proportion and prevalence of heart abnormalities. In the bulk of these studies, the authors classified all anomalies as noteworthy. Some cardiac disorders, such as complete heart block, cardiomyopathies, and cardiac malignancies, may not exhibit symptoms until the later stages of pregnancy. Some forms of CHD, such as pulmonary and aortic stenosis, may not be detectable at 11 to 13 weeks of pregnancy because they can progress into more serious abnormalities later. Despite the customary anomaly scan during the second trimester, these cardiac abnormalities often remain undetected. Technical challenges, such as low imaging resolution, insufficient clarity relative to the size of the organs being observed, and fetal movements, can hinder the accurate assessment of the fetal heart. Therefore, to address the limitations in the existing literature, innovative deep learning-based optimization methods are employed to precisely forecast the FHCD.

3. Proposed Model and Data Collection for the Introduced FHCD Model

3.1 Proposed model

The developed FHCD model includes phases such as data collection, pre-processing, feature extraction, segmentation, and prediction. Initially, the images are gathered from US fetal imaging. The collected data undergoes pre-processing using frame extraction, label removal, and filtering techniques. From the pre-processed images, the features are extracted using the GLCM approach. Using these extracted features, segmentation is performed by the Otsu Thresholding method.

Figure 1. Proposed FHCD model

The prediction of these segmented images is also done by EDBN, with the goal of improving accuracy and precision being the final objective function. This is done by tuning the hidden neurons of DBN. This EDBN predicts the final outcome as to whether the FHCD is present or not. The proposed FHCD model is pictorially given in Figure 1.

3.2 Data collection

With a frame rate of 25 frames per second as well as an average duration of 1 minute, 2D US cineloop videos of the fetal heart were created at gestational ages ranging from 21 to 27 weeks. The fetal heart's four-chamber apical image was employed for research. The Chennai-based Mediscan Pvt. Ltd. contributed the cineloop patterns. For the investigation, 80 echocardiographic frames were employed, 40 of which were normal and 40 of which were abnormal. In the case of any abnormal functionalities in the fetal heart, they are classified as abnormal; otherwise, they are categorized as normal. The defective images featured several embryonic cardiac anomalies.
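As an illustration of how frames might be pulled from such cineloops, the hedged Python sketch below extracts, grayscales, and resizes frames to the 256×256 format used in the next section. The paper does not specify the tooling; OpenCV, the sampling step, and the file name are assumptions made purely for illustration.

```python
# Hypothetical sketch: extracting grayscale 256x256 frames from a fetal-heart cineloop.
# OpenCV is assumed here; the actual frame-extraction tooling is not named in the paper.
import cv2

def extract_frames(video_path, size=(256, 256), step=5):
    """Return resized grayscale frames, sampling every `step`-th frame of the cineloop."""
    frames = []
    cap = cv2.VideoCapture(video_path)      # 25 fps cineloop, roughly 1 minute long
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:                          # end of the cineloop
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            frames.append(cv2.resize(gray, size))
        idx += 1
    cap.release()
    return frames

# Example (placeholder path): apical four-chamber cineloop
# frames = extract_frames("cineloop_a4c.avi")
```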

4. Pre-Processing and Feature Extraction for the Introduced FHCD Model

4.1 Pre-processing

The pre-processing of the gathered images for the developed FHCD model is done through frame extraction, label removal, and filtering techniques. The retrieved frames are originally in grayscale and are resized to 256×256, a size chosen mainly for its superior clarity and quality. Speckle noise is a common feature of US images, making successful chamber delineation dependent on preprocessing. The resized frames are first processed with the anisotropic diffusion filter, which accentuates the edges, and then with the LoG filter. As a consequence, the visual definition of the chamber borders improves. The filtering operation yields a PSNR of 26.87 dB, a measure of quality between the original and the processed image; this value indicates better quality of the reconstructed image than that obtained with conventional models. The LoG filter is described below.

The $\nabla^2G$ operator is another name for the LoG operator. The $\nabla^2G$ operator resembles the initial filtering stage of biological vision systems. It combines two functions: the Laplacian and the Gaussian. The Laplacian is a non-directional linear differential operator. The Laplacian of an image $g(x,y)$ may be described as:

$M(g(x, y))=\frac{\partial^2 g(x, y)}{\partial x^2}+\frac{\partial^2 g(x, y)}{\partial y^2}$                        (1)

The 2-D Gaussian operator is characterized as follows:

$\mathrm{H}(\mathrm{x}, \mathrm{y})=\frac{1}{2 \pi \sigma^2} \exp \left(-\frac{\mathrm{s}^2}{2 \sigma^2}\right)$                   (2)

Here, $\mathrm{s}^2=\mathrm{x}^2+\mathrm{y}^2$, and $\sigma$ represents the standard deviation (space constant) of the Gaussian function. Consequently, applying the $\nabla^2G$ operator to the image $g(x, y)$ yields the following output.

$g^{\prime \prime}(x, y)=\left[\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right) \mathrm{H}(x, y)\right] * g(x, y)$                   (3)

Hence, the 2-D LoG operator with a zero center has the following format:  

$\operatorname{LoG}(\mathrm{x}, \mathrm{y})=\frac{-1}{\pi \sigma^4}\left[1-\frac{\mathrm{s}^2}{2 \sigma^2}\right] \exp \left(-\frac{\mathrm{s}^2}{2 \sigma^2}\right)$                    (4)

The edges are produced by convolving the image with the LoG operator and locating the zero crossings of its second derivative. The anisotropic diffusion filter is better at providing variance reduction, mean preservation, and edge localization, while the LoG filter minimizes the effect of intensity changes produced by noise. Both filters are effective in removing speckle noise; consequently, the feature extraction process of the proposed FHCD model becomes easier since the images are free from speckle noise.
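For concreteness, the minimal Python sketch below mirrors the two-stage filtering described above: a Perona-Malik-style anisotropic diffusion pass followed by a Laplacian-of-Gaussian response corresponding to Eq. (4). The iteration count, conduction constant, and sigma are illustrative assumptions, not the settings used to obtain the reported 26.87 dB PSNR.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.15):
    """Perona-Malik diffusion: smooths speckle while preserving chamber edges.
    Boundary handling via np.roll (periodic) is a simplification."""
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # finite-difference gradients in the four principal directions
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # edge-stopping conduction coefficients (exponential form)
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def log_response(img, sigma=2.0):
    """Laplacian-of-Gaussian response (Eq. (4)); zero crossings mark edges."""
    return gaussian_laplace(img.astype(np.float64), sigma=sigma)

# denoised = anisotropic_diffusion(frame)    # frame: 256x256 grayscale array
# edges = log_response(denoised)
```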

4.2 Feature extraction

The feature extraction for the developed FHCD model is accomplished by the GLCM method. GLCM is a second-order statistical texture analysis approach. It analyzes the spatial relationship between pixels and describes how frequently pairs of pixel values occur in an image at a given distance and direction. As one of the familiar texture descriptors, it measures the intensity changes relative to every pixel spaced by a certain distance and is used to identify the characteristics. The extracted characteristics are listed below.

Contrast: It reveals the range of fluctuation of the image's intensity. A value of zero denotes uniformity, whereas a large value denotes the existence of noise as well as edges in the image. It is given by the equation below.

contrast $=\sum_{\mathrm{j}=0}^{\mathrm{H}-1} \sum_{\mathrm{k}=0}^{\mathrm{H}-1}(\mathrm{j}-\mathrm{k})^2 \mathrm{Q}_{\mathrm{j}, \mathrm{k}}$                    (5)

Here, $Q_{j, k}$ denotes the GLCM element at indices $(\mathrm{j}, \mathrm{k})$, and $H$ denotes the total count of grey levels.

Correlation: This metric quantifies the extent to which a pixel is influenced by its neighboring pixels. The correlation is determined by the following equation. When the pixels are uniformly distributed, a high correlation value is typically observed.

correlation $=\sum_{\mathrm{j}=1}^{\mathrm{H}} \sum_{\mathrm{k}=1}^{\mathrm{H}} \frac{(\mathrm{jk}) \mathrm{Q}(\mathrm{j}, \mathrm{k})-\left(\mu_{\mathrm{y}} \times \mu_{\mathrm{z}}\right)}{\sigma_{\mathrm{y}} \times \sigma_{\mathrm{z}}}$                   (6)

Here, the equations below yield $\mu_{\mathrm{y}}$ and $\mu_{\mathrm{z}}$, the row and column means, respectively.

$\mu_y=\sum_{j=0}^{H-1} j Q_y(j)$                      (7)

$\mu_{\mathrm{z}}=\sum_{\mathrm{k}=0}^{\mathrm{H}-1} \mathrm{kQ}_{\mathrm{z}}(\mathrm{k})$                      (8)

The row and column standard deviations, $\sigma_y$ and $\sigma_z$, are given by:

$\sigma_y^2=\sum_{j=0}^{H-1}\left(Q_y(j)-\mu_y(j)\right)^2$                       (9)

$\sigma_z^2=\sum_{\mathrm{k}=0}^{\mathrm{H}-1}\left(\mathrm{Q}_{\mathrm{z}}(\mathrm{k})-\mu_{\mathrm{z}}(\mathrm{k})\right)^2$                          (10)

Entropy: This measure reflects the level of disorder or randomness in the pixel arrangement of an image, and it is inversely related to the uniformity of the image's pixel distribution.

entropy $=-\sum_{\mathrm{j}=0}^{\mathrm{H}-1} \sum_{\mathrm{k}=0}^{\mathrm{H}-1} \mathrm{Q}(\mathrm{j}, \mathrm{k}) \log (\mathrm{Q}(\mathrm{j}, \mathrm{k}))$                    (11)

Homogeneity: This term is a measure of the smoothness or uniformity in an image, and it varies inversely with the level of contrast present in the image.

homogeneity $=\sum_{j=1}^H \sum_{k=1}^H \frac{Q(j, k)}{1+(j-k)^2}$                      (12)

Sum Average: This parameter is calculated using the equation below and represents the average of the sum of grey levels distributed throughout the image.

sum average $=\sum_{\mathrm{j}=0}^{2 \mathrm{H}-2} \mathrm{jq}_{\mathrm{y}+\mathrm{z}}(\mathrm{j})$                       (13)

$\mathrm{q}_{\mathrm{y}+\mathrm{z}}(\mathrm{l})=\sum_{\mathrm{j}=1}^{\mathrm{H}} \sum_{\substack{\mathrm{k}=1 \\ \mathrm{j}+\mathrm{k}=\mathrm{l}}}^{\mathrm{H}} \mathrm{q}(\mathrm{j}, \mathrm{k}), \quad \mathrm{l}=2,3,4, \cdots, 2 \mathrm{H}$                    (14)

Sum Entropy: This metric indicates the degree of disorder or randomness in the distribution of the sum of grey levels in the image.

sum entropy $=-\sum_{\mathrm{j}=2}^{2 \mathrm{H}} \mathrm{q}_{\mathrm{y}+\mathrm{z}}(\mathrm{j}) \log \left\{\mathrm{q}_{\mathrm{y}+\mathrm{z}}(\mathrm{j})\right\}$                     (15)

Autocorrelation: This parameter quantifies the degree of repetition or regularity in the texture of an image, essentially distinguishing between fine and coarse textures.

autocorrelation $=\sum_{j=0}^{\mathrm{H}-1} \sum_{\mathrm{k}=0}^{\mathrm{H}-1}(\mathrm{jk}) \mathrm{q}(\mathrm{j}, \mathrm{k})$                        (16)

Cluster Prominence and Cluster Shade serve as indicators of the asymmetry or skewness in the matrix, with high values indicating an imbalance in the image distribution.

cluster prominence $=\sum_{\mathrm{j}=0}^{\mathrm{H}-1} \sum_{\mathrm{k}=0}^{\mathrm{H}-1}\left(\mathrm{j}+\mathrm{k}-\mu_{\mathrm{y}}-\mu_{\mathrm{z}}\right)^4 \cdot \mathrm{Q}(\mathrm{j}, \mathrm{k})$                        (17)

cluster shade $=\sum_{\mathrm{j}=0}^{\mathrm{H}-1} \sum_{\mathrm{k}=0}^{\mathrm{H}-1}\left(\mathrm{j}+\mathrm{k}-\mu_{\mathrm{y}}-\mu_{\mathrm{z}}\right)^3 \cdot \mathrm{Q}(\mathrm{j}, \mathrm{k})$                  (18)

The GLCM is used for extracting the above-mentioned statistical texture parameters. These parameters are very helpful in performing the segmentation process of the proposed FHCD model.
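A compact NumPy sketch of the GLCM construction and of four of the features above (contrast, correlation, entropy, and homogeneity, Eqs. (5)-(12)) is given below. The quantization to 32 grey levels and the single (1, 0) offset are assumptions made for brevity; the paper does not state the offsets or number of grey levels used.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=32):
    """Symmetric, normalised grey-level co-occurrence matrix for one offset.
    Assumes an 8-bit grayscale input image."""
    q = np.clip(np.floor(img.astype(np.float64) / 256.0 * levels).astype(int), 0, levels - 1)
    P = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = q.shape
    for y in range(rows - dy):
        for x in range(cols - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1.0
    P += P.T                                   # make the matrix symmetric
    return P / P.sum()

def glcm_features(P):
    """Contrast, correlation, entropy and homogeneity in the spirit of Eqs. (5)-(12)."""
    H = P.shape[0]
    j, k = np.meshgrid(np.arange(H), np.arange(H), indexing="ij")
    mu_y, mu_z = (j * P).sum(), (k * P).sum()              # marginal means
    sd_y = np.sqrt(((j - mu_y) ** 2 * P).sum())            # marginal standard deviations
    sd_z = np.sqrt(((k - mu_z) ** 2 * P).sum())
    contrast = ((j - k) ** 2 * P).sum()
    correlation = ((j - mu_y) * (k - mu_z) * P).sum() / (sd_y * sd_z + 1e-12)
    entropy = -(P[P > 0] * np.log(P[P > 0])).sum()
    homogeneity = (P / (1.0 + (j - k) ** 2)).sum()
    return {"contrast": contrast, "correlation": correlation,
            "entropy": entropy, "homogeneity": homogeneity}

# features = glcm_features(glcm(denoised_frame))
```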

5. Segmentation and Prediction for the Introduced FHCD Model

5.1 Segmentation

For the extracted set of features, segmentation is accomplished by the Otsu Thresholding method. Otsu thresholding is employed for the automatic binarization-level decision on the basis of the shape of the histogram. The Otsu technique, which operates on the presumption that an image's pixels form two classes (a bimodal histogram), autonomously chooses the ideal global threshold for the image. In Otsu thresholding, the threshold that minimizes the intra-class variance is searched exhaustively. Eq. (19) describes the intra-class variance as the weighted sum of the variances of the two classes:

$\sigma_{\mathrm{x}}^2(\mathrm{u})=\mathrm{r}_1(\mathrm{u}) \sigma_1^2(\mathrm{u})+\mathrm{r}_2(\mathrm{u}) \sigma_2^2(\mathrm{u})$                    (19)

Here, the subscripts 1 and 2 respectively denote the foreground and background classes. The class probabilities and variances are computed as follows:

$r_1(u)=\sum_{j=1}^u Q(j)$                       (20)

In the above equation, $r_1(u)$ denotes the probability of the foreground class.

$\mathrm{r}_2(\mathrm{u})=\sum_{\mathrm{j}=\mathrm{u}+1}^{\mathrm{L}} \mathrm{Q}(\mathrm{j})$                      (21)

In the above equation, $\mathrm{r}_2(\mathrm{u})$ denotes the probability of the background class.

$\sigma_1^2(u)=\sum_{j=1}^u\left[j-\mu_1(u)\right]^2 \frac{Q(j)}{r_1(u)}$                        (22)

Here, $\sigma_1^2(u)$ denotes the variance of class 1.

$\sigma_2^2(u)=\sum_{j=u+1}^L\left[j-\mu_2(u)\right]^2 \frac{Q(j)}{r_2(u)}$                     (23)

Here, $\sigma_2^2(u)$ denotes the variance of class 2.

The class means, $\mu_1(u)$ and $\mu_2(u)$, are computed as below:

$\mu_1(u)=\sum_{j=1}^u \frac{j Q(j)}{r_1(u)}$                          (24)

In the above equation, $\mu_1(u)$ denotes the mean of class 1.

$\mu_2(u)=\sum_{j=u+1}^L \frac{j Q(j)}{r_2(u)}$                   (25)

In the above equation, $\mu_2(u)$ denotes the mean of class 2. Here, an image's pixel values range from 0 to L.
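The exhaustive search described by Eqs. (19)-(25) can be written directly as the short Python function below; it is a sketch that assumes 8-bit grey levels and returns the threshold u minimizing the intra-class variance.

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Exhaustive search for the threshold u minimising the intra-class
    variance of Eq. (19), assuming integer grey levels in [0, levels)."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    Q = hist.astype(np.float64) / hist.sum()      # grey-level probabilities Q(j)
    grey = np.arange(levels, dtype=np.float64)
    best_u, best_var = 0, np.inf
    for u in range(1, levels - 1):
        r1, r2 = Q[:u].sum(), Q[u:].sum()         # class probabilities (Eqs. (20)-(21))
        if r1 == 0 or r2 == 0:
            continue
        mu1 = (grey[:u] * Q[:u]).sum() / r1       # class means (Eqs. (24)-(25))
        mu2 = (grey[u:] * Q[u:]).sum() / r2
        var1 = ((grey[:u] - mu1) ** 2 * Q[:u]).sum() / r1   # class variances (Eqs. (22)-(23))
        var2 = ((grey[u:] - mu2) ** 2 * Q[u:]).sum() / r2
        within = r1 * var1 + r2 * var2            # intra-class variance (Eq. (19))
        if within < best_var:
            best_u, best_var = u, within
    return best_u

# mask = (frame >= otsu_threshold(frame)).astype(np.uint8)   # binary segmentation
```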

5.2 Prediction

The prediction of the segmented images for the developed FHCD model is performed by the EDBN model. The EDBN is selected here since it predicts FHCD more accurately than state-of-the-art deep learning models, as clearly revealed in the results section across various analyses. Here, the hidden neurons of the DBN are tuned by the TS algorithm, with accuracy and precision maximization as the major objective functions. In the DBN, the regrets, described as the discrepancy between forecasts and actual outcomes, can be used to optimize the learning framework of the DBN through backpropagation. Hence, the DBN is well suited to the needs of prediction. The DBN is composed of multiple stacked Restricted Boltzmann Machine (RBM) layers. The process below illustrates the sequence learning procedure:

Input: The three-dimensional vectors in [0, 1] space serve as the inputs for DBN. To determine $\forall y_j \in Y_k(u)$, the $I_j(u)$ is substituted with $Y_k(u)$.

$\mathrm{y}_{\mathrm{j}}=\frac{\mathrm{u}_{\mathrm{j}}-\mathrm{u}_{\mathrm{in}}}{\mathrm{u}_{\mathrm{la}}-\mathrm{u}_{\mathrm{in}}}$                        (26)

Here, $u_{i n}$ represents the first iteration and $u_{l a}$ represents the last iteration. It is evident that $y_j \in[0,1]$. Hence, the $Y_k(u)$ can be selected as input.

RBM's training process: The energy function of the RBM is described as follows. The bias of unit $\mathrm{j}$ in the visible layer is designated $\mathrm{b}_{\mathrm{j}}$, while the bias of unit $\mathrm{k}$ in the hidden layer is designated $\mathrm{c}_{\mathrm{k}}$. Let the weight be designated $\mathrm{x}_{\mathrm{jk}}$, where $\phi=\{\mathrm{x}, \mathrm{b}, \mathrm{c}\}$.

$G_\phi(l, m)=-\sum_{j \in o_1} b_j l_j-\sum_{k \in o_m} c_k m_k-\sum_{\{j, k\} \in o_1 \times o_m} x_{j, k} l_j m_k$                            (27)

The vector $l$ corresponds to the visible layer and the vector $m$ to the hidden layer. The activation probability $h_k$ of hidden unit $k$ is described as:

$Q\left(h_k \mid l\right)=\operatorname{sig}\left(c_k+\sum_{j \in o_l} x_{j, k} l_j\right)$                          (28)

The sigmoid function of the form $\frac{1}{1+e^{-x}}$ is denoted $\operatorname{sig}(.)$. The activation threshold is specified as $\Omega$.

$h_k=\left\{\begin{array}{cc}1, & Q\left(h_k \mid l\right) \geq \Omega \\ 0, & \text { otherwise }\end{array}\right.$                       (29)

The conditional joint probability of $(l, m)$ under $\phi$ can be obtained using:

$\mathrm{Q}(\mathrm{l}, \mathrm{m} \mid \phi)=\frac{1}{\mathrm{~A}_{\mathrm{c}}} \mathrm{e}^{-\mathrm{G}(\mathrm{l}, \mathrm{m})}$                      (30)

In the above equation, $\mathrm{Q}(\mathrm{l}, \mathrm{m} \mid \phi)$ denotes the joint probability under $\phi$.

$\mathrm{A}_\phi=\sum_{\mathrm{l}, \mathrm{m}} \mathrm{e}^{-\mathrm{G}_{\phi}(\mathrm{l}, \mathrm{m})}$                         (31)

The cost function below is used to update $\phi$:

$\mathrm{A}(\phi)=\sum_{\mathrm{e} \in \mathrm{E}} \log \mathrm{Q}_{\mathrm{e}}(\mathrm{l} \mid \mathrm{m}, \phi)$                     (32)

Here, $E$ denotes the training set, and the update of $\phi$ is given as follows:

$\phi=\phi+\rho \frac{\partial \mathrm{A}(\phi)}{\partial \phi}$                       (33)

Here, $\rho$ denotes the learning rate of the RBM.
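As a concrete reading of Eqs. (27)-(33), the NumPy sketch below performs one contrastive-divergence (CD-1) update of a single RBM layer. The batch handling and the CD-1 approximation of the gradient are standard choices assumed here; the paper does not detail the exact training routine.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_cd1_step(l, x, b, c, rho=0.01, rng=np.random.default_rng(0)):
    """One CD-1 update of the RBM parameters phi = {x, b, c} (Eq. (33)).

    l : (batch, n_visible) visible vectors in [0, 1]
    x : (n_visible, n_hidden) weights; b, c : visible / hidden biases.
    """
    # positive phase: hidden activation probabilities (Eq. (28))
    q_h = sigmoid(c + l @ x)
    h = (q_h > rng.random(q_h.shape)).astype(float)   # stochastic activation (Eq. (29))
    # negative phase: one-step reconstruction of the visible layer and its hidden response
    q_l = sigmoid(b + h @ x.T)
    q_h_recon = sigmoid(c + q_l @ x)
    # gradient ascent on the log-likelihood cost (Eq. (32)), CD-1 approximation
    n = l.shape[0]
    x += rho * (l.T @ q_h - q_l.T @ q_h_recon) / n
    b += rho * (l - q_l).mean(axis=0)
    c += rho * (q_h - q_h_recon).mean(axis=0)
    return x, b, c
```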

Regulation and outputs: The output units are sigmoid functions, with $\mathrm{z}_{\mathrm{d}}=\operatorname{sig}\left(\mathrm{f}+\sum_{\mathrm{j} \in \mathrm{O}_{\mathrm{w}}} \mathrm{x}_{\mathrm{j}, \mathrm{k}} \mathrm{m}_{\mathrm{j}}\right)$ in the output layer, where $f$ denotes the bias. To monitor and control the DBN online, the regrets are given by $l(\mathrm{u})=\sum_{\mathrm{e} \in \mathrm{E}} \sum_{\mathrm{d} \in \mathrm{O}}\left\|\mathrm{z}_{\mathrm{d}}(\mathrm{u}-1)-\mathrm{n}_{\mathrm{e}, \mathrm{d}}(\mathrm{u})\right\|^2$, where $\mathrm{z}_{\mathrm{d}}(\mathrm{u}-1)$ denotes the value of $\mathrm{z}_{\mathrm{d}}$ at iteration $\mathrm{u}-1$ and $n_{e, d}(u)$ is described as follows:

$\mathrm{n}_{\mathrm{e}, \mathrm{d}}=\left\{\begin{array}{cc}1, & \text { prediction count } \\ 0, & \text { otherwise }\end{array}\right.$                       (34)

The EDBN-based prediction for the FHCD model is given pictorially in Figure 2. The major objective of the EDBN-based prediction for the FHCD model is to optimize the hidden neurons of DBN by the TS with the intention of accuracy plus precision maximization as below.

Objective function $=\underbrace{\operatorname{argmax}}_{\mathrm{HN}_{\mathrm{DBN}}}($ accuracy + precision $)$                      (35)

Here, the term $\operatorname{argmax}$ defines the maximization function, and $\mathrm{HN}_{\mathrm{DBN}}$ denotes the hidden neurons of the DBN.

Figure 2. EDBN-based prediction for the developed FHCD model
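A small sketch of how the objective of Eq. (35) might be evaluated for one candidate hidden-neuron configuration is shown below. The function `train_dbn` and the train/validation split are hypothetical placeholders standing in for the authors' DBN pipeline.

```python
# Hypothetical fitness for Eq. (35): the TS optimiser maximises accuracy + precision
# over candidate hidden-neuron counts HN_DBN. `train_dbn` is a placeholder and is
# not defined in the paper.
import numpy as np

def fitness(hidden_neurons, X_train, y_train, X_val, y_val, train_dbn):
    model = train_dbn(X_train, y_train, hidden_neurons)    # placeholder training call
    y_pred = model.predict(X_val)
    tp = np.sum((y_pred == 1) & (y_val == 1))
    tn = np.sum((y_pred == 0) & (y_val == 0))
    fp = np.sum((y_pred == 1) & (y_val == 0))
    fn = np.sum((y_pred == 0) & (y_val == 1))
    accuracy = (tp + tn) / max(tp + tn + fp + fn, 1)       # max(...) guards empty splits
    precision = tp / max(tp + fp, 1)
    return accuracy + precision                            # quantity maximised by TS
```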

5.3 TS algorithm

The TS algorithm was chosen to optimize the hidden neurons in the EDBN-based prediction model of the proposed FHCD framework, mainly because it can return better local as well as global optimal solutions. It enhances the prediction phase of the proposed FHCD model by optimizing the hidden neurons of the existing DBN model to produce the novel EDBN model, which in turn returns better prediction accuracy than the considered traditional models. These parameters are optimized with accuracy plus precision maximization as the major objective. The SNR and the count of host stars $\left(o_t\right)$ are the two factors specified in the algorithm. The SNR parameter is chosen on the basis of the transit framework, and the standard deviation is used to assess the noise. The information attained from the star images corresponds to real observations. It must be remembered that the number of observed signals is equal to the product of the two algorithmic parameters ($\mathrm{o}_{\mathrm{t}}$ and $\mathrm{TO}$). The TS implementation process consists of the galaxy, star, transit, planet, neighbor, and exploitation stages.

The algorithm begins by choosing a galaxy: the galactic center is selected randomly as a position in the search space. Once this position is known, the galaxy's habitable zone (life belt) must be established. To achieve this, the prospects for the optimal stellar regions are determined by evaluating $\mathrm{o}_{\mathrm{t}} \times \mathrm{TO}$ random regions using Eqs. (36) to (38); the regions with the best fitness are then chosen. The following phases of the method start from these selected locations, since they have the possibility of sustaining life.

$\mathrm{M}_{\mathrm{S}, \mathrm{m}}=\mathrm{M}_{\text {galaxy }}+\mathrm{E}-$ noise $\mathrm{m}=1, \cdots,\left(\mathrm{o}_{\mathrm{t}} \times \mathrm{TO}\right)$                    (36)

Here, $\mathrm{M}_{\mathrm{S}, \mathrm{m}}$ denotes the selected location.

$E= \begin{cases}d_1 M_{\text {galaxy }}-M_s & \text { if } a=1 \text { (negative region) } \\ d_1 M_{\text {galaxy }}+M_s & \text { if } a=2 \text { (positive region) }\end{cases}$                    (37)

In the above equation, the term $E$ denotes the deviation component of the update.

noise $=\left(\mathrm{d}_2\right)^3 \mathrm{M}_{\mathrm{s}}$                    (38)

In the equations above, $M_{\text {galaxy }}$ represents the galaxy's center and $M_s$ represents a random position in the search space. The coefficients $d_1$ and $\mathrm{d}_2$ are, respectively, a random integer and a random vector with the dimension of the optimization problem.

The parameter $E$ expresses how the condition of the searched region differs from that of the galaxy's center. This region may lie either in front of (positive portion) or behind (negative portion) the galaxy's center. The zone parameter $(a)$ is a random integer, either 1 or 2. To improve positioning accuracy, the noise associated with the acquired signals must also be removed, since it varies considerably across settings. To lower the computational cost, the coefficient $d_2$ is raised to the power of 3.

The following phase involves selecting one star corresponding to a stellar system from each of the designated regions, utilizing Eqs. (39) to (41). As a result, the method has $o_t$ stars at the conclusion of this step. $\mathrm{M}_{\mathrm{T}}$ in Eq. (39) denotes the stars' positions. In these equations, the coefficients $\mathrm{d}_3$ and $\mathrm{d}_4$ represent random values between 0 and 1, while $\mathrm{d}_5$ is a random vector between 0 and 1.

$M_{T, j}=M_{S, j}+E-$ noise $j=1, \cdots, o_t$                   (39)

In the above equation, $M_{T, j}$ denotes the position of star $j$.

$E= \begin{cases}d_4 M_{S, j}-d_3 M_s & \text { if } a=1 \text { (negative region) } \\ d_4 M_{S, j}+d_3 M_s & \text { if } a=2 \text { (positive region) }\end{cases}$                     (40)

In the above equation, the term $E$ denotes the deviation component of the update.

noise $=\left(d_5\right)^3 M_s$                (41)

The galaxy stage of the suggested method is run only once before the iterations begin. This phase's goal is to determine the proper circumstances in which to carry out the method's key steps.

It is essential to reassess the light received from the star in order to identify any potential reduction in the received light signal that could indicate a transit. In the TS method, $M_T$ along with its corresponding fitness $\left(g_T\right)$ carries two meanings ($N_1$ and $N_2$). $N_1$ is employed when the position of a planet is estimated and modified using the position of the star. $\mathrm{N}_2$ is employed when the brightness received from the star is ascertained and modified. Consequently, a variation in $\mathrm{M}_{\mathrm{T}}$ under $\mathrm{N}_2$ denotes a new light signal, whereas a variation in $\mathrm{M}_{\mathrm{T}}$ under $\mathrm{N}_1$ denotes a variation in the star's position.

The star’s class must be specified in the TS method. Every star's brightness is taken into account for this reason, utilizing the concept of $N_2$. It is obvious that a close distance makes it possible to capture more photons. As a result, the brightness of the star is roughly determined by Eq. (42) in the suggested approach.

$M_j=\frac{S_j / o_t}{\left(e_j\right)^2} \quad j=1, \cdots, o_t \quad S_j \in\left\{1, \cdots, o_t\right\}$                      (42)

In the above equation, $M_j$ denotes the brightness of star $j$.

$e_j=\sqrt{\left(M_T-M_U\right)^2} \quad j=1, \cdots, o_t$                   (43)

Here, $\mathrm{M}_{\mathrm{j}}$ and $\mathrm{S}_{\mathrm{j}}$ stand for the brightness and rank of star $\mathrm{j}$, respectively. Moreover, $\mathrm{e}_{\mathrm{j}}$ (Eq. (43)) gives the separation between the telescope and star $\mathrm{j}$. At the beginning of the procedure, a random position for the telescope, $\mathrm{M}_{\mathrm{U}}$, is chosen; this position remains constant throughout optimization. By modifying the value of $\mathrm{M}_{\mathrm{T}}$ under the specification of $\mathrm{N}_2$, the new signal is obtained using Eqs. (44) to (46). The coefficients $\mathrm{d}_6$ and $\mathrm{d}_7$ are, respectively, a random number between -1 and 1 and a random vector between 0 and 1.

$M_{T, n e w, j}=M_{T, j}+E-$ noise $j=1, \cdots, o_t$                   (44)

In the above equation, $M_{T, new, j}$ denotes the new position.

$\mathrm{E}=\mathrm{d}_6 \mathrm{M}_{\mathrm{T}, \mathrm{j}}$                       (45)

In the above equation, the term $E$ denotes the deviation component of the update.

noise $=\left(d_7\right)^3 M_T$                    (46)

Finally, the star's brightness is recomputed (the fitness $\mathrm{g}_{\mathrm{T}}$ obtained using the new $\mathrm{M}_{\mathrm{T}, \mathrm{new}}$), and the star's new luminosity, $\mathrm{M}_{\mathrm{j}, \text{new}}$, is established by Eq. (47).

$M_{j, \text { new }}=\frac{S_{j, \text { new }} / o_t}{\left(e_{j, \text { new }}\right)^2} \quad j=1, \cdots, o_t$                    (47)

The parameter $\mathrm{e}_{\mathrm{j}, \text{new}}$ is determined from the new $\mathrm{M}_{\mathrm{T}}$ and the telescope position. Whether a transit has occurred is established by comparing $M_j$ with $M_{j, new}$: on the basis of Eq. (48), the indicator $Q_U$ is set to 1 (transit) or 0 (no transit). At the current iteration, the planet stage is executed if $Q_U=1$; otherwise, the neighbor stage is employed.

$\begin{aligned} & \text { if } M_{j, \text { new }}<M_j \quad Q_U=1 \text { (transit) } \\ & \text { if } M_{j, \text { new }} \geq M_j \quad Q_U=0 \text { (no transit) }\end{aligned}$                     (48)

If the transit is seen $(Q_U=1)$, the planet stage is carried out in the TS method by supplying the value of $Q_U$ in the earlier stage. The original position of the identified planet is initially established at this stage. When the planet passes in front of the star as well as the telescope, less light is seen (a transit happens). This allows for the determination of the planet's original position $(M_A)$ . This is accomplished by Eq. (49) in the TS method.

$M_A=\left(d_8 M_U+S_M M_{T, j}\right) / 2 \quad j=1, \cdots, o_t$                     (49)

In the above equation, $M_A$ denotes the planet's original position.

$S_M=M_{T, n e w, j} / M_{T, j}$                       (50)

The luminance ratio, as determined by Eq. (50), is represented by the parameter $S_M$. Moreover, the value of the random coefficient $d_8$ ranges from 0 to 1. The condition of the planet, which is currently located between the star and the telescope, is computed in Eq. (49).

As already established, the SNR is one of the crucial factors in establishing a transit and minimizing the effect of noise. The amount of signal obtained is analyzed to pinpoint the planet's position in its star system by estimating the planet's precise position. For this reason, the TS method takes into account a variety of TO signals (Eq. (51)). In this equation, the coefficient $d_{9}$ is a random value between 0 and 1, and the random vector $d_{10}$ ranges from -1 to 1. Following signal determination $\left(M_n\right)$, the final location of the planet $\left(M_Q\right)$ is adjusted as in Eq. (52).

$M_{n, k}=\left\{\begin{array}{cl}M_A+d_9 M_s \quad \text { if } A=1 & \text { for Aphelion region } \\ M_A-d_9 M_s \quad \text { if } A=2 \quad k=1, \cdots, TO & \text { for Perihelion region } \\ M_A+d_{10} M_s \quad \text { if } A=3 & \text { for Neutral region }\end{array}\right.$                     (51)

In the above equation, $M_{n, k}$ denotes the determined signal.

$\mathrm{M}_{\mathrm{Q}}=\frac{\sum_{\mathrm{k}=1}^{\mathrm{TO}} \mathrm{M}_{\mathrm{n}, \mathrm{k}}}{\mathrm{TO}}$                    (52)

The terms Aphelion and Perihelion are used in astronomy to define the maximum and minimum distances between a planet (such as Earth) and its host star, respectively. The Aphelion, Perihelion, and Neutral areas are three zones controlled by the zone parameter (A) in the planet stage, whose value is either 1, 2, or 3. If the planet discovered within the star structure currently under investigation has superior characteristics for supporting life compared to the previously located planet, its position is retained in each iteration of the procedure.

If a star under observation does not exhibit a transit, the neighborhood of the star's previously identified planet is investigated; in other words, a neighboring planet takes its place. Eqs. (53) to (55) are used in the neighbor stage of the TS method for this purpose. Initially, the neighbor's starting position ($\mathrm{M}_{\mathrm{A}}$) is estimated from the host star $\left(\mathrm{M}_{\mathrm{T},\text{new}}\right)$ and a random position $\left(\mathrm{M}_{\mathrm{S}}\right)$ using Eq. (53). Eqs. (54) and (55) are then used to calculate the neighbor planet's $\left(\mathrm{M}_{\mathrm{O}}\right)$ precise position. In Eq. (53), the coefficients $d_{11}$ and $d_{12}$ are random numbers between 0 and 1. Moreover, the coefficients $d_{13}$ and $d_{14}$ in Eq. (54) are, respectively, a random integer and a random vector between -1 and 1.

 $\mathrm{M}_{\mathrm{A}}=\left(\mathrm{d}_{11} \mathrm{M}_{\mathrm{t} \text {,new }}+\mathrm{d}_{12} \mathrm{M}_{\mathrm{s}}\right) / 2$                    (53)

In the above equation, $\mathrm{M}_{\mathrm{A}}$ denotes the neighbor's starting position.

$M_{o, k}=\left\{\begin{array}{cl}M_A-d_{13} M_s \quad \text { if } A=1 & \text { for Aphelion region } \\ M_A+d_{13} M_s \text { if } A=2 \quad k=1, \cdots, \text { To } & \text { for Perihelion region } \\ M_A+d_{14} M_s \quad \text { if } A=3 & \text { for Neutral region }\end{array}\right.$                     (54)

In the above equation, $M_{o, k}$ denotes the precise position of the neighbor planet.

$\mathrm{M}_{\mathrm{O}, \mathrm{j}}=\frac{\sum_{\mathrm{k}=1}^{\mathrm{TO}} \mathrm{M}_{\mathrm{o}, \mathrm{k}}}{\mathrm{TO}}$                    (55)

The ideal planet for every star is chosen in the earlier rounds. As previously mentioned, the mere discovery of a planet is not sufficient; research on the planet's properties and the prerequisites for supporting life is essential. This is carried out in the Exploitation step of the TS method. A new definition of $\mathrm{M}_{\mathrm{Q}}$ is presented at this stage: $\mathrm{M}_{\mathrm{Q}}$ in the present stage $\left(M_F\right)$ relates to the properties of the planet (such as its materials, density, atmosphere, etc.). Finally, utilizing Eqs. (56) and (57), the final features of the planet are adjusted $TO$ times $(\mathrm{k}=1, \cdots, \mathrm{TO})$ by the addition of new information $(\mathrm{L})$.

In these equations, the random numbers $\mathrm{d}_{15}$ and $\mathrm{d}_{16}$ lie between 0 and 2, and the random vector $\mathrm{d}_{17}$ lies between 0 and 1. The value $Q$ in Eq. (57) specifies a random power between 1 and $\left(o_t \times TO\right)$. The knowledge index is denoted by the random integer $d_1$ (1, 2, 3, or 4). In this step, the optimal $M_F$, i.e., the optimal planet, may be discovered for every star. The method's overall solution is the best planet found among all $o_t$ discovered planets.

$M_{F, k}=\left\{\begin{array}{ccc}d_{16} M_Q+d_{15} L & \text { if } d_1=1 & (\text { State } 1) \\ d_{16} M_Q-d_{15} L & \text { if } d_1=2 & (\text { State } 2) \\ M_Q-d_{15} L & \text { if } d_1=3 & (\text { State } 3) \\ M_Q+d_{15} L & \text { if } d_1=4 & (\text { State } 4)\end{array}\right.$                   (56)

Algorithm 1: TS algorithm

Input: Host star count $o_t$, SNR $TO$, and iteration count $o_{it}$ (segmented images of the proposed FHCD model)
Output: Position of the best planet found, $M_C$, along with its fitness $g_C$ (predicted image of the proposed FHCD model)

Initialize the position of the galaxy center
$E= \begin{cases}d_1 M_{\text {galaxy }}-M_s & \text { if } a=1 \text { (negative region) } \\ d_1 M_{\text {galaxy }}+M_s & \text { if } a=2 \text { (positive region) }\end{cases}$
noise $=\left(d_2\right)^3 M_s$
$M_{S, m}=M_{\text {galaxy }}+E-$ noise $\quad m=1, \cdots,\left(o_t \times TO\right)$
$E= \begin{cases}d_4 M_{S, j}-d_3 M_s & \text { if } a=1 \text { (negative region) } \\ d_4 M_{S, j}+d_3 M_s & \text { if } a=2 \text { (positive region) }\end{cases}$
noise $=\left(d_5\right)^3 M_s$
$M_{T, j}=M_{S, j}+E-$ noise $\quad j=1, \cdots, o_t$
Return optimal stars $M_T$ (hidden neurons of the novel EDBN-based FHCD model)
While (end criteria not met) do
    Update star positions: $M_{T, j}=M_{S, j}+E-$ noise $\quad j=1, \cdots, o_t$
    Compute star brightness: $M_j=\frac{S_j / o_t}{\left(e_j\right)^2} \quad j=1, \cdots, o_t \quad S_j \in\left\{1, \cdots, o_t\right\}$
    Compute new brightness: $M_{j, \text { new }}=\frac{S_{j, \text { new }} / o_t}{\left(e_{j, \text { new }}\right)^2} \quad j=1, \cdots, o_t$
    For $\mathrm{j}=1: \mathrm{o}_{\mathrm{t}}$
        If transit is detected ($Q_U=1$)
            $S_M=M_{T, new, j} / M_{T, j}$
            $M_A=\left(d_8 M_U+S_M M_{T, j}\right) / 2$
            $M_{n, k}=\left\{\begin{array}{cl}M_A+d_9 M_s \quad \text { if } A=1 & \text { for Aphelion region } \\ M_A-d_9 M_s \quad \text { if } A=2 \quad k=1, \cdots, TO & \text { for Perihelion region } \\ M_A+d_{10} M_s \quad \text { if } A=3 & \text { for Neutral region }\end{array}\right.$
            $\mathrm{M}_{\mathrm{Q}}=\frac{\sum_{\mathrm{k}=1}^{\mathrm{TO}} \mathrm{M}_{\mathrm{n}, \mathrm{k}}}{\mathrm{TO}}$
        else
            $M_A=\left(d_{11} M_{T, \text { new }}+d_{12} M_s\right) / 2$
            $M_{o, k}=\left\{\begin{array}{cl}M_A-d_{13} M_s \quad \text { if } A=1 & \text { for Aphelion region } \\ M_A+d_{13} M_s \quad \text { if } A=2 \quad k=1, \cdots, TO & \text { for Perihelion region } \\ M_A+d_{14} M_s \quad \text { if } A=3 & \text { for Neutral region }\end{array}\right.$
            $\mathrm{M}_{\mathrm{O}, \mathrm{j}}=\frac{\sum_{\mathrm{k}=1}^{\mathrm{TO}} \mathrm{M}_{\mathrm{o}, \mathrm{k}}}{\mathrm{TO}}$
        end
    end
    Return $M_{Q, j}$ along with its fitness $g_{Q, j}$ for every star (accuracy plus precision maximization of the proposed FHCD prediction model)
    Exploitation: $M_{F, k}=\left\{\begin{array}{lll}d_{16} M_Q+d_{15} L & \text { if } d_1=1 & (\text {State } 1) \\ d_{16} M_Q-d_{15} L & \text { if } d_1=2 & (\text {State } 2) \\ M_Q-d_{15} L & \text { if } d_1=3 & (\text {State } 3) \\ M_Q+d_{15} L & \text { if } d_1=4 & (\text {State } 4)\end{array}\right.$
    Return $M_Q$ along with its fitness $g_{Q, j}$ for every star (accuracy plus precision maximization of the proposed FHCD prediction model)
end
Return $M_C$ along with its fitness $g_C$ (final predicted image of the proposed FHCD model)
Stop

In the above equation, $M_{F, k}$ denotes the planet's features at the present stage.

$\mathrm{L}=\left(\mathrm{d}_{17}\right)^{\mathrm{Q}} \mathrm{M}_{\mathrm{s}}$                       (57)

The generic pseudocode presented in Algorithm 1 describes the TS implementation procedure.
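To make the optimization loop concrete, the following is a deliberately simplified, continuous-variable sketch of the TS search applied to hidden-neuron selection: the galaxy/star initialization and the region and noise terms of Eqs. (36)-(57) are collapsed into random initialization and a single Gaussian perturbation averaged over TO observations. It is a structural illustration under those assumptions, not a faithful re-implementation of Algorithm 1.

```python
import numpy as np

def transit_search(fitness_fn, dim=2, o_t=5, TO=4, iters=100,
                   lb=16, ub=512, rng=np.random.default_rng(0)):
    """Simplified TS-style loop: o_t 'stars' (candidate hidden-neuron vectors)
    are perturbed each iteration; improved 'planets' replace their stars."""
    stars = rng.integers(lb, ub, size=(o_t, dim)).astype(float)
    fits = np.array([fitness_fn(np.round(s).astype(int)) for s in stars])
    best = stars[np.argmax(fits)].copy()
    best_fit = fits.max()
    for _ in range(iters):
        for j in range(o_t):
            # 'transit' step: average TO noisy observations around star j
            signals = stars[j] + rng.normal(0, 0.1 * (ub - lb), size=(TO, dim))
            planet = np.clip(signals.mean(axis=0), lb, ub)
            f = fitness_fn(np.round(planet).astype(int))
            if f > fits[j]:                    # keep the planet if it is fitter
                stars[j], fits[j] = planet, f
            if f > best_fit:
                best, best_fit = planet.copy(), f
    return np.round(best).astype(int), best_fit

# Usage with the hypothetical fitness sketched earlier (all names are placeholders):
# hn_best, score = transit_search(lambda hn: fitness(hn, Xtr, ytr, Xva, yva, train_dbn))
```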

6. Results

6.1 Experimental setup

The proposed FHCD model was implemented in MATLAB 2020a, and the outcomes were analyzed; sample experimental images are shown in Figure 3. An Intel processor with four logical cores was used, with approximately 5-8 GB of RAM for a typical installation. The population size and the iteration count were fixed at 10 and 100, respectively. The developed FHCD model was compared with various optimization algorithms, such as GOA, GWO, TS, and TS-GWA, in terms of distinct analyses, such as accuracy, precision, sensitivity, specificity, and F1 Score, to prove the overall betterment of the proposed FHCD model.

Figure 3. Sample experimental images considered for the FHCD model

6.2 Implementation details

The F1 Score, Accuracy, Specificity, Precision, and Sensitivity are employed to evaluate performance and properly detect FHCD. TP, FP, TN, and FN denote the numbers of True Positives, False Positives, True Negatives, and False Negatives, respectively.

TP and TN represent correct positive and negative predictions with respect to the actual facts, while FP and FN represent incorrect positive and negative predictions. Higher values of these metrics indicate better results from the FHCD model. The equations used for the analysis are shown below.

specificity $=\frac{\mathrm{TN}}{\mathrm{FP}+\mathrm{TN}}$                     (58)

precision $=\frac{\mathrm{TP}}{\mathrm{FP}+\mathrm{TP}}$                      (59)

sensitivity $=\frac{\mathrm{TP}}{\mathrm{FN}+\mathrm{TP}}$                    (60)

accuracy $=\frac{\mathrm{TN}+\mathrm{TP}}{\mathrm{TN}+\mathrm{FN}+\mathrm{FP}+\mathrm{TP}}$                    (61)

f1 score $=2 \times \frac{\text { sensitivity } \times \text { precision }}{\text { sensitivity }+ \text { precision }}$                    (62)
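The five metrics of Eqs. (58)-(62) follow directly from the confusion-matrix counts, as in the small helper below; the example counts in the trailing comment are hypothetical.

```python
def classification_metrics(tp, tn, fp, fn):
    """Evaluation metrics of Eqs. (58)-(62) from the confusion-matrix counts."""
    specificity = tn / (fp + tn)
    precision = tp / (fp + tp)
    sensitivity = tp / (fn + tp)
    accuracy = (tn + tp) / (tn + fn + fp + tp)
    f1 = 2 * sensitivity * precision / (sensitivity + precision)
    return dict(specificity=specificity, precision=precision,
                sensitivity=sensitivity, accuracy=accuracy, f1=f1)

# Example on an 80-frame split (hypothetical counts):
# print(classification_metrics(tp=39, tn=39, fp=1, fn=1))
```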

6.3 Specificity analysis

Table 1 clearly portrays the specificity analysis of the developed FHCD model along with the existing methods. The proposed EDBN-TS reveals better specificity than the other considered traditional methods, thereby revealing the supremacy of the proposed model in predicting FHCD.

The analysis also improves with the proposed method compared to state-of-the-art methods over the course of iterations. Hence, it can be clearly concluded that the proposed EDBN-TS model returns better specificity than the conventional methods at all iterations, as shown in Figure 4.

Table 1. Specificity analysis (%)

Methods | Iteration: 20 | 40 | 60 | 80 | 100
GOA [26] | 91.40 | 94.58 | 98.11 | 98.16 | 96.09
GWO [27] | 93.94 | 95.47 | 97.38 | 97.43 | 97.33
TS [28] | 90.90 | 96.80 | 98.34 | 98.39 | 95.96
TS-GWA [29] | 90.39 | 96.53 | 98.71 | 98.76 | 97.43
Proposed EDBN-TS | 94.48 | 97.98 | 98.68 | 98.73 | 98.78

Figure 4. Specificity analysis

6.4 Precision analysis

Table 2 describes the developed FHCD model's precision analysis in detail along with the current approaches.

Table 2. Precision analysis (%)

Methods | Iteration: 20 | 40 | 60 | 80 | 100
GOA [26] | 96.50 | 96.85 | 97.74 | 97.79 | 96.80
GWO [27] | 95.39 | 96.98 | 97.45 | 97.50 | 96.91
TS [28] | 94.75 | 96.19 | 97.39 | 97.44 | 97.53
TS-GWA [29] | 94.76 | 97.81 | 97.36 | 97.41 | 97.46
Proposed EDBN-TS | 96.52 | 97.85 | 98.23 | 98.29 | 98.50

The proposed EDBN-TS achieves higher precision than the other considered methods, indicating that the proposed model is better at predicting FHCD.

Figure 5. Precision analysis

Over the course of iterations, the suggested technique outperforms state-of-the-art methods, as shown in Figure 5. Furthermore, it is evident from this study that the suggested EDBN-TS model provides superior precision to the traditional approaches at each iteration.

6.5 Sensitivity analysis

Table 3 details the sensitivity analysis of the created FHCD model as well as the existing methods. The proposed EDBN-TS offers greater sensitivity than the other traditional techniques, as shown in Figure 6, confirming the superiority of the suggested model in FHCD prediction.

The proposed method also beats cutting-edge methods over the duration of iterations. This investigation makes it clear that the recommended EDBN-TS model offers greater sensitivity than the conventional methodologies at each iteration as well.

Table 3. Sensitivity analysis (%)

Methods | Iteration: 20 | 40 | 60 | 80 | 100
GOA [26] | 96.45 | 96.80 | 97.69 | 97.74 | 96.75
GWO [27] | 95.34 | 96.93 | 97.40 | 97.45 | 96.86
TS [28] | 94.70 | 96.14 | 97.34 | 97.39 | 97.48
TS-GWA [29] | 94.71 | 97.76 | 97.31 | 97.36 | 97.41
Proposed EDBN-TS | 96.47 | 97.81 | 98.18 | 98.24 | 98.45

6.6 F1 score analysis

Table 4. F1 Score analysis (%)

Methods | Iteration: 20 | 40 | 60 | 80 | 100
GOA [26] | 93.88 | 95.70 | 97.92 | 97.97 | 96.50
GWO [27] | 94.66 | 96.26 | 97.81 | 97.86 | 97.43
TS [28] | 92.79 | 96.49 | 97.87 | 97.92 | 96.71
TS-GWA [29] | 92.52 | 97.17 | 98.48 | 98.53 | 97.19
Proposed EDBN-TS | 94.96 | 97.39 | 98.02 | 98.07 | 98.64

The F1 Score analysis of the developed FHCD model and the current approaches is covered in detail in Table 4. The proposed EDBN-TS achieves a higher F1 Score than the other common methods, as shown in Figure 7, indicating that the recommended model is better at predicting FHCD. Modern techniques are surpassed by the suggested method during the course of iterations.

Figure 6. Sensitivity analysis

Figure 7. F1 Score analysis

This study demonstrates that the suggested EDBN-TS model provides a higher F1 Score than the traditional approaches at each iteration.

6.7 Accuracy analysis

Table 5 details the accuracy analysis of the created FHCD model and the existing methodologies.

Table 5. Accuracy analysis (%)

Methods | Iteration: 20 | 40 | 60 | 80 | 100
GOA [26] | 98.23 | 97.67 | 98.79 | 98.84 | 98.78
GWO [27] | 97.12 | 98.90 | 98.22 | 98.27 | 98.53
TS [28] | 97.98 | 97.98 | 98.41 | 98.46 | 98.90
TS-GWA [29] | 96.01 | 99.15 | 98.46 | 98.51 | 98.35
Proposed EDBN-TS | 98.53 | 98.53 | 98.85 | 98.90 | 99.52

The proposed EDBN-TS offers higher accuracy than the other traditional approaches, as shown in Figure 8, confirming that the proposed model is superior in terms of FHCD prediction.

The recommended strategy outperforms contemporary methodologies over the duration of iterations. This study reveals that the recommended EDBN-TS model offers higher accuracy at each iteration than the conventional methodologies.

Figure 8. Accuracy analysis

From the overall analysis, the proposed EDBN-based FHCD prediction model clearly showed superior outcomes to the other conventional models, with improvements in precision, accuracy, sensitivity, F1 score, and specificity of about 1.76%, 0.75%, 1.76%, 2.22%, and 2.80%, respectively. It can therefore be concluded that the proposed EDBN-based FHCD prediction model is better at predicting FHCD than the other traditional methods.

7. Conclusion

This study employed the state-of-the-art intelligence technology EDBN-TS to segment and predict FHCD. The research is unique in that it utilizes a cutting-edge deep learning-based optimization algorithm to forecast the FHCD model, yielding higher prediction accuracy than other advanced models currently available. Initially, the data from US fetal imaging was obtained and pre-processed using frame extraction, label removal, and filtering procedures. Subsequently, the GLCM approach was employed to derive the characteristics from the pre-processed images. Furthermore, the Otsu Thresholding approach was employed to segment the extracted features. In the last stage, the EDBN conducted the prediction, with the hidden DBN neurons of the EDBN tuned by TS with the primary objective of maximizing accuracy and precision. The optimal classifier for predicting FHD was likewise generated by utilizing all reconstructed images as the input to the EDBN. The recommended EDBN-TS model significantly improved the prediction rate of FHCD by around 25%, taking into account precision, accuracy, sensitivity, and specificity. Furthermore, it achieved a 100% accurate prediction of unfavorable outcomes when applied to unfamiliar data. The precise forecasting of FHCD and the prompt detection of congenital heart defects are two potential pragmatic applications of the presented DL method.

Compared with the alternative models, the EDBN-based FHCD prediction model improved precision, accuracy, sensitivity, F1 Score, and specificity by approximately 1.76%, 0.75%, 1.76%, 2.22%, and 2.80%, respectively. A remaining limitation is that the model has not yet shown improved performance when combined with hybrid optimization strategies; in future work, the FHCD prediction model can be extended with a novel hybrid optimization technique.
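As a rough illustration of how such an outer optimization strategy plugs into this kind of model, the sketch below tunes the number of hidden units by validation accuracy; a simple candidate search stands in for TS (or a future hybrid metaheuristic), an MLP classifier stands in for the DBN, and the data are synthetic, so this is not the authors' implementation.

# Generic outer tuning loop: a search procedure proposes hidden-layer sizes and
# keeps the candidate with the best validation accuracy. The candidate sweep and
# the MLP are stand-ins for the TS optimizer and the DBN, respectively.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def tune_hidden_units(X, y, candidates, seed=0):
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.2, random_state=seed, stratify=y)
    best_units, best_acc = None, -1.0
    for units in candidates:                      # candidate hidden-neuron counts
        clf = MLPClassifier(hidden_layer_sizes=(units,),
                            max_iter=500, random_state=seed)
        clf.fit(X_tr, y_tr)
        acc = clf.score(X_val, y_val)             # fitness = validation accuracy
        if acc > best_acc:
            best_units, best_acc = units, acc
    return best_units, best_acc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 16))                # synthetic stand-in for GLCM feature vectors
    y = (X[:, 0] + X[:, 1] > 0).astype(int)       # synthetic defect / no-defect labels
    units, acc = tune_hidden_units(X, y, candidates=[16, 32, 64, 128])
    print(f"best hidden units: {units}, validation accuracy: {acc:.3f}")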

  References

[1] Wu, L., Cheng, J.Z., Li, S., Lei, B., Wang, T., Ni, D. (2017). FUIQA: fetal ultrasound image quality assessment with deep convolutional networks. IEEE Transactions on Cybernetics, 47(5): 1336-1349. https://doi.org/10.1109/TCYB.2017.2671898

[2] Baumgartner, C.F., Kamnitsas, K., Matthew, J., Fletcher, T.P., Smith, S., Koch, L.M., Rueckert, D. (2017). SonoNet: real-time detection and localisation of fetal standard scan planes in freehand ultrasound. IEEE Transactions on Medical Imaging, 36(11): 2204-2215. https://doi.org/10.1109/TMI.2017.2712367

[3] Maraci, M.A., Bridge, C.P., Napolitano, R., Papageorghiou, A., Noble, J.A. (2017). A framework for analysis of linear ultrasound videos to detect fetal presentation and heartbeat. Medical Image Analysis, 37: 22-36. https://doi.org/10.1016/j.media.2017.01.003

[4] Chen, L., Jiang, Y., Wang, J. (2020). Fetal cardiac rhabdomyoma due to paternal mosaicism of TSC2: A case report. Medicine, 99(35): e21949. https://doi.org/10.1097/MD.0000000000021949

[5] Ide, T., Miyoshi, T., Katsuragi, S., Neki, R., Kurosaki, K.I., Shiraishi, I., Ikeda, T. (2019). Prediction of postnatal arrhythmia in fetuses with cardiac rhabdomyoma. The Journal of Maternal-Fetal & Neonatal Medicine, 32(15): 2463-2468. https://doi.org/10.1080/14767058.2018.1438402

[6] Ekmekci, E., Ozkan, B.O., Yildiz, M.S., Kocakaya, B. (2018). Prenatal diagnosis of fetal cardiac rhabdomyoma associated with tuberous sclerosis: A case report. Case Reports in Women's Health, 19: e00070. https://doi.org/10.1016/j.crwh.2018.e00070

[7] Vo, K., Le, T., Rahmani, A.M., Dutt, N., Cao, H. (2020). An efficient and robust deep learning method with 1-D octave convolution to extract fetal electrocardiogram. Sensors, 20(13): 3757. https://doi.org/10.3390/s20133757

[8] Torrents-Barrena, J., Piella, G., Masoller, N., Gratacós, E., Eixarch, E., Ceresa, M., Ballester, M.Á.G. (2019). Segmentation and classification in MRI and US fetal imaging: recent trends and future prospects. Medical Image Analysis, 51: 61-88. https://doi.org/10.1016/j.media.2018.10.003

[9] Balayla, J., Shrem, G. (2019). Use of artificial intelligence (AI) in the interpretation of intrapartum fetal heart rate (FHR) tracings: A systematic review and meta-analysis. Archives of Gynecology and Obstetrics, 300: 7-14. https://doi.org/10.1007/s00404-019-05151-7

[10] Nurmaini, S., Rachmatullah, M.N., Sapitri, A.I., Darmawahyuni, A., Tutuko, B., Firdaus, F., Bernolian, N. (2021). Deep learning-based computer-aided fetal echocardiography: application to heart standard view segmentation for congenital heart defects detection. Sensors, 21(23): 8007. https://doi.org/10.3390/s21238007

[11] Nurmaini, S., Rachmatullah, M.N., Sapitri, A.I., Darmawahyuni, A., Jovandy, A., Firdaus, F., Passarella, R. (2020). Accurate detection of septal defects with fetal ultrasonography images using deep learning-based multiclass instance segmentation. IEEE Access, 8: 196160-196174. https://doi.org/10.1109/ACCESS.2020.3034367

[12] Skeika, E.L., Da Luz, M.R., Fernandes, B.J.T., Siqueira, H.V., De Andrade, M.L.S.C. (2020). Convolutional neural network to detect and measure fetal skull circumference in ultrasound imaging. IEEE Access, 8: 191519-191529. https://doi.org/10.1109/ACCESS.2020.3032376

[13] Singh, Y., McGeoch, L. (2016). Fetal anomaly screening for detection of congenital heart defects. Journal of Neonatal Biology, 5(2): 100-115. https://doi.org/10.4172/2167-0897.100e115

[14] Verdurmen, K.M., Eijsvoogel, N.B., Lempersz, C., Vullings, R., Schroer, C., van Laar, J.O., Oei, S.G. (2016). A systematic review of prenatal screening for congenital heart disease by fetal electrocardiography. International Journal of Gynecology & Obstetrics, 135(2): 129-134. https://doi.org/10.1016/j.ijgo.2016.05.010

[15] Jayaseelan, S.M., Gopal, S.T., Muthu, S., Selvaraju, S., Patel, M.S. (2022). A hybrid fuzzy based cross neighbor filtering (HF-CNF) for image enhancement of fine and coarse powder scanned electron microscopy (SEM) images. Journal of Intelligent & Fuzzy Systems, 42(6): 6159-6169. https://doi.org/10.3233/JIFS-212561

[16] Dong, J., Liu, S., Liao, Y., Wen, H., Lei, B., Li, S., Wang, T. (2019). A generic quality control framework for fetal ultrasound cardiac four-chamber planes. IEEE Journal of Biomedical and Health Informatics, 24(4): 931-942. https://doi.org/10.1109/JBHI.2019.2948316

[17] Gong, Y., Zhang, Y., Zhu, H., Lv, J., Cheng, Q., Zhang, H., Wang, S. (2019). Fetal congenital heart disease echocardiogram screening based on DGACNN: adversarial one-class classification combined with video transfer learning. IEEE Transactions on Medical Imaging, 39(4): 1206-1222. https://doi.org/10.1109/TMI.2019.2946059

[18] Sutarno, S., Nurmaini, S., Partan, R.U., Sapitri, A.I., Tutuko, B., Rachmatullah, M.N., Sulistiyo, D. (2022). FetalNet: Low-light fetal echocardiography enhancement and dense convolutional network classifier for improving heart defect prediction. Informatics in Medicine Unlocked, 35: 101136. https://doi.org/10.1016/j.imu.2022.101136

[19] Qiao, S., Pan, S., Luo, G., Pang, S., Chen, T., Singh, A. K., Lv, Z. (2022). A pseudo-siamese feature fusion generative adversarial network for synthesizing high-quality fetal four-chamber views. IEEE Journal of Biomedical and Health Informatics, 27(3): 1193-1204. https://doi.org/10.1109/JBHI.2022.3143319

[20] Qiao, S., Pang, S., Luo, G., Pan, S., Chen, T., Lv, Z. (2021). FLDS: An intelligent feature learning detection system for visualizing medical images supporting fetal four-chamber views. IEEE Journal of Biomedical and Health Informatics, 26(10): 4814-4825. https://doi.org/10.1109/JBHI.2021.3091579

[21] Pu, B., Lu, Y., Chen, J., Li, S., Zhu, N., Wei, W., Li, K. (2022). Mobileunet-fpn: A semantic segmentation model for fetal ultrasound four-chamber segmentation in edge computing environments. IEEE Journal of Biomedical and Health Informatics, 26(11): 5540-5550. https://doi.org/10.1109/JBHI.2022.3182722

[22] Qiao, S., Pang, S., Luo, G., Pan, S., Yu, Z., Chen, T., Lv, Z. (2022). RLDS: An explainable residual learning diagnosis system for fetal congenital heart disease. Future Generation Computer Systems, 128: 205-218. https://doi.org/10.1016/j.future.2021.10.001

[23] Sengan, S., Mehbodniya, A., Bhatia, S., Saranya, S.S., Alharbi, M., Basheer, S., Subramaniyaswamy, V. (2022). Echocardiographic image segmentation for diagnosing fetal cardiac rhabdomyoma during pregnancy using deep learning. IEEE Access, 10: 114077-114091. https://doi.org/10.1109/ACCESS.2022.3215973

[24] Xu, L., Liu, M., Zhang, J., He, Y. (2020). Convolutional-neural-network-based approach for segmentation of apical four-chamber view from fetal echocardiography. IEEE Access, 8: 80437-80446. https://doi.org/10.1109/ACCESS.2020.2984630

[25] Ogenyi, P., Chiegwu, H.U., England, A., Akanegbu, U.E., Ogbonna, O.S., Abubakar, A., Dauda, M. (2022). Appraisal of trimester-specific fetal heart rate and its role in gestational age prediction. Radiography, 28(4): 926-932. https://doi.org/10.1016/j.radi.2022.06.015

[26] Meraihi, Y., Gabis, A.B., Mirjalili, S., Ramdane-Cherif, A. (2021). Grasshopper optimization algorithm: theory, variants, and applications. IEEE Access, 9: 50001-50024. https://doi.org/10.1109/ACCESS.2021.3067597

[27] Mirjalili, S., Mirjalili, S.M., Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69: 46-61. https://doi.org/10.1016/j.advengsoft.2013.12.007

[28] Mirrashid, M., Naderpour, H. (2022). Transit search: An optimization algorithm based on exoplanet exploration. Results in Control and Optimization, 7: 100127. https://doi.org/10.1016/j.rico.2022.100127

[29] Shobana Nageswari, C., Kumar, V., Vini Antony Grace, N., Thiyagarajan, J. (2023). Tunicate swarm-based grey wolf algorithm for fetal heart chamber segmentation and classification: a heuristic-based optimal feature selection concept. Journal of Intelligent & Fuzzy Systems, 44(1): 1029-1041. https://doi.org/10.3233/JIFS-221654