Distributed radar is applied extensively in marine environment monitoring. In the early days, radar signals were identified inefficiently by human operators, so it is promising to replace manual radar signal identification with machine learning techniques. However, the existing deep learning neural networks for radar signal identification consume a long time, owing to autonomous learning. Besides, the training of such networks requires a large number of reliable time-frequency features of radar signals. This paper mainly analyzes the identification and classification of marine distributed radar signals with an improved deep neural network. Firstly, the time-frequency features were extracted from signals based on short-time Fourier transform (STFT) theory. Then, a target detection algorithm was proposed, which weighs and fuses the heterogeneous marine distributed radar signals, and four methods were provided for weight calculation. After that, frequency-domain prior model feature-assisted training was introduced to train the traditional deep convolutional neural network (DCNN), producing a CNN with a feature splicing operation. The features of time- and frequency-domain signals were combined, laying the basis for radar signal classification. Our model was proved effective through experiments.
Keywords: distributed radar, deep learning, marine environment monitoring, radar signal identification
Radar observation is a key approach for dynamic monitoring of the marine environment. The observation of ocean surface streams with various radars, namely, high-frequency radar, X-band radar, and synthetic aperture radar, plays an important role in marine rescue, oil discharge, navigation and transport, military sailing, and fishery [1-9]. Distributed radars, with high spatiotemporal resolution, low cost, and long detection range, are applied extensively in marine environment monitoring [10-18]. The traditional radar signal recognition algorithms mostly focus on a single time- or frequency-domain feature; few consider the two kinds of features simultaneously. In the early days, radar signals were identified inefficiently by human operators. It is promising to replace manual radar signal identification with machine learning techniques [19-24].
With the growing density of radar signals, the analysis and processing of multi-component radar signals has become an urgent problem for radar reconnaissance systems. To adapt to the time-frequency energy distribution of various radar signals, Qu et al. [25] relied on a multi-kernel function for the time-frequency distribution of Cohen's class to extract the time-frequency images (TFIs) of signals, and designed and pretrained a TFI feature extraction network for radar signals based on the convolutional neural network (CNN). Li et al. [26] designed an AlexNet-based feature learning network, and optimized the network with the deep features of radar signals extracted by parametric transfer learning. The optimized network improves the multi-layer representation of features, and reduces the number of required samples. Wu et al. [27] presented a novel attention-based one-dimensional (1D) CNN to extract more distinguishing features, and identify the signals from radar radiation sources. Specifically, the features of the given 1D signal series are extracted directly by the 1D convolutional layer, and weighed according to their importance to the recognition by the attention mechanism. Wei et al. [28] constructed a new end-to-end series network, and used the network to recognize eight kinds of pulse modulation for radar signals. The network is composed of a shallow CNN, an attention-based bidirectional long short-term memory (LSTM) network, and a dense neural network. Liu and Li [29] put forward an automatic recognition approach for modulating different low probability of intercept (LPI) radar signals. Firstly, the time-domain signals were converted into TFIs, using a smooth pseudo-Wigner-Ville distribution. Then, these TFIs were imported to a self-designed triple CNN to derive the high-dimensional eigenvectors. Radar signal identification has two tasks: automatic modulation classification, and radar radiation source identification.
Wang et al. [30] proposed an embedding bottleneck gated tolerance unit network, which can handle these two tasks. Several embedding methods are included in the network: Pulse2Vec, GloveP, and EPMo.
The existing deep learning neural networks for radar signal identification consume a long time, owing to autonomous learning. Besides, the training of such networks requires a large number of reliable time-frequency features of radar signals. To solve these problems, this paper proposes an identification scheme that combines the time- and frequency-domain features of radar signals, and relies on an improved deep neural network to recognize and classify marine distributed radar signals. The main contents and innovations are as follows:
(1) The single- and multi-pulse signals in each symbol period were converted into the corresponding time-frequency images, and the time-frequency features were extracted through the short-time Fourier transform (STFT); (2) A target detection algorithm was proposed, which weighs and fuses the heterogeneous marine distributed radar signals, and four methods were provided for weight calculation; (3) Frequency-domain prior model feature-assisted training was introduced to train the traditional deep CNN (DCNN), and the features of time- and frequency-domain signals were combined as the basis for radar signal classification, producing a CNN with a feature splicing operation. The effectiveness of our model was proved through experiments.
This paper mainly studies the single- and multi-pulse signals of marine distributed radars affected by Gaussian white noise. The communication system is composed of multiple radars connected by the communication link. Let o_{1}(p), o_{2}(p), ..., o_{M}(p) be the original echo signals received and transmitted by the M radars in the distributed radar system, and m(p) be the additive Gaussian white noise. Then, the signals at the receiving end of each local radar station can be modeled as:
$e\left( p \right)={{o}_{1}}\left( p \right)+{{o}_{2}}\left( p \right)+...+{{o}_{M}}\left( p \right)+m\left( p \right)$ (1)
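As a minimal illustration (not the paper's code), the received-signal model of Eq. (1) can be simulated in a few lines; the pulse shapes, delays, and noise level below are assumed values chosen only for the example.

```python
import numpy as np

def received_signal(echoes, noise_std, rng=None):
    """Model e(p) = o_1(p) + ... + o_M(p) + m(p) of Eq. (1): the
    superposition of M echo signals plus additive white Gaussian noise."""
    rng = np.random.default_rng(rng)
    e = np.sum(echoes, axis=0)
    return e + rng.normal(0.0, noise_std, size=e.shape)

# Illustrative echoes: rectangular pulses at assumed delays and amplitudes
p = np.arange(1000)
o1 = np.where((p >= 100) & (p < 200), 1.0, 0.0)
o2 = np.where((p >= 400) & (p < 520), 0.8, 0.0)
e = received_signal(np.stack([o1, o2]), noise_std=0.1, rng=0)
```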
According to the theory on the recognition of marine distributed radar signals, the key and fundamental link is how to effectively extract the features of the signals received by each radar station. In the time domain and frequency domain, the form of the received signals varies with local radar stations. Based on the Fourier transform, feature extraction aims to extract the different features in the time and frequency domains. Since the received signals at radar stations are periodic and cyclostationary, this paper adopts the time-frequency feature extraction method of STFT theory to convert the single- and multi-pulse signals in each symbol period into corresponding time-frequency images.
The concept of local spectrum assumes that the signals received by radar stations are stationary, if intercepted by a short time window function. Incorporating this concept, the STFT performs the Fourier transform on the windowed received signals, slides the window function along the time axis, and thus obtains a time-varying image of an entire segment of the received signals in the frequency domain.
Let h(p) be a very short time window function, and * denote the complex conjugate. When h(p)=1 for all p, the STFT degenerates into the traditional Fourier transform. For continuous signals o(p) received by radar stations, the continuous STFT can be defined as:
$DS{{F}_{o}}\left( p,g \right)=\int_{-\infty }^{\infty }{o\left( v \right){{h}^{*}}\left( v-p \right)}{{e}^{-j2\pi gv}}dv$ (2)
The inverse of the continuous STFT (2) can be given by:
$o\left( v \right)=\int_{-\infty }^{\infty }{\int_{-\infty }^{\infty }{DS{{F}_{o}}\left( p,g \right)}}h\left( v-p \right){{e}^{j2\pi gv}}dpdg$ (3)
The continuous STFT has several basic properties, namely, linear time-frequency representation and frequency shift invariance. The latter property can be expressed as:
$\tilde{o}\left( p \right)=o\left( p \right){{e}^{j2\pi {{g}_{0}}p}}\to DS{{F}_{{\tilde{o}}}}\left( p,g \right)=DS{{F}_{o}}\left( p,g-{{g}_{0}} \right)$ (4)
This property can be derived by:
$\begin{align} & DS{{F}_{{\tilde{o}}}}\left( p,g \right)=\int_{-\infty }^{\infty }{\tilde{o}\left( v \right){{h}^{*}}\left( v-p \right){{e}^{-j2\pi gv}}dv} \\ & =\int_{-\infty }^{\infty }{o\left( v \right){{e}^{j2\pi {{g}_{0}}v}}{{h}^{*}}\left( v-p \right){{e}^{-j2\pi gv}}dv} \\ & =\int_{-\infty }^{\infty }{o\left( v \right){{h}^{*}}\left( v-p \right){{e}^{-j2\pi \left( g-{{g}_{0}} \right)v}}dv} \\ & =DS{{F}_{o}}\left( p,g-{{g}_{0}} \right) \\\end{align}$ (5)
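The frequency shift property can be checked numerically with a minimal discrete STFT sketch (the window, hop, and test signal below are assumptions for illustration, not the paper's implementation). Modulating the signal by a complex exponential at an integer multiple of the DFT bin spacing shifts the STFT magnitude by exactly that many frequency bins.

```python
import numpy as np

def stft(x, win, hop):
    """Minimal discrete STFT: window each frame of x, then FFT it.
    Rows index time frames, columns index frequency bins."""
    W = len(win)
    frames = [x[i:i + W] * win for i in range(0, len(x) - W + 1, hop)]
    return np.fft.fft(np.asarray(frames), axis=1)

N, W, hop, k0 = 256, 64, 16, 5
n = np.arange(N)
x = np.random.default_rng(1).standard_normal(N).astype(complex)
win = np.hanning(W)

S = stft(x, win, hop)
# Modulate by exp(j*2*pi*k0*n/W): the STFT magnitude shifts by k0 bins
S_mod = stft(x * np.exp(2j * np.pi * k0 * n / W), win, hop)
shift_ok = np.allclose(np.abs(S_mod), np.abs(np.roll(S, k0, axis=1)))
```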
The time shift property can be expressed as:
$\begin{align} & \tilde{o}\left( p \right)=o\left( p-{{p}_{0}} \right)\to \\ & DS{{F}_{{\tilde{o}}}}\left( p,g \right)=DS{{F}_{o}}\left( p-{{p}_{0}},g \right){{e}^{-j2\pi {{p}_{0}}g}} \\\end{align}$ (6)
That is, the STFT is not time shift invariant: DSF_{õ}(p,g)=DSF_{o}(p−p_{0},g) does not hold, as an extra phase factor appears. This property can be derived by:
$\begin{align} & DS{{F}_{{\tilde{o}}}}\left( p,g \right)=\int_{-\infty }^{\infty }{\tilde{o}\left( v \right){{h}^{*}}\left( v-p \right){{e}^{-j2\pi gv}}dv} \\ & =\int_{-\infty }^{\infty }{o\left( v-{{p}_{0}} \right){{h}^{*}}\left( v-p \right){{e}^{-j2\pi gv}}dv} \\ & =\int_{-\infty }^{\infty }{o\left( v \right){{h}^{*}}\left( v+{{p}_{0}}-p \right){{e}^{-j2\pi gv}}{{e}^{-j2\pi g{{p}_{0}}}}dv} \\ & =DS{{F}_{o}}\left( p-{{p}_{0}},g \right){{e}^{-j2\pi {{p}_{0}}g}} \\\end{align}$ (7)
To select the window function for the STFT, the effective time width of the window function h(p) is denoted by Δp, and the bandwidth by Δg. Then, the product between Δp and Δg obeys Heisenberg’s inequality:
$\Delta p\bullet \Delta g\ge \frac{1}{2}$ (8)
It is impossible for both Δp and Δg to be arbitrarily small. To make the local frequency spectrum of the received signals clearly distinguishable, the length of the window function can be determined by the principle that the width of the window function is compatible with the local stationary length of the received signals.
During the actual recognition of marine distributed radar signals, the continuous STFT is often discretized, that is, the discrete STFT is used to extract the time-frequency features of signals. DSF_{o}(p,g) is sampled at equally spaced time-frequency grid points (nP, mG), where P>0 and G>0 are the sampling periods of time and frequency, respectively, and n and m are integers. To facilitate the transform, it is assumed that DSF(n,m)=DSF_{o}(nP, mG). For the discrete signals o(l) of marine distributed radars, the continuous STFT (2) can be discretized into:
$DSF\left( n,m \right)=\sum\limits_{l=-\infty }^{\infty }{o\left( l \right){{h}^{*}}\left( l-nP \right){{e}^{-j2\pi \left( mG \right)l}}}$ (9)
The inverse of discretized STFT can be expressed as:
$o\left( l \right)=\sum\limits_{n=-\infty }^{\infty }{\sum\limits_{m=-\infty }^{\infty }{DSF\left( n,m \right)}}h\left( l-nP \right){{e}^{j2\pi \left( mG \right)l}}$ (10)
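Under the discrete form (9), a time-frequency image is simply the magnitude of the windowed FFT frames. The sketch below (numpy only; the window length, hop, and linear-FM test pulse are assumptions, not values from the paper) produces such an image for a chirp whose frequency rises over the pulse.

```python
import numpy as np

def tfi(x, win, hop):
    """Time-frequency image per Eq. (9): magnitude of the discrete STFT,
    arranged as frequency x time, usable as a classifier input image."""
    W = len(win)
    frames = [x[i:i + W] * win for i in range(0, len(x) - W + 1, hop)]
    return np.abs(np.fft.fft(np.asarray(frames), axis=1)).T

# Assumed example: a linear frequency-modulated (LFM) pulse, 50 -> 250 Hz
fs, T = 1000.0, 1.0
t = np.arange(0, T, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))
img = tfi(x, np.hanning(128), 32)
```

In the resulting image, the peak frequency bin of the last frame lies above that of the first frame, which is the rising-frequency ridge the classifier sees.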
In the marine distributed radar system with incoherent accumulation, when a local radar station adopts a heterogeneous radar with good detection performance, it should play a core role in the entire radar system, that is, be assigned a large weight. In this case, the weight depends only on the information difference between local radar stations, and the signal-to-noise ratio (SNR) information can be ignored.
Let g(a_{1},a_{2},...,a_{M}|F_{0}) and g(a_{1},a_{2},...,a_{M}|F_{1}) be the joint probability density functions (PDFs) of the M local radar station observations in the absence and presence of the target radar signals, respectively; g(a_{i}|F_{0}) and g(a_{i}|F_{1}) be the PDFs of the ith local radar station observations in the absence and presence of the target radar signals, respectively; γ be the fusion decision threshold. Under the Neyman-Pearson criterion, when the echo signals received by local radar stations are statistically independent of each other, the optimal distributed detection in the form of a likelihood ratio can be described by:
$\Omega =\frac{g\left( {{a}_{1}},{{a}_{2}},...,{{a}_{M}}|{{F}_{1}} \right)}{g\left( {{a}_{1}},{{a}_{2}},...,{{a}_{M}}|{{F}_{0}} \right)}=\prod\limits_{i=1}^{M}{\frac{g\left( {{a}_{i}}|{{F}_{1}} \right)}{g\left( {{a}_{i}}|{{F}_{0}} \right)}}\underset{{{F}_{0}}}{\overset{{{F}_{1}}}{\gtrless}}\gamma $ (11)
Let c_{i} be the signals received by the ith local radar station. The log of formula (11) can be expressed as:
$\ln \left( \Omega \right)=\sum\limits_{i=1}^{M}{\ln \frac{g\left( {{a}_{i}}|{{F}_{1}} \right)}{g\left( {{a}_{i}}|{{F}_{0}} \right)}}=\sum\limits_{i=1}^{M}{{{c}_{i}}}\underset{{{F}_{0}}}{\overset{{{F}_{1}}}{\gtrless}}\ln \left( \gamma \right)$ (12)
Formula (12) shows that the fusion detection algorithm for radar signals with incoherent accumulation is the best algorithm, when the echo signals received by local radar stations are statistically independent of each other. Let c_{i} and w_{i} be the radar signals received by the ith local radar station, and the weight of the station, respectively; Ω be the decision threshold of the fusion center. Then, the weighted fusion algorithm of heterogeneous signals of marine distributed radars can be expressed as:
$\sum\limits_{i=1}^{M}{{{w}_{i}}{{c}_{i}}}\underset{{{F}_{0}}}{\overset{{{F}_{1}}}{\gtrless}}\Omega $ (13)
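Eq. (13) reduces to a weighted sum compared against a threshold. A toy sketch follows; the station statistics, weights, and threshold are invented for illustration only.

```python
import numpy as np

def weighted_fusion_decision(c, w, omega):
    """Eq. (13): declare a target (F1) when the weighted sum of the
    local-station statistics c_i exceeds the fusion threshold Omega."""
    return float(np.dot(w, c)) > omega

# Assumed toy values: three stations, station 1 weighted most heavily
c = np.array([2.1, 0.4, 0.9])    # received statistics
w = np.array([1.0, 0.5, 0.25])   # station weights
decision = weighted_fusion_decision(c, w, omega=2.0)  # weighted sum = 2.525
```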
The weight w_{i} of the ith local radar station can be determined based on the prior detection performance curve of the signals received by local radar stations. The weight assignment to the signals received by different local radar stations is detailed as follows:
Step 1. Perform singlestation detection on the received radar signals c_{i} of each of the M local radar stations, and draw singlestation detection performance curves.
Step 2. Under the preset expected detection probability, compute the SNR XZB_{i} required by the ith local radar station.
Step 3. Assume that the first local radar station requires the smallest SNR under the preset expected detection probability. Let (XZB_{i}−XZB_{1}) dB be the SNR loss of the ith local radar station relative to the first station. Then, the weight of the signals received by the ith local radar station can be obtained by converting the SNR loss from dB to a linear ratio and taking the reciprocal:
$w_{i}=10^{\frac{XZB_{1}-XZB_{i}}{10}}$ (14)
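In code, Eq. (14) means the station with the smallest required SNR gets weight 1, and weights fall by a factor of 10 per 10 dB of extra required SNR. The required-SNR values below are invented for illustration.

```python
# Assumed required SNRs (dB) of four stations at the expected detection
# probability; the station needing the least SNR is the reference (Eq. (14))
required_snr_db = [6.0, 8.0, 11.0, 6.5]
best = min(required_snr_db)
weights = [10 ** ((best - s) / 10) for s in required_snr_db]
```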
As mentioned before, the fusion detection algorithm for radar signals with incoherent accumulation is the best algorithm, when the echo signals received by local radar stations are statistically independent of each other. In that case, the fusion center of the marine distributed radar system superposes the signals received by all local radar stations with equal weights. Let c_{i} and q_{i} be the radar signals received by the ith local radar station, and the weight of the station, respectively; Ω be the decision threshold of the fusion center. Based on the SNR information, the weighted fusion algorithm of heterogeneous signals of marine distributed radars can be expressed as:
$\sum\limits_{i=1}^{M}{{{q}_{i}}{{c}_{i}}}\underset{{{F}_{0}}}{\overset{{{F}_{1}}}{\gtrless}}\Omega $ (15)
The weight q_{i} of the ith local radar station can be determined jointly based on the prior detection performance curve of the signals received by local radar stations, and the SNR information. The weight assignment is detailed as follows:
Step 1. Perform singlestation detection on the received radar signals c_{i} of each of the M local radar stations, and draw singlestation detection performance curves. Let XZB_{i} be the SNR of the ith local radar station, and FS_{si} be the singlestation detection probability under that SNR.
Step 2. Assume that a local radar station has the largest FS_{si}, and the SNR required by the ith local radar station to reach that single-station detection probability is XZB_{i}'. Then, the SNR loss of the ith local radar station can be expressed as (XZB_{i}'−XZB_{i}) dB. The first type of weight for the signals received by the ith local radar station can be calculated by:
${{\omega }_{1}}\left( i \right)={{q}_{i}}={{10}^{\frac{XZ{{B}_{i}}-XZB_{i}^{'}}{10}}}$ (16)
Formula (16) shows that the weight is obtained by converting the SNR loss from dB to a linear ratio and taking the reciprocal.
Step 3. Based on Bayesian theory, the second type of weight can be calculated by:
${{\omega }_{2}}\left( i \right)=\frac{{{\omega }_{1}}\left( i \right)}{1+{{\omega }_{1}}\left( i \right)}$ (17)
The assignment of the third and fourth types of weight is detailed as follows:
Step 1. Perform singlestation detection on the received radar signals c_{i} of each of the M local radar stations, and draw singlestation detection performance curves. Let XZB_{i} be the SNR of the ith local radar station, and FS_{si} be the singlestation detection probability under that SNR.
Step 2. Assume that a local radar station has the largest FS_{si}, and the SNR required by the ith local radar station to reach that single-station detection probability is XZB_{i}'. Then, the SNR loss of the ith local radar station can be expressed as (XZB_{i}'−XZB_{i}) dB.
Step 3. Assume that the lth local radar station requires the smallest SNR XZB_{l}' at the single-station detection probability FS_{si}, and the SNR loss of the ith local radar station relative to the lth station is (XZB_{i}'−XZB_{l}') dB. Then, the total SNR loss can be expressed as (2XZB_{i}'−XZB_{i}−XZB_{l}') dB. In this case, the third type of weight of the signals received by the ith local radar station can be calculated by:
${{\omega }_{3}}\left( i \right)={{q}_{i}}={{10}^{\frac{XZ{{B}_{i}}+XZB_{l}^{'}-2XZB_{i}^{'}}{10}}}$ (18)
Step 4. Based on Bayesian theory, the fourth type of weight can be given by:
${{\omega }_{4}}\left( i \right)=\frac{{{\omega }_{3}}\left( i \right)}{1+{{\omega }_{3}}\left( i \right)}$ (19)
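The four weight types can be collected in one helper. This is a sketch, with the exponent of the third type taken as (XZB_i + XZB_l' − 2XZB_i')/10 to match the stated total SNR loss; the inputs are a station's actual SNR, the SNR it requires to reach the reference detection probability, and the smallest such required SNR among all stations. The example values are invented.

```python
def four_weights(snr_db, snr_req_db, snr_req_min_db):
    """Weight types omega_1..omega_4 of Eqs. (16)-(19) for one station."""
    w1 = 10 ** ((snr_db - snr_req_db) / 10)                       # Eq. (16)
    w2 = w1 / (1 + w1)                                            # Eq. (17)
    w3 = 10 ** ((snr_db + snr_req_min_db - 2 * snr_req_db) / 10)  # Eq. (18)
    w4 = w3 / (1 + w3)                                            # Eq. (19)
    return w1, w2, w3, w4

# A station whose actual SNR equals every required SNR gets w1 = w3 = 1
w1, w2, w3, w4 = four_weights(10.0, 10.0, 10.0)
```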
The traditional distributed radar signal recognition techniques usually extract model parameters for frequency-domain echo features, and introduce partially subjective prior information to the model. This subjectiveness makes it impossible for the radar signal classification to reach the optimum. When it comes to deep learning-based recognition of marine distributed radar signals, if the DCNN is directly applied to automatically extract the features of high-resolution images in the target time domain, the computation would consume lots of resources and a long time. To solve the problem, this paper introduces frequency-domain prior model feature-assisted training to train the traditional DCNN, and combines time- and frequency-domain signal features as the classification basis for radar signals. Table 1 lists the structural information of the proposed neural network.
Table 1. Structural information of our neural network
Structure | Number of weight parameters | Size of output feature map
CP1 | 3233 | 2557×1×32
CP2 | 6024 | 1275×1×32
CP3 | 18423 | 633×1×64
Splicing layer | 0 | 632×1×64
CP4 | 24581 | 311×1×64
Fully-connected block | 10254684 | 1024, 512
Output layer | 5147 | 6
Total | 10312092 | —
Every combination of two convolutional layers and a pooling layer is defined as a CP block. The proposed CNN with the feature splicing operation consists of four CP blocks: CP1-CP4. The kernel size and step length were configured as 3×3 and 1, respectively. The number of channels in the feature maps outputted by CP1-CP4 was set to 32, 32, 64, and 64, respectively. The fully-connected block contains a fully-connected layer with 1,024 output nodes, and a fully-connected layer with 512 output nodes. A feature splicing layer was deployed between CP3 and CP4 (Figure 1).
Figure 1. Feature splicing layer
Firstly, the frequency-domain features extracted from the original echo signals received by local radar stations are copied once per channel of the feature map outputted by the hidden layer to be spliced. Next, the copied features corresponding to each channel are attached to the end of the original hidden-layer feature map. The feature map carrying the serialized frequency-domain features of the echo signals is then imported to the next layer of the network.
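A shape-level sketch of this splicing step in numpy follows; the feature-map size matches Table 1's CP3 output, while the length of the frequency-domain feature vector is an assumption for the example.

```python
import numpy as np

def splice_features(fmap, freq_feats):
    """Copy the frequency-domain feature vector once per channel and
    attach the copies to the end of the hidden-layer feature map."""
    C = fmap.shape[2]
    tiled = np.repeat(freq_feats[:, None, None], C, axis=2)  # (F, 1, C)
    return np.concatenate([fmap, tiled], axis=0)             # (L + F, 1, C)

fmap = np.zeros((633, 1, 64))   # CP3 output feature map (Table 1)
freq_feats = np.arange(8.0)     # assumed 8 frequency-domain prior features
out = splice_features(fmap, freq_feats)
```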
The crossentropy loss of the network can be expressed as:
$CEL=-\frac{1}{M}\sum\limits_{a}{\left[ b\ln \beta +\left( 1-b \right)\ln \left( 1-\beta \right) \right]}$ (20)
Let M be the number of samples in the test set of original echo signals, and a be the sample index; b be the true label of sample a (1 for a positive sample); β be the probability that the classifier predicts sample a as positive.
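Interpreted as the standard mean binary cross-entropy (with the leading minus sign), Eq. (20) has a direct numpy transcription; the clipping constant below is an implementation guard against log(0), not a value from the paper.

```python
import numpy as np

def cross_entropy_loss(b, beta, eps=1e-12):
    """Eq. (20): mean binary cross-entropy between true labels b and
    predicted positive-class probabilities beta."""
    b = np.asarray(b, float)
    beta = np.clip(np.asarray(beta, float), eps, 1 - eps)
    return -np.mean(b * np.log(beta) + (1 - b) * np.log(1 - beta))
```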
Our network needs to be trained in two stages: the training of the network except the splicing layer, and the training of the entire network. Let CV be the estimated importance of the frequency-domain features of echo signals; SU_{1} and SU_{2} be the losses of the original CNN in stage 1 and stage 2, respectively. For the feature map outputted by CP3, the error matrix before the addition of the splicing layer differs from that after the addition. The difference can be computed by the 2-norm ||R_{1}−R_{2}||_{2}. After the addition of the splicing layer, the error matrix of the frequency-domain features for the echo signals can be expressed as G_{2}. The importance of the frequency-domain features for the echo signals can be calculated by:
$C{{V}_{d}}=\frac{S{{U}_{1}}-S{{U}_{2}}}{S{{U}_{2}}}\left( {{\left\| {{R}_{1}}-{{R}_{2}} \right\|}_{2}}+{{\left\| {{G}_{2}} \right\|}_{2}} \right)$ (21)
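A toy transcription of this importance estimate follows; the Frobenius norm is used for the matrix 2-norms, and all input values are invented for illustration.

```python
import numpy as np

def feature_importance(su1, su2, r1, r2, g2):
    """Eq. (21) sketch: importance of the frequency-domain features, from
    the stage-1/stage-2 losses and the splicing-layer error matrices."""
    return (su1 - su2) / su2 * (np.linalg.norm(r1 - r2) + np.linalg.norm(g2))

# Invented toy values: loss drops from 0.9 to 0.6 after adding the layer
r1 = np.eye(2)
r2 = np.zeros((2, 2))
g2 = np.array([[3.0, 0.0], [0.0, 4.0]])
cv = feature_importance(0.9, 0.6, r1, r2, g2)
```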
Formula (21) shows that the characteristic error of the splicing layer and the value of the cross-entropy loss function are positively correlated with the frequency-domain eigenvalue of the echo signals, while SU_{2} is negatively correlated with the frequency-domain eigenvalue of the echo signals. Let k be the serial number of network layers; e be a node on the current layer; ξ be the error matrix of the feature map of the current layer; ε' be the derivative of the activation function; US be the upsampling operation; $\oplus$ be the Hadamard product. The error matrices in formula (21) can be obtained by combining formulas (22)-(24). For each convolutional layer:
${{\xi }^{k-1}}={{\xi }^{k}}\frac{\partial {{c}^{k}}}{\partial {{c}^{k-1}}}$ (22)
$\frac{\partial {{c}^{k}}}{\partial {{c}^{k-1}}}={{\xi }^{k}}*rot180\left( {{\theta }^{k}} \right)\oplus \varepsilon '\left( {{c}^{k-1}} \right)$ (23)
For each pooling layer:
${{\xi }^{k-1}}=US\left( {{\xi }^{k}} \right)\oplus \varepsilon '\left( {{c}^{k-1}} \right)$ (24)
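A minimal numpy sketch of the pooling-layer rule, assuming 2×2 average pooling and a ReLU activation (both assumptions, not specified by the paper): US(·) spreads each pooled error evenly over its window, and the result is Hadamard-multiplied by the activation derivative evaluated at the pre-pooling activations.

```python
import numpy as np

def upsample(err, k):
    """US(.): spread each pooled-layer error evenly over its k x k window
    (average pooling assumed)."""
    return np.kron(err, np.ones((k, k))) / (k * k)

def pool_backprop(err_k, act_prev, act_deriv, k=2):
    """Pooling-layer rule sketch: upsampled error, Hadamard-multiplied
    by the activation derivative of the previous layer."""
    return upsample(err_k, k) * act_deriv(act_prev)

relu_deriv = lambda a: (a > 0).astype(float)
err = np.array([[4.0]])                    # error at the pooling layer
act = np.array([[1.0, -1.0], [2.0, 3.0]])  # pre-pooling activations
grad = pool_backprop(err, act, relu_deriv)
```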
After computing the importance of the frequency-domain features for the echo signals, the recognition algorithm for marine distributed radar signals, which fuses time-frequency features, can be designed further based on the CNN. Based on the calculation results of the above parameters, the network structure was determined according to the weighted fusion detection results for heterogeneous radar signals. The flow of the complete algorithm is illustrated in Figure 2.
Figure 2. Flow of distributed radar signal recognition algorithm
Figure 3. Weights of signals received by different radar stations under unknown SNRs
Under unknown SNRs, the expected detection probability was set to 50%. The weights of the signals received by the four local radar stations in the marine distributed radar system were plotted (Figure 3). Then, the proposed weighted fusion algorithm for heterogeneous signals of marine distributed radars was applied, and its detection performance was analyzed.
Figure 4. Performance of weighted fusion algorithm vs. performance of original fusion algorithm
Figure 4 compares the performance of the weighted fusion algorithm and that of the original fusion algorithm. Table 2 presents the relationship between weight and expected detection probability under three different cases: in Case 1, there are 7, 12, 17, and 22 reference units; in Case 2, there are 7, 14, 21, and 28 reference units; in Case 3, there are 28, 24, 20, and 16 reference units.
Table 2. Relationship between weight and expected detection probability under different numbers of reference units

Case | Weight | 0.7 | 0.5 | 0.3
Case 1 | ω_1 | 0.6858 | 0.6715 | 0.6824
Case 1 | ω_2 | 0.8526 | 0.8547 | 0.8632
Case 1 | ω_3 | 0.9254 | 0.8946 | 0.9214
Case 1 | ω_4 | 1.002 | 1.023 | 1.025
Case 2 | ω_1 | 0.6254 | 0.6345 | 0.6285
Case 2 | ω_2 | 0.8462 | 0.8512 | 0.8647
Case 2 | ω_3 | 0.9548 | 0.9521 | 0.9648
Case 2 | ω_4 | 1.004 | 1.002 | 1.006
Case 3 | ω_1 | 1.002 | 1.005 | 1.003
Case 3 | ω_2 | 0.8457 | 0.8596 | 0.871
Case 3 | ω_3 | 0.6528 | 0.6413 | 0.6625
Case 3 | ω_4 | 0.4625 | 0.4749 | 0.5213
The above simulation results show that our weighted fusion algorithm outperformed the approaches without weighted fusion. Besides, the weighted fusion performance was not very different between the expected probabilities of 0.3, 0.5, and 0.7, suggesting the high stability of our weighted fusion algorithm. At the expected probabilities of 0.3, 0.5, and 0.7, the weighted fusion performance was 0.3, 0.31, and 0.29 dB better in SNR than the performance of the original fusion algorithm, respectively. According to the algorithm performance curves at 7 and 28 reference units, the algorithm did not surpass the upper or lower bound of detection performance, which helps to measure the maximum degree of improvement of our algorithm over the original algorithm. As shown in Figure 4, the weighted fusion in Case 3 with the detection probability of 50% had an SNR gain of 0.9 dB over the original fusion algorithm in the other cases. Hence, the proposed algorithm can improve the performance by a maximum of 30%.
Figure 5. Algorithm performance curves when the SNR satisfies certain conditions
Figure 5 shows the algorithm performance curves when the SNR satisfies certain conditions. Under most weighting methods, the weighted fusion algorithm outshined the traditional fusion algorithm. When the SNR satisfied XZB_{1}=XZB_{2}−4=XZB_{3}−4, the third type of weight for the signals received by local radar stations would deteriorate. Compared with the SNR required for original fusion, the weighting with the third type of weight at the detection probability of 50% led to a 0.4 dB higher SNR. The performance was good in all the other cases.
Figure 6. Training curves of the original CNN: (1) loss curve; (2) accuracy curve
Figure 7. Training curves of the improved CNN: (1) loss curve; (2) accuracy curve
Figures 6 and 7 display the training curves of the original network (without the splicing layer) and the improved network (with the splicing layer), respectively. The curves of both networks tended to be stable after 60 iterations. However, the recognition error of the original network on the test set oscillated, while the improved network saw a steadily decreasing error and converged rapidly.
The comparison between Figures 6 and 7 shows that frequency-domain features effectively suppress network overfitting, and improve the recognition accuracy of marine distributed radar signals. This is because our network focuses on the frequency-domain parametric features that positively affect network decisions. The screened time- and frequency-domain features are spliced on the splicing layer. Hence, compared with the time-domain feature-based recognition algorithm, our algorithm improves the generalization ability and recognition accuracy of the detection model.
Figure 8 compares the recognition effects of different algorithms on the same datasets. The algorithms include our algorithm (Algorithm 1), the LSTM (Algorithm 2), the traditional recurrent neural network (RNN) (Algorithm 3), and the traditional CNN (Algorithm 4). Datasets 1 and 2 were collected by similar approaches from distributed radar systems in different sea areas. The two datasets cover basically the same types of signals, but Dataset 1 is 1.5 times the size of Dataset 2. Both datasets were divided into a training set and a test set by the same split ratio. The classification accuracy of radar signals is the mean of the results of 150 signal recognition experiments. It can be seen that the recognition accuracy on Dataset 2 was higher than that on Dataset 1.
Figure 8. Recognition effects of different algorithms under the same datasets
To compare anti-noise performance, the recognition effects of the four algorithms were compared under different noise levels (Figure 9). As the SNR changed from 0 dB to 25 dB, our algorithm achieved a much higher recognition accuracy than the other algorithms under a high SNR, reaching around 0.95.
Figure 9. Recognition effects of the four algorithms under different noise levels
This paper recognizes and classifies marine distributed radar signals based on an improved deep neural network. Specifically, the authors presented a method for extracting the time-frequency features of distributed radar signals, proposed a weighted fusion detection algorithm for the heterogeneous signals of marine distributed radars, and detailed the calculation of four types of weights. Finally, a CNN with a feature splicing operation was established, frequency-domain prior model feature-assisted training was introduced to train the traditional DCNN, and the time- and frequency-domain signals were combined as the basis for classifying radar signals. Through experiments, our weighted fusion algorithm was compared with the original fusion algorithm in terms of the performance and the relationship between weight and expected detection probability, under different numbers of reference units. The comparison shows that our weighted fusion algorithm outperforms the fusion algorithm without weighting. In addition, the training curves of the original network (without the splicing layer) were compared with those of the improved network (with the splicing layer), indicating that the improved network saw a steadily decreasing error and converged rapidly. Finally, the recognition effects of different algorithms were compared on the same datasets and under different noise levels. The proposed algorithm was found to be superior and effective.
This work was supported by the project of the Guangdong Provincial Science and Technology Department's subsidy for people's livelihood in 2020 and other institutional development expenditure funds (overseas famous teachers, 2020A1414010380), by the project of 2021 Guangdong Province Science and Technology Special Funds ('College Special Project + Task List') Competitive Distribution (2021A50111), by the project of Enhancing School with Innovation of Guangdong Ocean University (230420023), and by the program for scientific research start-up funds of Guangdong Ocean University (R20065).
[1] Sola, I., Fernández-Torquemada, Y., Forcada, A., Valle, C., del Pilar-Ruso, Y., González-Correa, J.M., Sánchez-Lizaso, J.L. (2020). Sustainable desalination: Long-term monitoring of brine discharge in the marine environment. Marine Pollution Bulletin, 161: 111813. https://doi.org/10.1016/j.marpolbul.2020.111813
[2] Lee, C., Kim, H.R. (2019). Conceptual development of sensing module applied to autonomous radiation monitoring system for marine environment. IEEE Sensors Journal, 19(19): 8920-8928. https://doi.org/10.1109/JSEN.2019.2921550
[3] Li, Z., Jin, Z.Q., Shao, S.S., Xu, X. (2018). A review on reinforcement corrosion mechanics and monitoring techniques in concrete in marine environment. Mater. Rev. A Rev. Pap, 32: 4170-4181.
[4] Wang, X.H., Ma, R., Cao, X., Cao, L., Chu, D.Z., Zhang, L., Zhang, T.P. (2017). Software for marine ecological environment comprehensive monitoring system based on MCGS. In IOP Conference Series: Earth and Environmental Science, 82(1): 012087. https://doi.org/10.1088/1755-1315/82/1/012087
[5] Min, R., Liu, Z., Pereira, L., Yang, C., Sui, Q., Marques, C. (2021). Optical fiber sensing for marine environment and marine structural health monitoring: A review. Optics & Laser Technology, 140: 107082. https://doi.org/10.1016/j.optlastec.2021.107082
[6] Branchet, P., Arpin-Pont, L., Piram, A., Boissery, P., Wong-Wah-Chung, P., Doumenq, P. (2021). Pharmaceuticals in the marine environment: What are the present challenges in their monitoring? Science of The Total Environment, 766: 142644. https://doi.org/10.1016/j.scitotenv.2020.142644
[7] Liu, G., Rui, G., Tian, W., Wu, L., Cui, T., Huang, J. (2021). Compressed sensing of 3D marine environment monitoring data based on spatiotemporal correlation. IEEE Access, 9: 32634-32649. https://doi.org/10.1109/ACCESS.2021.3060472
[8] Zhu, Y., Han, Y. (2021). Marine environment monitoring based on virtual reality and fuzzy C-means clustering algorithm. Mobile Information Systems, 2021: Article ID 2576919. https://doi.org/10.1155/2021/2576919
[9] Beltrán-Sanahuja, A., Casado-Coy, N., Simó-Cabrera, L., Sanz-Lázaro, C. (2020). Monitoring polymer degradation under different conditions in the marine environment. Environmental Pollution, 259: 113836. https://doi.org/10.1016/j.envpol.2019.113836
[10] Du, Y., Yin, J., Tan, S., Wang, J., Yang, J. (2020). A numerical study of roughness scale effects on ocean radar scattering using the second-order SSA and the moment method. IEEE Transactions on Geoscience and Remote Sensing, 58(10): 6874-6887. https://doi.org/10.1109/TGRS.2020.2977368
[11] Wyatt, L.R. (2019). Measuring the ocean wave directional spectrum 'First Five' with HF radar. Ocean Dynamics, 69(1): 123-144. https://doi.org/10.1007/s10236-018-1235-8
[12] Zhao, X.B., Yan, W., Ai, W.H., Lu, W., Ma, S. (2019). Research on calculation method of Doppler centroid shift from airborne synthetic aperture radar for ocean feature retrieval. Journal of Radars, 8(3): 391-399. https://doi.org/10.12000/JR19020
[13] Cosoli, S., Grcic, B., De Vos, S., Hetzel, Y. (2018). Improving data quality for the Australian high frequency ocean radar network through real-time and delayed-mode quality-control procedures. Remote Sensing, 10(9): 1476. https://doi.org/10.3390/rs10091476
[14] Lu, Y., Zhang, B., Perrie, W., Mouche, A., Zhang, G. (2020). CMODH validation for C-band synthetic aperture radar HH polarization wind retrieval over the ocean. IEEE Geoscience and Remote Sensing Letters, 18(1): 102-106. https://doi.org/10.1109/LGRS.2020.2967811
[15] Naz, S., Iqbal, M.F., Mahmood, I., Allam, M. (2021). Marine oil spill detection using Synthetic Aperture Radar over Indian Ocean. Marine Pollution Bulletin, 162: 111921. https://doi.org/10.1016/j.marpolbul.2020.111921
[16] Yao, G., Xie, J., Huang, W. (2020). HF radar ocean surface cross section for the case of floating platform incorporating a six-DOF oscillation motion model. IEEE Journal of Oceanic Engineering, 46(1): 156-171. https://doi.org/10.1109/JOE.2019.2959289
[17] Ren, L., Chu, N., Hu, Z., Hartnett, M. (2020). Investigations into synoptic spatiotemporal characteristics of coastal upper ocean circulation using high frequency radar data and model output. Remote Sensing, 12(17): 2841. https://doi.org/10.3390/rs12172841
[18] Liu, X., Cui, S., Zhao, C., Wang, P., Zhang, R. (2018). Blind intrapulse modulation recognition based on machine learning in radar signal processing. In International Conference in Communications, Signal Processing, and Systems, pp. 717-729. https://doi.org/10.1007/978-981-13-6504-1_87
[19] Yi, J.D., Yang, J. (2020). Radar signal recognition based on IFOA-SABP neural network. Systems Engineering and Electronics, 42(12): 2735-2741. https://doi.org/10.3969/j.issn.1001-506X.2020.12.08
[20] Shi, L.M., Yang, C.Z., Wu, H.C. (2020). Radar signal recognition method based on deep residual network and triplet loss. Systems Engineering and Electronics, 42(11): 2506-2512. https://doi.org/10.3969/j.issn.1001-506X.2020.11.12
[21] Bai, J., Gao, L., Gao, J., Li, H., Zhang, R., Lu, Y. (2019). A new radar signal modulation recognition algorithm based on time-frequency transform. In 2019 IEEE 4th International Conference on Signal and Image Processing (ICSIP), pp. 21-25. https://doi.org/10.1109/SIPROCESS.2019.8868675
[22] Gao, L., Zhang, X., Gao, J., You, S. (2019). Fusion image based radar signal feature extraction and modulation recognition. IEEE Access, 7: 13135-13148. https://doi.org/10.1109/ACCESS.2019.2892526
[23] Liu, B., Feng, Y., Yin, Z., Fan, X. (2019). Radar signal emitter recognition based on combined ensemble empirical mode decomposition and the generalized S-transform. Mathematical Problems in Engineering, 2019: Article ID 2739173. https://doi.org/10.1155/2019/2739173
[24] Gao, J., Lu, Y., Qi, J., Shen, L. (2019). A radar signal recognition system based on non-negative matrix factorization network and improved artificial bee colony algorithm. IEEE Access, 7: 117612-117626. https://doi.org/10.1109/ACCESS.2019.2936669
[25] Qu, Z., Hou, C., Hou, C., Wang, W. (2020). Radar signal intra-pulse modulation recognition based on convolutional neural network and deep Q-learning network. IEEE Access, 8: 49125-49136. https://doi.org/10.1109/ACCESS.2020.2980363
[26] Li, D., Yang, R., Li, X., Zhu, S. (2020). Radar signal modulation recognition based on deep joint learning. IEEE Access, 8: 48515-48528. https://doi.org/10.1109/ACCESS.2020.2978875
[27] Wu, B., Yuan, S., Li, P., Jing, Z., Huang, S., Zhao, Y. (2020). Radar emitter signal recognition based on one-dimensional convolutional neural network with attention mechanism. Sensors, 20(21): 6350. https://doi.org/10.3390/s20216350
[28] Wei, S., Qu, Q., Zeng, X., Liang, J., Shi, J., Zhang, X. (2021). Self-attention Bi-LSTM networks for radar signal modulation recognition. IEEE Transactions on Microwave Theory and Techniques, 69(11): 5160-5172. https://doi.org/10.1109/TMTT.2021.3112199
[29] Liu, L., Li, X. (2021). Radar signal recognition based on triplet convolutional neural network. EURASIP Journal on Advances in Signal Processing, 2021(1): 1-16. https://doi.org/10.1186/s13634-021-00821-8
[30] Wang, Y., Cao, G., Su, D., Wang, H., Ren, H. (2021). Embedding bottleneck gated recurrent unit network for radar signal recognition. In 2021 International Joint Conference on Neural Networks (IJCNN), pp. 1-8. https://doi.org/10.1109/IJCNN52387.2021.9533995