Noise-Correlated Two-Stage Cubature Kalman Filtering Estimation Algorithm


Lu Zhang* Gang Huang Daxing Xu Hailun Wang

College of Electrical and Information Engineering, Quzhou University, Quzhou 324000, China

Corresponding Author Email: zhanglu@qzc.edu.cn

Pages: 2355-2364 | DOI: https://doi.org/10.18280/ts.410511

Received: 6 March 2024 | Revised: 19 July 2024 | Accepted: 30 September 2024 | Available online: 31 October 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Existing two-stage filtering algorithms in the literature assume a nonlinear Gaussian system with independent noise: the noises in the state and measurement equations are uncorrelated, and both are Gaussian white sequences. In practical applications, however, noise correlations are common, and traditional methods that ignore them inevitably lose estimation accuracy. This paper proposes a Noise-Correlated Two-Stage Cubature Kalman Filtering Algorithm (TSCKF-CN) based on model transformation. The algorithm introduces a coefficient $\Delta_k$ that transforms the model from a noise-correlated system into a noise-independent one, and then uses the noise of the transformed model in the recursive computation of the two-stage filter to obtain a noise-correlated TSCKF estimator. Simulation results on a pure azimuth (bearing-only) tracking system demonstrate that, by accounting for noise correlation, the method achieves better tracking accuracy than methods that neglect it, leading to improved tracking performance.

Keywords: 

noise correlation, cubature Kalman filtering (CKF), model transformation, Two-Stage Cubature Kalman Filtering (TSCKF), pure azimuth system

1. Introduction

In practical applications, noise correlation in systems is common. It can arise from the combined effects of internal components and external environmental changes [1, 2]; systems with colored measurement noise become noise-correlated after the noise is augmented into the state [3, 4]; and it is frequently encountered in systems requiring multi-sensor information fusion, such as maneuvering target tracking [5]. For noise-correlated systems, the Bayesian recursion framework underlying nonlinear filtering algorithms no longer applies, so the standard nonlinear filtering formulas are no longer valid. Applying conventional two-stage filtering algorithms regardless inevitably degrades estimation accuracy. Two-stage filtering for noise-correlated systems is therefore of significant practical importance. Noise-correlated filtering has been studied extensively: Garcia et al. [4] presented the recursions of Kalman filtering and extended Kalman filtering for noise-correlated systems, Youn et al. [5] and Ran et al. [6] derived the corresponding unscented Kalman filtering processes, and Qian et al. [7-9] extended the work of Youn et al. [5] to cubature Kalman filtering, presenting the cubature Kalman filtering algorithm for noise-correlated systems [10-12].

This paper proposes the TSCKF-CN Algorithm. First, the noise-correlated system is transformed using the identity transformation method. By introducing the coefficient $\Delta_k$, the model is transformed from a noise-correlated system to a noise-independent system. Then, the noise in the new model is used in the recursive computation of the two-stage filtering process, resulting in a TSCKF-CN estimator.

2. Noise-Correlated CKF Algorithm

2.1 System model

Consider the following nonlinear Gaussian system [13, 14]:

$x_{k+1}=f_k\left(x_k\right)+\omega_{k+1, k}$                  (1)

$z_k=h_k\left(x_k\right)+v_k$                 (2)

where, $k$ is the discrete time index, $x_k \in R^{n \times 1}$ is the system state vector, $z_k \in R^{m \times 1}$ is the measurement vector, and $f_k(\cdot)$ and $h_k(\cdot)$ are known nonlinear state transition and measurement functions that are continuously differentiable at $x_k$. The process noise sequence $\omega_{k+1, k}$ and measurement noise sequence $v_k$ are both Gaussian white noise sequences with means $E\left(\omega_{k+1, k}\right)=q_k$ and $E\left(v_k\right)=r_k$, and their covariances $Q_{k+1, k}$ and $R_k$ satisfy the following condition:

$E\left[\binom{\omega_{k+1, k}}{v_k}\left(\begin{array}{ll}\omega_{j+1, j}^T & v_j^T\end{array}\right)\right]=\left[\begin{array}{cc}Q_{k+1, k} & D_k \\ D_k^T & R_k\end{array}\right] \delta_{k j}$

The initial state $x_0$ is independent of $\omega_{k+1, k}$ and $v_k$, and satisfies:

$E\left(x_0\right)=\hat{x}_{0 \mid 0}, E\left(\left[x_0-\hat{x}_{0 \mid 0}\right]\left[x_0-\hat{x}_{0 \mid 0}\right]^T\right)=P_{0 \mid 0}$ 
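As a concrete illustration of the correlation condition above, the following Python sketch (with illustrative covariance values, not taken from the paper) draws jointly Gaussian noise pairs $(\omega, v)$ with cross-covariance $D_k$ and checks that the sample cross-covariance recovers $D_k$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative covariance blocks (values chosen for the sketch, not from the paper):
Q = np.array([[0.5, 0.1], [0.1, 0.4]])   # process noise covariance Q_{k+1,k}
R = np.array([[0.2]])                    # measurement noise covariance R_k
D = np.array([[0.05], [0.02]])           # cross-covariance D_k = E[w v^T]

# Joint covariance of the stacked noise [w; v] as in the correlation condition.
joint = np.block([[Q, D], [D.T, R]])

# Draw correlated (w, v) pairs from the joint Gaussian.
samples = rng.multivariate_normal(np.zeros(3), joint, size=200_000)
w, v = samples[:, :2], samples[:, 2:]

# The sample cross-covariance should be close to D.
D_hat = (w.T @ v) / len(samples)
```

This is exactly the situation in which the standard filtering recursions break down: the process and measurement noises affecting a given step are not independent.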

2.2 Noise-correlated nonlinear Gaussian filtering algorithm

From the system model, it is clear that the system's process noise and measurement noise are correlated, which does not meet the conditions for directly applying the Bayesian filtering framework. Thus, nonlinear Gaussian filtering algorithms cannot be directly used for recursive computation. An intuitive solution is to transform the model using an identity transformation, converting the noise-correlated system into an uncorrelated one, and then proceed with filtering estimation [15-20].

From the model Eq. (2), we have:

$z_k-h_k\left(x_k\right)-v_k=0$

Let $\Delta_k$ be the unknown coefficient, then:

$\Delta_k\left(z_k-h_k\left(x_k\right)-v_k\right)=0$                  (3)

Substitute into Eq. (1) and rearrange to obtain:

$x_{k+1}=f_k\left(x_k\right)+\omega_{k+1, k}+\Delta_k\left(z_k-h_k\left(x_k\right)-v_k\right)=f_k\left(x_k\right)+\Delta_k\left(z_k-h_k\left(x_k\right)\right)+\left(\omega_{k+1, k}-\Delta_k v_k\right)=F_k\left(x_k\right)+\bar{\omega}_k$                  (4)

where,

$F_k\left(x_k\right)=f_k\left(x_k\right)+\Delta_k\left(z_k-h_k\left(x_k\right)\right)$                     (5)

$\bar{\omega}_k=\omega_{k+1, k}-\Delta_k v_k$                    (6)

Thus, the model Eqs. (1) and (2) are transformed into:

$x_{k+1}=F_k\left(x_k\right)+\bar{\omega}_k$                     (7)

$z_k=h_k\left(x_k\right)+v_k$                     (8)

where, $\bar{Q}_k=\operatorname{Cov}\left(\bar{\omega}_k, \bar{\omega}_j\right)=\left(Q_{k+1, k}-\Delta_k R_k \Delta_k^T\right) \delta_{k j}$, with $\Delta_k$ chosen as in Eq. (9) below.

To transform the model from a noise-correlated system to a noise-independent system (i.e., to make the process noise and measurement noise uncorrelated), we require:

$\operatorname{Cov}\left(\bar{\omega}_k, v_k\right)=0$

Expanding this condition gives $\operatorname{Cov}\left(\bar{\omega}_k, v_k\right)=D_k-\Delta_k R_k=0$, and hence:

$\Delta_k=D_k R_k^{-1}$                  (9)

When Eq. (9) is satisfied, the process noise and measurement noise in the system model are no longer correlated, allowing the application of nonlinear Gaussian filtering algorithms. The filtering equations derived from the transformed model are denoted with a superscript t to indicate the noise-correlated nonlinear Gaussian filtering.
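The decorrelation property of Eq. (9) can be verified numerically. The sketch below (illustrative values, matching none of the paper's experiments) checks that $\operatorname{Cov}(\bar{\omega}_k, v_k)=D_k-\Delta_k R_k=0$ and that the transformed process noise covariance collapses to $Q-\Delta_k R_k \Delta_k^T$:

```python
import numpy as np

# Illustrative blocks (not values from the paper):
Q = np.array([[0.5, 0.1], [0.1, 0.4]])
R = np.array([[0.2]])
D = np.array([[0.05], [0.02]])

Delta = D @ np.linalg.inv(R)            # Eq. (9): Delta_k = D_k R_k^{-1}

# Cov(w_bar, v) = E[(w - Delta v) v^T] = D - Delta R, zero by construction.
cross = D - Delta @ R

# Covariance of the transformed process noise w_bar = w - Delta v:
# Q - Delta D^T - D Delta^T + Delta R Delta^T, which collapses to Q - Delta R Delta^T
# once Delta = D R^{-1} is substituted.
Q_bar_full = Q - Delta @ D.T - D @ Delta.T + Delta @ R @ Delta.T
Q_bar_short = Q - Delta @ R @ Delta.T
```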

$\hat{x}_{k \mid k-1}^t=E\left(F_{k-1}\left(x_{k-1}\right)\right)=\int_{R^{n_x}} F_{k-1}\left(x_{k-1}\right) \times N\left(x_{k-1} ; \hat{x}_{k-1 \mid k-1}^t, P_{k-1 \mid k-1}^t\right) d x_{k-1}+q_{k-1}-\Delta_k r_k$                      (10)

$\begin{aligned} P_{k \mid k-1}^t & =E\left(\left(x_k^t-\hat{x}_{k \mid k-1}^t\right)\left(x_k^t-\hat{x}_{k \mid k-1}^t\right)^T\right) \\ & =\int_{R^{n_x}} F_{k-1}\left(x_{k-1}\right) F_{k-1}^T\left(x_{k-1}\right) \times N\left(x_{k-1} ; \hat{x}_{k-1 \mid k-1}^t, P_{k-1 \mid k-1}^t\right) d x_{k-1}-\hat{x}_{k \mid k-1}^t\left(\hat{x}_{k \mid k-1}^t\right)^T+Q_{k, k-1}-\Delta_k R_k \Delta_k^T\end{aligned}$              (11)

$\hat{z}_{k \mid k-1}^t=E\left(h_k\left(x_k\right)\right)=\int_{R^{n_x}} h_k\left(x_k\right) \times N\left(x_k ; \hat{x}_{k \mid k-1}^t, P_{k \mid k-1}^t\right) d x_k+r_k$                   (12)

$\begin{aligned} & P_{z z, k \mid k-1}^t=E\left(\left(z_k^t-\hat{z}_{k \mid k-1}^t\right)\left(z_k^t-\hat{z}_{k \mid k-1}^t\right)^T\right) \\ &=\int_{R^{n_x}} h_k\left(x_k\right) h_k^T\left(x_k\right) \times N\left(x_k ; \hat{x}_{k \mid k-1}^t, P_{k \mid k-1}^t\right) d x_k-\hat{z}_{k \mid k-1}^t\left(\hat{z}_{k \mid k-1}^t\right)^T+R_k\end{aligned}$                     (13)

$\begin{aligned} P_{x z, k \mid k-1}^t=E( & \left.\left(x_k^t-\hat{x}_{k \mid k-1}^t\right)\left(z_k^t-\hat{z}_{k \mid k-1}^t\right)^T\right) \\ & =\int_{R^{n_x}} x_k h_k^T\left(x_k\right) \times N\left(x_k ; \hat{x}_{k \mid k-1}^t, P_{k \mid k-1}^t\right) d x_k-\hat{x}_{k \mid k-1}^t\left(\hat{z}_{k \mid k-1}^t\right)^T\end{aligned}$                          (14)

2.3 Noise-correlated CKF algorithm

Based on the nonlinear Gaussian filtering formulas for noise-correlated systems derived in the previous section, the Noise-Correlated CKF Algorithm is described using the third-order Spherical-Radial cubature rule as follows:

Step 1: Time Update

(1) Given the posterior density function at time k−1:

$p\left(x_{k-1} \mid D_{k-1}\right)=N\left(x_{k-1} ; \hat{x}_{k-1 \mid k-1}^t, P_{k-1 \mid k-1}^t\right)$,

then:

$P_{k-1 \mid k-1}^t=S_{k-1 \mid k-1}^t\left(S_{k-1 \mid k-1}^t\right)^T$

(2) Compute the cubature points $\left(i=1,2, \ldots, m=2 n_x\right)$:

$X_{i, k-1 \mid k-1}^t=S_{k-1 \mid k-1}^t \xi_i+\hat{x}_{k-1 \mid k-1}^t$                  (15)

(3) Propagate the cubature points (i=1,2,…,m):

$\left(X_{i, k-1 \mid k-1}^t\right)^*=F_{k-1}\left(X_{i, k-1 \mid k-1}^t, u_{k-1}\right)$                    (16)

where, $F_k\left(x_k\right)=f_k\left(x_k\right)+\Delta_k\left(z_k-h_k\left(x_k\right)\right)$.

(4) Estimate the state prediction at time k:

$\hat{x}_{k \mid k-1}^t=\frac{1}{m} \sum_{i=1}^m\left(X_{i, k-1 \mid k-1}^t\right)^*+q_{k-1}-\Delta_k r_k$                       (17)

(5) Estimate the state error covariance at time k:

$P_{k \mid k-1}^t=\frac{1}{m} \sum_{i=1}^m\left(X_{i, k-1 \mid k-1}^t\right)^*\left(X_{i, k-1 \mid k-1}^t\right)^{* T}-\hat{x}_{k \mid k-1}^t\left(\hat{x}_{k \mid k-1}^t\right)^T+Q_{k, k-1}-\Delta_k R_k \Delta_k^T$                 (18)

Step 2: Measurement Update

(1) Perform Cholesky decomposition:

$P_{k \mid k-1}^t=S_{k \mid k-1}^t\left(S_{k \mid k-1}^t\right)^T$

(2) Compute the cubature points (i=1,2,…,m)

$X_{i, k \mid k-1}^t=S_{k \mid k-1}^t \xi_i+\hat{x}_{k \mid k-1}^t$                    (19)

(3) Propagate the cubature points (i=1,2,…,m)

$Z_{i, k \mid k-1}^t=h_k\left(X_{i, k \mid k-1}^t, u_k\right)$                      (20)

(4) Estimate the measurement prediction at time k:

$\hat{z}_{k \mid k-1}^t=\frac{1}{m} \sum_{i=1}^m Z_{i, k \mid k-1}^t+r_k$                      (21)

(5) Estimate the measurement error covariance at time k:

$P_{z z, k \mid k-1}^t=\frac{1}{m} \sum_{i=1}^m Z_{i, k \mid k-1}^t\left(Z_{i, k \mid k-1}^t\right)^T-\hat{z}_{k \mid k-1}^t\left(\hat{z}_{k \mid k-1}^t\right)^T+R_k$                  (22)

(6) Estimate the cross-covariance at time k:

$P_{x z, k \mid k-1}^t=\frac{1}{m} \sum_{i=1}^m X_{i, k \mid k-1}^t\left(Z_{i, k \mid k-1}^t\right)^T-\hat{x}_{k \mid k-1}^t\left(\hat{z}_{k \mid k-1}^t\right)^T$                    (23)

(7) Estimate the gain at time k:

$K_k^t=P_{x z, k \mid k-1}^t\left(P_{z z, k \mid k-1}^t\right)^{-1}$                  (24)

(8) Compute the state estimate at time k:

$\hat{x}_{k \mid k}^t=\hat{x}_{k \mid k-1}^t+K_k^t\left(z_k-\hat{z}_{k \mid k-1}^t\right)$                  (25)

(9) Compute the state error covariance estimate at time k:

$P_{k \mid k}^t=P_{k \mid k-1}^t-K_k^t P_{z z, k \mid k-1}^t\left(K_k^t\right)^T$                   (26)
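The recursion of Eqs. (15)-(26) can be sketched in Python as a single step of the noise-correlated CKF. This is a minimal illustration on a toy scalar model, assuming zero-mean noises ($q_{k-1}=0$, $r_k=0$) and no control input; the model functions and covariance values are illustrative only:

```python
import numpy as np

def cubature_points(x, P):
    """Third-order spherical-radial cubature points: m = 2n columns."""
    n = len(x)
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # unit points, scaled
    return x[:, None] + S @ xi

def ckf_cn_step(x_est, P_est, z, f, h, Q, R, D):
    """One recursion of the noise-correlated CKF, Eqs. (15)-(26), zero-mean noise."""
    n = len(x_est)
    m = 2 * n
    Delta = D @ np.linalg.inv(R)                       # Eq. (9)
    F = lambda x: f(x) + Delta @ (z - h(x))            # Eq. (5), transformed dynamics

    # Time update, Eqs. (15)-(18).
    X = cubature_points(x_est, P_est)
    Xp = np.column_stack([F(X[:, i]) for i in range(m)])
    x_pred = Xp.mean(axis=1)
    P_pred = Xp @ Xp.T / m - np.outer(x_pred, x_pred) + Q - Delta @ R @ Delta.T

    # Measurement update, Eqs. (19)-(26).
    X2 = cubature_points(x_pred, P_pred)
    Zp = np.column_stack([h(X2[:, i]) for i in range(m)])
    z_pred = Zp.mean(axis=1)
    Pzz = Zp @ Zp.T / m - np.outer(z_pred, z_pred) + R
    Pxz = X2 @ Zp.T / m - np.outer(x_pred, z_pred)
    K = Pxz @ np.linalg.inv(Pzz)                       # Eq. (24)
    x_new = x_pred + K @ (z - z_pred)                  # Eq. (25)
    P_new = P_pred - K @ Pzz @ K.T                     # Eq. (26)
    return x_new, P_new

# Toy scalar model (all values illustrative):
f = lambda x: 0.9 * x
h = lambda x: np.arctan(x)
Q = np.array([[0.5]]); R = np.array([[0.2]]); D = np.array([[0.05]])
x1, P1 = ckf_cn_step(np.array([0.0]), np.array([[1.0]]), np.array([0.3]), f, h, Q, R, D)
```

Note that the transformed dynamics $F$ consume the current measurement $z_k$, which is the mechanism by which the correlation information enters the prediction step.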

3. TSCKF-CN Algorithm

3.1 System model

Consider the following nonlinear Gaussian system with random biases:

$\begin{gathered}x_{k+1}=f_k\left(x_k\right)+B_k b_k+\omega_{k+1, k}^x \\ b_{k+1}=b_k+\omega_k^b \\ z_k=h_k\left(x_k\right)+F_k b_k+v_k\end{gathered}$                     (27)

where, $k$ is the discrete time index, $x_k \in R^{n \times 1}$ is the state vector, $b_k \in R^{p \times 1}$ is the bias vector, and $z_k \in R^{m \times 1}$ is the measurement vector. The functions $f_k(\cdot)$ and $h_k(\cdot)$ are known nonlinear state transition and measurement functions that are continuously differentiable at $x_k$. The process noise sequence $\omega_{k+1, k}^x$, bias noise sequence $\omega_k^b$, and measurement noise sequence $v_k$ are all Gaussian white noise sequences. The bias noise is uncorrelated with the process and measurement noise, with mean values $E\left(\omega_{k+1, k}^x\right)=q_k^x, E\left(\omega_k^b\right)=0$, and $E\left(v_k\right)=r_k$, and their covariances satisfy the following conditions:

$E\left[\left[\begin{array}{c}\omega_{k+1, k}^x \\ \omega_k^b \\ v_k\end{array}\right]\left[\begin{array}{c}\omega_{j+1, j}^x \\ \omega_j^b \\ v_j\end{array}\right]^T\right]=\left[\begin{array}{ccc}Q_{k+1, k}^x & 0 & D_k \\ 0 & Q_k^b & 0 \\ D_k^T & 0 & R_k\end{array}\right] \delta_{k j}$                       (28)

The initial states $x_0$ and $b_0$ are independent of $\omega_{k+1, k}^x$, $\omega_k^b$, and $v_k$, and satisfy:

$\begin{aligned} & E\left(x_0\right)=\hat{x}_{0 \mid 0}, E\left(\left[x_0-\hat{x}_{0 \mid 0}\right]\left[x_0-\hat{x}_{0 \mid 0}\right]^T\right) =P_{0 \mid 0}^x \\ & E\left(b_0\right)=\hat{b}_{0 \mid 0}, E\left(\left[b_0-\hat{b}_{0 \mid 0}\right]\left[b_0-\hat{b}_{0 \mid 0}\right]^T\right)= P_{0 \mid 0}^b \\ & E\left(\left[x_0-\hat{x}_{0 \mid 0}\right]\left[b_0-\hat{b}_{0 \mid 0}\right]^T\right)=P_{0 \mid 0}^{x b}\end{aligned}$

3.2 TSCKF-CN algorithm

Let:

$\begin{gathered}X_{k+1}=\left[\begin{array}{l}x_{k+1} \\ b_{k+1}\end{array}\right], \Gamma_k\left(X_k\right)=\left[\begin{array}{c}f_k\left(x_k\right)+B_k b_k \\ b_k\end{array}\right], \omega_k=\left[\begin{array}{c}\omega_{k+1, k}^x \\ \omega_k^b\end{array}\right], \\ H_k\left(X_k\right)=h_k\left(x_k\right)+F_k b_k\end{gathered}$

The system model given in Eq. (27) can be rewritten as follows:

$\begin{gathered}X_{k+1}=\Gamma_k\left(X_k\right)+\omega_k \\ Z_k=H_k\left(X_k\right)+v_k\end{gathered}$

where, $E\left(\omega_k \omega_j^T\right)=\left[\begin{array}{cc}Q_{k+1, k}^x & 0 \\ 0 & Q_k^b\end{array}\right] \delta_{k j}, E\left(\omega_k v_j^T\right)=\left[\begin{array}{c}D_k \\ 0\end{array}\right] \delta_{k j}$.

Using the identity transformation described in Section 2.2, model (27) can be transformed into a noise-uncorrelated system of the form of Eqs. (7) and (8), and further rewritten as:

$\begin{gathered}X_{k+1}=F_k\left(X_k\right)+\bar{\omega}_k \\ Z_k=H_k\left(X_k\right)+v_k\end{gathered}$                     (29)

where, $F_k\left(X_k\right)=\Gamma_k\left(X_k\right)+\Delta_k\left(Z_k-H_k\left(X_k\right)\right), \bar{\omega}_k=\omega_k-\Delta_k v_k$.

Theorem 1: The TSCKF-CN algorithm for nonlinear discrete random systems with random biases (27) is as follows:

(1) Noise-correlated unbiased filter based on model transformation:

$\begin{gathered}\bar{X}_{k \mid k-1}^{t 1}=\frac{1}{m} \sum_{i=1}^m\left(\Gamma_{k-1}^1\left(S_{k-1 \mid k-1}^t \xi_i+T\left(\Psi, \bar{X}_{k-1 \mid k-1}^t\right), u_{k-1}\right)+\right. \\ \left.\Delta_k\left(Z_k-H_k\left(S_{k-1 \mid k-1}^t \xi_i+T\left(\Psi, \bar{X}_{k-1 \mid k-1}^t\right), u_{k-1}\right)\right)\right)-\Phi\left(\bar{X}_{k \mid k-1}^{t 2}\right)+m_{k-1}^n \end{gathered}$

$\begin{gathered}\bar{X}_{k \mid k}^{t 1}=\bar{X}_{k \mid k-1}^{t 1}+\Phi\left(\bar{X}_{k \mid k-1}^{t 2}\right)+V_k\left(\bar{X}_{k \mid k}^{t 2}-\bar{X}_{k \mid k-1}^{t 2}\right)-\Psi\left(\bar{X}_{k \mid k}^{t 2}\right)+ \\ \bar{K}_k^{t 1}\left(Z_k-\frac{1}{m} \sum_{i=1}^m H_k\left(S_{k \mid k-1}^t \xi_i+T\left(\Phi, \bar{X}_{k \mid k-1}^t\right), u_k\right)\right)+m_{k-1}^n+r_k^n \\ \bar{P}_{k \mid k-1}^{t 1}=M_{k-1}^{t 11}+Q_{k-1}^{t 11}-U_k\left(M_{k-1}^{t 22}+Q_{k-1}^{t 22}\right) U_k^T\end{gathered}$

$\begin{gathered}\bar{P}_{k \mid k}^{t 1}=\bar{P}_{k \mid k-1}^{t 1}+U_k \bar{P}_{k \mid k-1}^{t 2} U_k^T-V_k \bar{P}_{k \mid k-1}^{t 2} V_k^T-\bar{K}_k^{t 1} P_{z z, k \mid k-1}^t\left(\bar{K}_k^{t 1}\right)^T-\bar{K}_k^{t 1} P_{z z, k \mid k-1}^t\left(\bar{K}_k^{t 2}\right)^T V_k^T-\left(\bar{K}_k^{t 1} P_{z z, k \mid k-1}^t\left(\bar{K}_k^{t 2}\right)^T V_k^T\right)^T \\ \bar{K}_k^{t 1}=K_k^{t 1}-V_k K_k^{t 2}\end{gathered}$

(2) Noise-correlated bias filter based on model transformation:

$\begin{gathered}\bar{X}_{k \mid k-1}^{t 2}=\frac{1}{m} \sum_{i=1}^m \Gamma_{k-1}^2\left(S_{k-1 \mid k-1}^t \xi_i+T\left(\Psi, \bar{X}_{k-1 \mid k-1}^t\right), u_{k-1}\right)+\Delta_k\left(Z_k-H_k\left(S_{k-1 \mid k-1}^t \xi_i+T\left(\Psi, \bar{X}_{k-1 \mid k-1}^t\right), u_{k-1}\right)\right)+m_{k-1}^p \\ \bar{X}_{k \mid k}^{t 2}=\bar{X}_{k \mid k-1}^{t 2}+\bar{K}_k^{t 2}\left(Z_k-\frac{1}{m} \sum_{i=1}^m H_k\left(S_{k \mid k-1}^t \xi_i+T\left(\Phi, \bar{X}_{k \mid k-1}^t\right), u_k\right)\right)+m_{k-1}^p+r_k^p \\ \bar{P}_{k \mid k-1}^{t 2}=M_{k-1}^{t 22}+Q_{k-1}^{t 22} \\ \bar{P}_{k \mid k}^{t 2}=\bar{P}_{k \mid k-1}^{t 2}-\bar{K}_k^{t 2} P_{z z, k \mid k-1}^t\left(\bar{K}_k^{t 2}\right)^T \\ \bar{K}_k^{t 2}=K_k^{t 2}\end{gathered}$

Coupling Relationship Between the Two Filters:

$\begin{gathered}U_k=\left(M_{k-1}^{t 12}+Q_{k-1}^{t 12}\right)\left(M_{k-1}^{t 22}+Q_{k-1}^{t 22}\right)^{-1} \\ V_k=U_k-\bar{K}_k^{t 1} P_{z z, k \mid k-1}^t\left(\bar{K}_k^{t 2}\right)^T\left(\bar{P}_{k \mid k-1}^{t 2}\right)^{-1}\end{gathered}$

Proof:

Let $m_{k-1}=q_{k-1}-\Delta_k r_k$, and partition $m_{k-1}$ according to the dimensions of the state and bias vectors:

$m_{k-1}=\left[\begin{array}{l}m_{k-1}^n \\ m_{k-1}^p\end{array}\right]$                   (30)

where, $m_{k-1}^n$ represents the first $n$ dimensions of the vector $m_{k-1}$, and $m_{k-1}^p$ represents the last $p$ dimensions.

Similarly, partition $r_k$ as:

$r_k=\left[\begin{array}{l}r_k^n \\ r_k^p\end{array}\right]$                  (31)

where, $r_k^n$ represents the first $n$ dimensions of the vector $r_k$, and $r_k^p$ represents the last $p$ dimensions.

The T-transformation for linear systems, as used in Sun et al. [9], can be extended to nonlinear systems as described in Al-Ghattas et al. [10] and Dong et al. [11]. This results in the following nonlinear T-transformation:

$T(F, X)=\left[\begin{array}{c}X^1+F\left(X^2\right) \\ X^2\end{array}\right]$                  (32)

where, $X=\left[\left(X^1\right)^T,\left(X^2\right)^T\right]^T, X^1 \in R^{n-p}, X^2 \in R^p$, and $F\left(X^2\right)$ is a nonlinear function of the sub-state $X^2$.

Based on the properties of this nonlinear T-transformation, the nonlinear two-stage transformation formulas can be derived.

$\widehat{X}_{k \mid k-1}=T\left(\Phi, \bar{X}_{k \mid k-1}\right)$                   (33)

$\hat{X}_{k \mid k}=T\left(\Psi, \bar{X}_{k \mid k}\right)$                     (34)

$P_{k \mid k-1}=\frac{\partial T\left(\Phi, \bar{X}_{k \mid k-1}\right)}{\partial \bar{X}_{k \mid k-1}} \bar{P}_{k \mid k-1}\left(\frac{\partial T\left(\Phi, \bar{X}_{k \mid k-1}\right)}{\partial \bar{X}_{k \mid k-1}}\right)^T$                   (35)

$P_{k \mid k}=\frac{\partial T\left(\Psi, \bar{X}_{k \mid k}\right)}{\partial \bar{X}_{k \mid k}} \bar{P}_{k \mid k}\left(\frac{\partial T\left(\Psi, \bar{X}_{k \mid k}\right)}{\partial \bar{X}_{k \mid k}}\right)^T$                    (36)

$K_k=\frac{\partial T\left(\Psi, \bar{X}_{k \mid k}\right)}{\partial \bar{X}_{k \mid k}} \bar{K}_k$                   (37)

where, $\Phi$ and $\Psi$ are two defined nonlinear functions.

Formulas (33)-(37) have the following properties:

$\frac{\partial T\left(\Phi, \bar{X}_{k \mid k-1}\right)}{\partial \bar{X}_{k \mid k-1}}=\left[\begin{array}{cc}I_{n-p} & U_k \\ 0 & I_p\end{array}\right] \equiv T\left(U_k\right)$                      (38)

$\frac{\partial T\left(\Psi, \bar{X}_{k \mid k}\right)}{\partial \bar{X}_{k \mid k}}=\left[\begin{array}{cc}I_{n-p} & V_k \\ 0 & I_p\end{array}\right] \equiv T\left(V_k\right)$                    (39)

In summary, $U_k$ and $V_k$ are given as in formula (40):

$U_k=\frac{\partial \Phi\left(\bar{X}_{k \mid k-1}^2\right)}{\partial \bar{X}_{k \mid k-1}^2}, V_k=\frac{\partial \Psi\left(\bar{X}_{k \mid k}^2\right)}{\partial \bar{X}_{k \mid k}^2}$                  (40)

Substituting formulas (17) and (25) into the left-hand side of formulas (33) and (34), and formula (32) into the right-hand side, yields:

$\left[\begin{array}{c}\bar{X}_{k \mid k-1}^{t 1}+\Phi\left(\bar{X}_{k \mid k-1}^{t 2}\right) \\ \bar{X}_{k \mid k-1}^{t 2}\end{array}\right]=\frac{1}{m} \sum_{i=1}^m F_{k-1}\left(S_{k-1 \mid k-1}^t \xi_i+T\left(\Psi, \quad \bar{X}_{k-1 \mid k-1}^t\right), u_{k-1}\right)+m_{k-1}$                   (41)

where, $F_{k-1}\left(X_{k-1}\right)=\Gamma_k\left(X_{k-1}\right)+\Delta_k\left(Z_k-H_k\left(X_{k-1}\right)\right)$.

$\left[\begin{array}{c}\bar{X}_{k \mid k}^{t 1}+\Psi\left(\bar{X}_{k \mid k}^{t 2}\right) \\ \bar{X}_{k \mid k}^{t 2}\end{array}\right]=\left[\begin{array}{c}\bar{X}_{k \mid k-1}^{t 1}+\Phi\left(\bar{X}_{k \mid k-1}^{t 2}\right) \\ \bar{X}_{k \mid k-1}^{t 2}\end{array}\right]+K_k^t\left(Z_k-\frac{1}{m} \sum_{i=1}^m H_k\left(S_{k \mid k-1}^t \xi_i+T\left(\Phi, \quad \bar{X}_{k \mid k-1}^t\right), u_k\right)\right)+m_{k-1}+r_k$                     (42)

Substituting formula (37) into Eqs. (41) and (42), then expanding and rearranging, yields:

$\begin{aligned} \bar{X}_{k \mid k-1}^{t 1}=\frac{1}{m} \sum_{i=1}^m & \left(\Gamma_{k-1}^1\left(S_{k-1 \mid k-1}^t \xi_i+T\left(\Psi, \bar{X}_{k-1 \mid k-1}^t\right), u_{k-1}\right)\right. \\ & \left.\quad+\Delta_k\left(Z_k-H_k\left(S_{k-1 \mid k-1}^t \xi_i+T\left(\Psi, \bar{X}_{k-1 \mid k-1}^t\right), u_{k-1}\right)\right)\right)-\Phi\left(\bar{X}_{k \mid k-1}^{t 2}\right)+m_{k-1}^n\end{aligned}$                   (43)

$\begin{aligned} \bar{X}_{k \mid k}^{t 1}=\bar{X}_{k \mid k-1}^{t 1}+ & \Phi\left(\bar{X}_{k \mid k-1}^{t 2}\right)+V_k\left(\bar{X}_{k \mid k}^{t 2}-\bar{X}_{k \mid k-1}^{t 2}\right)-\Psi\left(\bar{X}_{k \mid k}^{t 2}\right) \\ & +\bar{K}_k^{t 1}\left(Z_k-\frac{1}{m} \sum_{i=1}^m H_k\left(S_{k \mid k-1}^t \xi_i+T\left(\Phi, \bar{X}_{k \mid k-1}^t\right), u_k\right)\right)+m_{k-1}^n+r_k^n\end{aligned}$                 (44)

$\begin{aligned} \bar{X}_{k \mid k-1}^{t 2}=\frac{1}{m} \sum_{i=1}^m & \Gamma_{k-1}^2\left(S_{k-1 \mid k-1}^t \xi_i+T\left(\Psi, \bar{X}_{k-1 \mid k-1}^t\right), u_{k-1}\right) \\ & +\Delta_k\left(Z_k-H_k\left(S_{k-1 \mid k-1}^t \xi_i+T\left(\Psi, \bar{X}_{k-1 \mid k-1}^t\right), u_{k-1}\right)\right)+m_{k-1}^p\end{aligned}$                   (45)

$\bar{X}_{k \mid k}^{t 2}=\bar{X}_{k \mid k-1}^{t 2}+\bar{K}_k^{t 2}\left(Z_k-\frac{1}{m} \sum_{i=1}^m H_k\left(S_{k \mid k-1}^t \xi_i+T\left(\Phi, \bar{X}_{k \mid k-1}^t\right), u_k\right)\right)+m_{k-1}^p+r_k^p$                    (46)

where, $\Gamma_{k-1}(\cdot)=\left[\begin{array}{ll}\left(\Gamma_{k-1}^1(\cdot)\right)^T & \left(\Gamma_{k-1}^2(\cdot)\right)^T\end{array}\right]^T, \bar{K}_k^t=\left[\begin{array}{ll}\left(\bar{K}_k^{t 1}\right)^T & \left(\bar{K}_k^{t 2}\right)^T\end{array}\right]^T$.

For Eq. (18), let:

$M_{k-1}^t=\frac{1}{m} \sum_{i=1}^m\left(X_{i, k-1 \mid k-1}^t\right)^*\left(X_{i, k-1 \mid k-1}^t\right)^{* T}-\hat{x}_{k \mid k-1}^t\left(\hat{x}_{k \mid k-1}^t\right)^T$

Based on the dimensions of the block matrices in the two-stage transformation formula, $M_{k-1}^t$ is partitioned:

$M_{k-1}^t=\left[\begin{array}{cc}M_{k-1}^{t 11} & M_{k-1}^{t 12} \\ \left(M_{k-1}^{t 12}\right)^T & M_{k-1}^{t 22}\end{array}\right]$

Similarly, let $Q_{k-1}^t=Q_{k, k-1}-\Delta_k R_k \Delta_k^T$, and partition the state noise variance matrix:

$Q_{k-1}^t=\left[\begin{array}{cc}Q_{k-1}^{t 11} & Q_{k-1}^{t 12} \\ \left(Q_{k-1}^{t 12}\right)^T & Q_{k-1}^{t 22}\end{array}\right]$

Substituting the two block matrices into formula (18) yields:

$P_{k \mid k-1}^t=\left[\begin{array}{cc}M_{k-1}^{t 11}+Q_{k-1}^{t 11} & M_{k-1}^{t 12}+Q_{k-1}^{t 12} \\ \left(M_{k-1}^{t 12}+Q_{k-1}^{t 12}\right)^T & M_{k-1}^{t 22}+Q_{k-1}^{t 22}\end{array}\right]$                  (47)

Substituting formula (47) into the left-hand side of formula (35), and formula (38) into the right-hand side, and expanding yields:

$\bar{P}_{k \mid k-1}^{t 1}=M_{k-1}^{t 11}+Q_{k-1}^{t 11}-U_k\left(M_{k-1}^{t 22}+Q_{k-1}^{t 22}\right) U_k^T$                    (48)

$\bar{P}_{k \mid k-1}^{t 2}=M_{k-1}^{t 22}+Q_{k-1}^{t 22}$                      (49)

$U_k=\left(M_{k-1}^{t 12}+Q_{k-1}^{t 12}\right)\left(M_{k-1}^{t 22}+Q_{k-1}^{t 22}\right)^{-1}$                        (50)

Similarly, expanding formula (36) and substituting formulas (39) and (37) yields:

$\begin{aligned} \bar{P}_{k \mid k}^{t 1}= & \bar{P}_{k \mid k-1}^{t 1}+U_k \bar{P}_{k \mid k-1}^{t 2} U_k^T-V_k \bar{P}_{k \mid k-1}^{t 2} V_k^T-\bar{K}_k^{t 1} P_{z z, k \mid k-1}^t\left(\bar{K}_k^{t 1}\right)^T \\ & -\bar{K}_k^{t 1} P_{z z, k \mid k-1}^t\left(\bar{K}_k^{t 2}\right)^T V_k^T-\left(\bar{K}_k^{t 1} P_{z z, k \mid k-1}^t\left(\bar{K}_k^{t 2}\right)^T V_k^T\right)^T\end{aligned}$               (51)

$\bar{P}_{k \mid k}^{t 2}=\bar{P}_{k \mid k-1}^{t 2}-\bar{K}_k^{t 2} P_{z z, k \mid k-1}^t\left(\bar{K}_k^{t 2}\right)^T$                        (52)

$V_k=U_k-\bar{K}_k^{t 1} P_{z z, k \mid k-1}^t\left(\bar{K}_k^{t 2}\right)^T\left(\bar{P}_{k \mid k-1}^{t 2}\right)^{-1}$                      (53)

The block gain matrix obtained by partitioning formula (24) according to the corresponding dimensions is shown as formula (54):

$K_k^t=\left[\begin{array}{l}K_k^{t 1} \\ K_k^{t 2}\end{array}\right]$                      (54)

Substituting formula (54) into formula (37) and expanding yields:

$\bar{K}_k^{t 1}=K_k^{t 1}-V_k K_k^{t 2}$                   (55)

$\bar{K}_k^{t 2}=K_k^{t 2}$                 (56)

The proof is complete.
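The block-partition identities (47)-(50) can be checked numerically: applying the two-stage transform $T(U_k)$ of Eq. (38) to the decoupled block-diagonal covariances reconstructs the full predicted covariance exactly. A sketch with an arbitrary positive-definite matrix (dimensions and values illustrative):

```python
import numpy as np

# Illustrative 3x3 predicted covariance, partitioned with two state rows
# and one bias row (values are arbitrary but positive definite).
P = np.array([[2.0, 0.3, 0.4],
              [0.3, 1.5, 0.2],
              [0.4, 0.2, 1.0]])
P11, P12, P22 = P[:2, :2], P[:2, 2:], P[2:, 2:]

U = P12 @ np.linalg.inv(P22)           # Eq. (50)
P1_bar = P11 - U @ P22 @ U.T           # Eq. (48): a Schur complement
P2_bar = P22                           # Eq. (49)

# The two-stage transform T(U) of Eq. (38) recouples the decoupled blocks:
# T(U) diag(P1_bar, P2_bar) T(U)^T should equal the original P.
T_U = np.block([[np.eye(2), U],
                [np.zeros((1, 2)), np.eye(1)]])
P_rebuilt = T_U @ np.block([[P1_bar, np.zeros((2, 1))],
                            [np.zeros((1, 2)), P2_bar]]) @ T_U.T
```

The identity holds term by term: $T(U)\,\mathrm{diag}(\bar{P}^{1}, \bar{P}^{2})\,T(U)^T$ has off-diagonal block $U\bar{P}^{2}=P^{12}$ and upper-left block $\bar{P}^{1}+U\bar{P}^{2}U^T=P^{11}$.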

The flowchart of the TSCKF-CN algorithm based on model transformation is shown in Figure 1.

Figure 1. Flowchart of the TSCKF-CN algorithm based on model transformation

The TSCKF-CN algorithm is described as follows:

Algorithm 1: The TSCKF-CN algorithm based on model transformation

Initialization of State Conditions:

    $\hat{X}_0=E\left[X_0\right], P_0=E\left[\left(X_0-\hat{X}_0\right)\left(X_0-\hat{X}_0\right)^T\right]$

    $E\left(\bar{\omega}_0\right)=q_0-\Delta_0 r_0, E\left(v_0\right)=r_0, \hat{Q}_0=Q_0, \widehat{R}_0=R_0$

for $k=1,2, \cdots, N$  do

Step 1: Time Update:

(1) Decompose $P_{k-1 \mid k-1}^t$ to obtain $S_{k-1 \mid k-1}^t$.

(2) Calculate the cubature points $X_{i, k-1 \mid k-1}^t$ (15) and propagate the cubature points $\left(X_{i, k-1 \mid k-1}^t\right)^*$ (16).

(3) Estimate the noise-correlated unbiased filter state prediction $\bar{X}_{k \mid k-1}^{t 1}$ (43) and biased filter state prediction $\bar{X}_{k \mid k-1}^{t 2}$ (45) using $m_{k-1}$ (30).

(4) Estimate the noise-correlated unbiased filter state error covariance $\bar{P}_{k \mid k-1}^{t 1}$ (48) and biased filter state error covariance $\bar{P}_{k \mid k-1}^{t 2}$ (49) using the coupling relationship $U_k$ (50).

Step 2: Measurement Update:

(1) Decompose $P_{k \mid k-1}^t$ to obtain $S_{k \mid k-1}^t$.

(2) Calculate the cubature points $X_{i, k \mid k-1}^t$ (19) and the propagated cubature points from the measurement equation $Z_{i, k \mid k-1}^t$ (20).

(3) Estimate the noise-correlated measurement prediction $\hat{z}_{k \mid k-1}^t$ (21).

(4) Estimate the noise-correlated measurement error covariance $P_{z z, k \mid k-1}^t$ (22) and the noise-correlated cross-covariance $P_{x z, k \mid k-1}^t$ (23).

(5) Estimate the noise-correlated unbiased filter Kalman gain $\bar{K}_k^{t 1}$ (55) and the noise-correlated biased filter Kalman gain $\bar{K}_k^{t 2}$ (56).

(6) Calculate the noise-correlated unbiased filter state estimate $\bar{X}_{k \mid k}^{t 1}$ (44) and the noise-correlated biased filter state estimate $\bar{X}_{k \mid k}^{t 2}$ (46) using $m_{k-1}$ (30) and $r_k$ (31).

(7) Calculate the noise-correlated unbiased filter estimate error covariance $\bar{P}_{k \mid k}^{t 1}$ (51) and the noise-correlated biased filter estimate error covariance $\bar{P}_{k \mid k}^{t 2}$ (52) using $V_k$ (53).

end for

4. Experiment and Analysis

The pure azimuth (bearing-only) tracking system tracks the state of a moving target through two sensors [12], obtaining nonlinear measurements. Each sensor obtains only angular observations of the target state, denoted $\alpha_{i, k}$ and $\beta_{i, k}$, as shown in Figure 2. The two angle observations intersect at a point in the plane coordinate system. In the rectangular coordinate system of Figure 2, sensors $S_{i 1}$ and $S_{i 2}(i=1,2, \cdots, N)$ are fixed on platforms $P_1$ and $P_2$, respectively, separated by a distance $d$. Multiple sensors are fixed on platforms $P_j(j=1,2)$, denoted $\left\{\left(S_{1, j}, P_j\right),\left(S_{2, j}, P_j\right), \cdots,\left(S_{N, j}, P_j\right)\right\}$, corresponding to the nonlinear measurements $\left\{\left(\alpha_{1, k}, \beta_{1, k}\right),\left(\alpha_{2, k}, \beta_{2, k}\right), \cdots,\left(\alpha_{N, k}, \beta_{N, k}\right)\right\}$.

Figure 2. Pure azimuth tracking system diagram

The dynamic model is a four-dimensional nonlinear system with state $x_k=\left[\begin{array}{llll}x_{1, k} & x_{2, k} & y_{1, k} & y_{2, k}\end{array}\right]^T$, where $x_{1, k}$ and $y_{1, k}$ are the displacement components in the east and north directions, respectively, and $x_{2, k}$ and $y_{2, k}$ are the corresponding velocity components (consistent with the transition matrix $F_k$ and the observation function below). The target motion is modeled as a constant velocity (CV) model. The state equations and noise variance are as follows:

$\begin{gathered}x_{k+1}=F_k x_k+b_k+\omega_k^x \\ b_{k+1}=b_k+\omega_k^b\end{gathered}$

where, $F_k=\left[\begin{array}{cccc}1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1\end{array}\right]$, the process noise variance is:

$Q_k=\left[\begin{array}{cccc}\frac{T^3}{3} & \frac{T^2}{2} & 0 & 0 \\ \frac{T^2}{2} & T & 0 & 0 \\ 0 & 0 & \frac{T^3}{3} & \frac{T^2}{2} \\ 0 & 0 & \frac{T^2}{2} & T\end{array}\right] \times 0.5$,

with a tracking period $T=1 s$.
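The CV model above can be sketched directly; the state ordering assumed here is $[x_{1,k}, x_{2,k}, y_{1,k}, y_{2,k}]$ with positions in components 1 and 3 and velocities in components 2 and 4, matching $F_k$:

```python
import numpy as np

T = 1.0  # tracking period, seconds

# Constant-velocity transition matrix F_k: each position advances by T * velocity.
F = np.array([[1, T, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, T],
              [0, 0, 0, 1]], dtype=float)

# Process noise variance Q_k, built from the standard CV discretization block.
block = np.array([[T**3 / 3, T**2 / 2],
                  [T**2 / 2, T]])
Q = 0.5 * np.block([[block, np.zeros((2, 2))],
                    [np.zeros((2, 2)), block]])

# One noise-free propagation step from the paper's initial state estimate.
x0 = np.array([0.0, 10.0, 3.0, 10.0])
x_next = F @ x0
```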

According to the cross-principle, the observation function is:

$h_k\left(x_k\right)=\left[\begin{array}{l}\alpha_1 \\ \beta_1\end{array}\right]=\left[\begin{array}{c}\arccos \left(\frac{x_{1, k}}{\sqrt{x_{1, k}^2+y_{1, k}^2}}\right) \\ \arccos \left(\frac{x_{1, k}-d}{\sqrt{\left(x_{1, k}-d\right)^2+y_{1, k}^2}}\right)\end{array}\right]$.

In a multi-sensor system, the measurement equation is:

$z_{i, k}=\left[\begin{array}{c}\arccos \left(\frac{x_{1, k}}{\sqrt{x_{1, k}^2+y_{1, k}^2}}\right) \\ \arccos \left(\frac{x_{1, k}-d}{\sqrt{\left(x_{1, k}-d\right)^2+y_{1, k}^2}}\right)\end{array}\right]+b_k+v_{i, k}, \quad i=1,2, \cdots, N$

where, $h_{1, k}\left(x_k\right)=h_{2, k}\left(x_k\right)=\cdots=h_{N, k}\left(x_k\right)=h_k\left(x_k\right)$.
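The bearing observation function can be sketched as a direct transcription of $h_k$; the platform spacing $d=500$ follows the simulation setup, the test point is illustrative, and the velocity components do not enter the measurement:

```python
import numpy as np

d = 500.0  # baseline distance between the two sensor platforms

def bearings(x):
    """Bearing-only measurement h_k(x_k): angles seen from the two platforms.

    x = [x1, x2, y1, y2] with x1 the east and y1 the north displacement
    (x2 and y2 are velocities and are not observed).
    """
    x1, y1 = x[0], x[2]
    alpha = np.arccos(x1 / np.hypot(x1, y1))
    beta = np.arccos((x1 - d) / np.hypot(x1 - d, y1))
    return np.array([alpha, beta])

# Illustrative target position (east 100, north 200):
z = bearings(np.array([100.0, 10.0, 200.0, 10.0]))
```

For $y_{1,k}>0$, $\arccos\left(x_{1,k}/\sqrt{x_{1,k}^2+y_{1,k}^2}\right)$ coincides with the bearing angle $\operatorname{atan2}(y_{1,k}, x_{1,k})$, which is why the two angles fix the target position by intersection.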

The simulation experiment was conducted on a computer with hardware configuration: a quad-core Intel Core i5 4258U 2.4 GHz processor, 4 GB of memory, running Windows 7 64-bit Professional, and using MATLAB version 2013a for simulation.

Assume $v_{i, k}=c_i \omega_{k, k-1}$; then $D_{i, k}=E\left(\omega_{k, k-1} v_{i, k}^T\right)=Q_{k, k-1} c_i^T$ and $R_{i j, k}=E\left(v_{i, k} v_{j, k}^T\right)=c_i Q_{k, k-1} c_j^T$.

The initial state estimate and covariance matrix are: $\hat{x}_{0 \mid 0}=\left[\begin{array}{llll}0 & 10 & 3 & 10\end{array}\right]^T, \hat{b}_{0 \mid 0}=\left[\begin{array}{ll}0.1\end{array}\right]$, $P_{0 \mid 0}^x=\operatorname{diag}\left\{\begin{array}{llll}1 & 1 & 1 & 1\end{array}\right\}$, $P_{0 \mid 0}^b=\left[\begin{array}{lllll}0 & 0 & 0 & 0 & 0.5\end{array}\right]$, $R_k^x=\left[\begin{array}{cc}0.15 & 0.01 \\ 0.01 & 0.01\end{array}\right], R_k^b=0.5 \times T^2$, $d=500, c_1=\left[\begin{array}{cccc}0.01 & 0.01 & 0.001 & 0.001 \\ 0.001 & 0.001 & 0.001 & 0.001\end{array}\right]$.

The simulation time is set to 200 seconds, with 1000 Monte Carlo simulations performed for both algorithms. The Root Mean Square Error (RMSE) is defined as in Eq. (57) to compare the performance of various filtering algorithms:

$\operatorname{RMSE}(k)=\sqrt{\frac{1}{M} \sum_{i=1}^M\left(x_{*, k}^{(i)}-\hat{x}_{*, k \mid k}^{(i)}\right)^2}$                  (57)

where, $M$ is the number of Monte Carlo trials, and $x_{*, k}^{(i)}$ and $\hat{x}_{*, k \mid k}^{(i)}$ denote the true value and the estimated value of $x_*$ in the $i$-th Monte Carlo run, respectively.
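Eq. (57) amounts to a per-time-step root mean square over the Monte Carlo runs; a minimal sketch (Python, with an assumed array layout of runs × time steps):

```python
import numpy as np

def rmse_per_step(truth, estimates):
    """Eq. (57): RMSE over M Monte Carlo runs for one state component.
    truth, estimates: arrays of shape (M, K) -- M runs, K time steps.
    Returns an array of K per-time-step RMSE values."""
    truth = np.asarray(truth, dtype=float)
    estimates = np.asarray(estimates, dtype=float)
    # Average squared error over the run axis, then take the square root
    return np.sqrt(np.mean((truth - estimates) ** 2, axis=0))
```

For example, with two runs and two time steps, `rmse_per_step([[1, 2], [3, 4]], [[0, 2], [3, 2]])` averages the squared errors $\{1, 0\}$ and $\{0, 4\}$ column-wise before taking the root.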

Figures 2 to 6 compare the RMSE of the CKF algorithm and the TSCKF-CN algorithm. The RMSE curves of the two algorithms nearly coincide, indicating that their estimation accuracy is comparable.

Figure 3. RMSE comparison of $x_{1, k}$ for two algorithms

Figure 4. RMSE comparison of $x_{2, k}$ for two algorithms

Figures 7 and 8 compare the estimated values of the bias terms and their RMSE. The curves of the two algorithms are essentially consistent, with errors within an acceptable range; the small residual discrepancy is attributable to numerical round-off during computation.

The state estimation RMSE is computed using Eq. (57), and the results are listed in Table 1. The RMSE of the TSCIF-CN algorithm is slightly lower than that of the TSCKF-CN algorithm, but the two are very close. This is because both algorithms account for the correlated noise in the recursive estimation process and handle it appropriately, so the two noise-correlated algorithms achieve similar accuracy and comparable tracking performance.

Figure 5. RMSE comparison of $y_{1, k}$ for two algorithms

Figure 6. RMSE comparison of $y_{2, k}$ for two algorithms

Figure 7. Comparison of estimated values of $b_k$ for two algorithms

Figure 8. Comparison of RMSE of $b_k$ for two algorithms

Table 1. State estimation RMSE

State component       TSCKF-CN    TSCIF-CN
$x_{1, k}$ (m)         1.6689      1.5312
$x_{2, k}$ (m/s)       0.2056      0.1980
$y_{1, k}$ (m)         0.5545      0.5420
$y_{2, k}$ (m/s)       0.0564      0.0550
$b_k$ (m)              0.2479      0.2280

5. Conclusion

In practical applications, noise-correlated nonlinear systems are very common. If the correlation is ignored and a conventional two-stage filtering algorithm is applied, estimation accuracy is bound to decrease. Based on the minimum variance estimation criterion, this paper first proposes a model transformation for the TSCKF algorithm: by introducing a transformation coefficient matrix, the noise-correlated system is converted into a noise-uncorrelated one, and the conversion relationship between the two is established. The transformed noise statistics are then used in the recursive computation of the two-stage filter to obtain the TSCKF-CN estimator. Finally, simulation experiments demonstrate that the proposed method effectively handles correlated noise, achieving higher tracking accuracy than methods that ignore the correlation and thus yielding better tracking results.

Acknowledgment

This paper was supported by Natural Science Foundation of Zhejiang Province (Grants No.: LZY22E050005, LZY23F030001).

References

[1] Phan, M.Q., Vicario, F., Longman, R.W., Betti, R. (2018). State-space model and Kalman filter gain identification by a Kalman filter of a Kalman filter. Journal of Dynamic Systems, Measurement, and Control, 140(3): 030902. https://doi.org/10.1115/1.4037778

[2] Luo, Z., Fu, Z., Xu, Q. (2020). An adaptive multi-dimensional vehicle driving state observer based on modified Sage–Husa UKF algorithm. Sensors, 20(23): 6889. https://doi.org/10.3390/s20236889

[3] Qiu, Z., Yang, J., Guo, L. (2021). Stochastic stable attitude estimation algorithm using UKF with measurement loss. IEEE/ASME Transactions on Mechatronics, 27(2): 1059-1069. https://doi.org/10.1109/TMECH.2021.3078888

[4] Garcia, R.V., Pardal, P.C.P.M., Kuga, H.K., Zanardi, M.C. (2019). Nonlinear filtering for sequential spacecraft attitude estimation with real data: Cubature Kalman filter, unscented Kalman filter and extended Kalman filter. Advances in Space Research, 63(2): 1038-1050. https://doi.org/10.1016/j.asr.2018.10.003

[5] Youn, W., Ko, N.Y., Gadsden, S.A., Myung, H. (2020). A novel multiple-model adaptive Kalman filter for an unknown measurement loss probability. IEEE Transactions on Instrumentation and Measurement, 70: 1-11. https://doi.org/10.1109/TIM.2020.3023213

[6] Ran, D., de Ruiter, A.H., Yao, W., Chen, X. (2019). Distributed and reliable output feedback control of spacecraft formation with velocity constraints and time delays. IEEE/ASME Transactions on Mechatronics, 24(6): 2541-2549. https://doi.org/10.1109/TMECH.2019.2951812

[7] Qian, J., Zi, B., Wang, D., Ma, Y., Zhang, D. (2017). The design and development of an omni-directional mobile robot oriented to an intelligent manufacturing system. Sensors, 17(9): 2073. https://doi.org/10.3390/s17092073

[8] Pak, J.M. (2021). Switching extended Kalman filter bank for indoor localization using wireless sensor networks. Electronics, 10(6): 718. https://doi.org/10.3390/electronics10060718

[9] Sun, S., Dong, K., Guo, C., Tan, D. (2021). A wind estimation based on unscented Kalman filter for standoff target tracking using a fixed-wing UAV. International Journal of Aeronautical and Space Sciences, 22(2): 366-375. https://doi.org/10.1007/s42405-020-00290-7

[10] Al-Ghattas, O., Bao, J., Sanz-Alonso, D. (2024). Ensemble Kalman filters with resampling. SIAM/ASA Journal on Uncertainty Quantification, 12(2): 411-441. https://doi.org/10.1137/23M1594935

[11] Dong, B., Bannister, R., Chen, Y., Fowler, A., Haines, K. (2023). Simplified Kalman smoother and ensemble Kalman smoother for improving reanalyses. Geoscientific Model Development, 16(14): 4233-4247. https://doi.org/10.5194/gmd-16-4233-2023

[12] Lou, T.S., Chen, N., Zhao, L. (2024). Adaptive fast desensitized Kalman filter. Circuits, Systems, and Signal Processing, 43(11): 7364-7386. https://doi.org/10.1007/s00034-024-02801-3

[13] Nosrati, K., Belikov, J., Tepljakov, A., Petlenkov, E. (2023). Extended fractional singular Kalman filter. Applied Mathematics and Computation, 448: 127950. https://doi.org/10.1016/j.amc.2023.127950

[14] Naik, A.K., Upadhyay, P.K., Singh, A.K. (2023). Gaussian kernel quadrature Kalman filter. European Journal of Control, 71: 100805. https://doi.org/10.1016/j.ejcon.2023.100805

[15] Menner, M., Berntorp, K., Di Cairano, S. (2023). Automated controller calibration by Kalman filtering. IEEE Transactions on Control Systems Technology, 31(6): 2350-2364. https://doi.org/10.1109/TCST.2023.3254213

[16] Kong, N.J., Payne, J.J., Council, G., Johnson, A.M. (2021). The salted Kalman filter: Kalman filtering on hybrid dynamical systems. Automatica, 131: 109752. https://doi.org/10.1016/j.automatica.2021.109752

[17] Santos-León, J.C., Orive, R., Acosta, D., Acosta, L. (2021). The cubature Kalman filter revisited. Automatica, 127: 109541. https://doi.org/10.1016/j.automatica.2021.109541

[18] Chu, S., Qian, H., Sreeram, V. (2023). Robust filtering for spacecraft attitude estimation systems with multiplicative noises, unknown measurement disturbances and correlated noises. Advances in Space Research, 72(9): 3619-3630. https://doi.org/10.1016/j.asr.2023.07.008

[19] Jahanian, M., Ramezani, A., Moarefianpour, A., Shoorehdeli, M.A. (2022). Robust extended Kalman filtering for nonlinear systems in the presence of unknown inputs and correlated noises. Optimal Control Applications and Methods, 43(1): 243-256. https://doi.org/10.1002/oca.2786

[20] Ullah, I., Su, X., Zhang, X., Choi, D. (2020). Simultaneous localization and mapping based on Kalman filter and extended Kalman filter. Wireless Communications and Mobile Computing, 2020(1): 2138643. https://doi.org/10.1155/2020/2138643