Low Complexity Adaptive Post-Whitening DCT-LMS for Echo Cancellation


Laid Chergui*, Saad Bouguezel

Laboratory of SATIT, Department of Electrical Engineering, University Abbas Laghrour, Khenchela 40004, Algeria

CCNS Laboratory, Department of Electronics, Setif 1 University-Ferhat Abbas, Setif 19000, Algeria

Corresponding Author Email: chergui_laid@univ-khenchela.dz
Page: 2635-2643 | DOI: https://doi.org/10.18280/ts.410536

Received: 2 December 2023 | Revised: 3 April 2024 | Accepted: 28 August 2024 | Available online: 31 October 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

In this paper, we propose a low-complexity adaptive post-whitening discrete cosine least mean square (LC-POW-DCT-LMS) algorithm. The fundamental concept introduced in this new algorithm consists of designing an adaptive post-whitening process of first order by appropriately exploiting the fact that the adaptation process of the decorrelation coefficients becomes slow when the step size parameter takes relatively small values. For a given iteration, this new post-whitening requires the computation of only one transformation of length N, 5N+2 additions and 7N+5 multiplications, whereas the corresponding post-whitening of the existing POW-DCT-LMS algorithm requires the computation of two transformations each of length N, 4N+2 additions and 7N+4 multiplications. Therefore, the LC-POW-DCT-LMS algorithm substantially reduces the arithmetic complexity. Moreover, by considering echo cancellation application, we demonstrate that the performance of this new algorithm, for both mean square error (MSE) and normalized misalignment (MSI) convergence speed as well as for the reached steady state, is comparable to that of the existing POW-DCT-LMS algorithm.

Keywords: 

DCT-LMS algorithm, echo cancellation, LMS algorithm, post-whitening, system identification, low complexity

1. Introduction

In adaptive filtering, robust, simple, and computationally efficient algorithms such as the least mean square (LMS) are broadly used [1-5]. Nevertheless, their convergence performance is significantly affected when the input signal autocorrelation matrix is characterized by a significant spread of eigenvalues [6-9]. Several time-domain whitening techniques have been suggested in studies [10-13] to reduce such eigenvalue spreads. The time-domain whitening is achieved using the adaptive predictive decorrelation to decorrelate the input signal or jointly decorrelate the input signal with the error signal. This offers improvements in the behavior of the mean square error (MSE) convergence, which depends on the initial convergence speed and reached steady state [14-16].

The whitening of correlated input signals is also adopted in the transform-domain LMS (TDLMS) algorithm [17, 18], which is considered a good alternative to the time-domain LMS algorithm. This whitening is achieved by exploiting the decorrelation property of orthogonal transforms such as the discrete cosine transform (DCT), discrete Fourier transform (DFT) and discrete Hartley transform (DHT). As a result, a significant reduction in the eigenvalue spread of the autocorrelation matrix of the transformed and power-normalized signal has been obtained, and the MSE convergence has then been improved. However, the convergence performance of the existing TDLMS is still limited because the decorrelation capability of the transforms is limited [19].

To overcome this problem by reinforcing the decorrelation capability of the transforms, an efficient pre-whitening TDLMS (PW-TDLMS) algorithm has been designed in paper [19] by introducing a first-order pre-whitening filter at the input of the TDLMS. The role of this filter is to decrease the eigenvalue spread of the autocorrelation matrix corresponding to the resulting transformed and power-normalized signal, and subsequently improve the MSE convergence performance. As demonstrated in paper [19], the PW-TDLMS adaptive noise canceller exhibits good performance in speech denoising applications, where the processed noise is employed in both the filtering and adaptation steps. For system identification, however, the input signal characteristics need to be kept unchanged during the filtering step and can only be altered during the adaptation step. To reinforce the decorrelation ability of the employed orthogonal transforms and improve the MSE convergence of the TDLMS algorithm in this setting, a post-whitening TDLMS (POW-TDLMS) algorithm has been designed in paper [20]. This algorithm post-whitens the transformed signal and reduces the eigenvalue spread of the autocorrelation matrix of the post-whitened and power-normalized signal. Moreover, it has been shown in paper [20] that the MSE convergence of the POW-DCT-LMS algorithm is superior to that of the POW-DFT-LMS and POW-DHT-LMS algorithms. Additionally, past research has shown that the DCT outperforms the DFT and DHT for a range of practical signal models [5, 18, 19, 21]. Although the POW-DCT-LMS algorithm is very attractive for adaptive acoustic echo cancellation, it presents a high arithmetic complexity, primarily arising from the necessity of computing two transforms during each iteration. This crucial limitation becomes serious in real-time applications such as stereophonic and teleconferencing systems [22-24] and hence should be addressed.

In this paper, we develop a low-complexity POW-DCT-LMS (LC-POW-DCT-LMS) algorithm for adaptive acoustic echo cancellation by exploiting the fact that the adaptation process of the decorrelation coefficients slows down with relatively small values of the step size parameter. This leads to a new first-order adaptive post-whitening process that substantially reduces the complexity and permits our LC-POW-DCT-LMS algorithm to require only one transform per iteration.

The rest of this paper is organized as follows. Section 2 presents a brief review on the POW-DCT-LMS algorithm reported in paper [20]. In Section 3, we propose a LC-POW-DCT-LMS algorithm. The arithmetic complexity analysis of this algorithm is presented in Section 4 and its performance analysis is carried out in Section 5 in terms of eigenvalue spread, stability and performance measure. In Section 6, we apply our algorithm in the context of echo cancellation and perform a comparison with existing algorithms by considering both the normalized misalignment (MSI) and MSE convergence metrics. Some conclusions are included in Section 7.

2. POW-DCT-LMS Algorithm

In this section, we present a brief review of the POW-DCT-LMS algorithm developed in paper [20] for system identification purposes; it is illustrated in Figure 1. For each time sample $k$, the time-domain input vector $\mathrm{x}_k=\left[x_k, x_{k-1}, \ldots, x_{k-N+1}\right]^T$ of length $N$ is generated from the AR(1) process as $x_k=\rho x_{k-1}+\omega_k$, where $\rho$ is the correlation coefficient, which falls within the interval $(0,1)$, $\omega_k$ is a Gaussian white noise, and $(\cdot)^T$ denotes the transpose operation. The signal $\mathrm{x}_k$ is first transformed to $\mathrm{X}_k=\mathrm{T}_N \mathrm{x}_k=\left[X_k(0), X_k(1), \ldots, X_k(N-1)\right]^T$, with $\mathrm{T}_N$ being the DCT matrix of size $N \times N$. Then, the transformed signal $\mathrm{X}_k$ is post-whitened as:

$\widetilde{\mathrm{X}}_k=\mathrm{X}_k-\mathrm{T}_N \times \operatorname{diag}\left(a_{k-1}, a_{k-2}, \ldots, a_{k-N}\right) \times \mathrm{x}_{k-1}$          (1)

with $a_k, \  k=0, 1, …$ being the coefficients of a first order adaptive time-domain decorrelation filter computed as:

$a_k=a_{k-1}+\gamma \tilde{x}_k x_{k-1}$          (2)

where, $\tilde{x}_k$ is the decorrelated input signal obtained as:

$\tilde{x}_k=x_k-a_{k-1} x_{k-1}$          (3)

and γ is the adaptation step size, which takes values in the interval $\left(0, \frac{2}{\lambda_{\max }}\right)$ [25], with $\lambda_{\max }$ being the largest eigenvalue of the autocorrelation matrix of $x_k$.
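The first-order time-domain decorrelator of Eqs. (2)-(3) can be sketched as follows; this is a minimal illustration, and the function name, seed, signal length and the value $\gamma=0.001$ are our own choices, not taken from the paper:

```python
import numpy as np

def decorrelate_ar1(x, gamma=0.001):
    """First-order adaptive decorrelation filter of Eqs. (2)-(3):
    x_tilde[k] = x[k] - a*x[k-1], then a <- a + gamma*x_tilde[k]*x[k-1]."""
    a = 0.0
    x_prev = 0.0
    x_tilde = np.zeros_like(x)
    for k in range(len(x)):
        x_tilde[k] = x[k] - a * x_prev      # Eq. (3)
        a += gamma * x_tilde[k] * x_prev    # Eq. (2)
        x_prev = x[k]
    return x_tilde, a

# AR(1) input x_k = rho*x_{k-1} + w_k as in Section 2 (seed and length illustrative)
rng = np.random.default_rng(0)
rho, K = 0.9, 20000
w = rng.standard_normal(K)
x = np.zeros(K)
for k in range(1, K):
    x[k] = rho * x[k - 1] + w[k]

x_tilde, a_final = decorrelate_ar1(x)
# a_final settles near the optimal first-order predictor coefficient (rho for AR(1)),
# and x_tilde becomes nearly white (lag-1 autocorrelation close to zero).
```

With $\gamma$ this small, $a_k$ drifts slowly, which is precisely the property exploited in Section 3.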

Figure 1. System identification based on the POW-DCT-LMS

Let us now describe the adaptive filtering process of the POW-DCT-LMS algorithm. The adaptive filter weight vector $\mathrm{w}_k=\left[w_k(0), w_k(1), \ldots, w_k(N-1)\right]^T$ is adapted using the post-whitened real-valued signal $\widetilde{\mathrm{X}}_k$ given by (1) as:

$\mathrm{w}_{k+1}=\mathrm{w}_k+\mu e_k \widetilde{\mathrm{P}}_k^{-1} \widetilde{\mathrm{X}}_k$          (4)

with μ being the adaptation step size parameter controlling the algorithm convergence and $e_k$ the error, which can be computed as:

$e_k=d_k-\sum_{i=0}^{N-1} w_k(i) X_k(i)$          (5)

where, $d_k=\mathrm{x}_k * \mathrm{~h}+n_k$ is the desired signal, $\mathrm{h}=$ $\left[h_0, h_1, \ldots, h_{N-1}\right]^T$ is the filter system to be found, $n_k$ is the additive white Gaussian noise and (*) denotes the convolution operation. In (4), the vector $\widetilde{\mathrm{P}}_k$ of length $N$ allows the power normalization of the post whitened signal $\widetilde{\mathrm{X}}_k$, and can be obtained as:

$\widetilde{\mathrm{P}}_k=\operatorname{diag}\left(\tilde{\sigma}_k^2(0), \tilde{\sigma}_k^2(1), \ldots, \tilde{\sigma}_k^2(N-1)\right)$          (6)

where, the elements $\tilde{\sigma}_k^2(i)$, with $i=0,1, \ldots N-1$, represent the power estimates of $\widetilde{\mathrm{X}}_k$ and are computed by:

$\tilde{\sigma}_k^2(i)=\beta \tilde{\sigma}_{k-1}^2(i)+(1-\beta)\left|\tilde{X}_k(i)\right|^2$          (7)

with $\beta$ being the smoothing factor having values from the interval $(0,1)$.
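The complete POW-DCT-LMS iteration of Eqs. (1)-(7) can be sketched in NumPy/SciPy as below. This is a minimal reading of the reviewed algorithm, not the authors' implementation: the function name, seed, step sizes, the unit initialization of the power estimates and the small regularization `eps` added to them are our own assumptions.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.signal import lfilter

def pow_dct_lms(x, d, N, mu=0.01, gamma=0.001, beta=0.99, eps=1e-6):
    w = np.zeros(N)          # adaptive weight vector w_k
    sigma2 = np.ones(N)      # power estimates of X_tilde, Eq. (7)
    a_hist = np.zeros(N)     # decorrelation coefficients [a_{k-1}, ..., a_{k-N}]
    a = 0.0
    xbuf = np.zeros(N)       # [x_k, x_{k-1}, ..., x_{k-N+1}]
    e = np.zeros(len(x))
    for k in range(len(x)):
        xbuf_prev = xbuf.copy()                        # x_{k-1}
        xbuf = np.concatenate(([x[k]], xbuf[:-1]))     # x_k
        X = dct(xbuf, norm='ortho')                    # X_k = T_N x_k
        # Eq. (1): post-whitening needs a second length-N transform
        X_tilde = X - dct(a_hist * xbuf_prev, norm='ortho')
        e[k] = d[k] - w @ X                            # Eq. (5): filtering uses X_k
        sigma2 = beta * sigma2 + (1 - beta) * X_tilde ** 2    # Eq. (7)
        w = w + mu * e[k] * X_tilde / (sigma2 + eps)          # Eq. (4)
        # first-order time-domain decorrelator, Eqs. (2)-(3)
        x_dec = x[k] - a * xbuf_prev[0]
        a += gamma * x_dec * xbuf_prev[0]
        a_hist = np.concatenate(([a], a_hist[:-1]))
    return w, e

# system identification test: recover a random length-N impulse response h
rng = np.random.default_rng(0)
N, K, rho = 16, 40000, 0.9
x = lfilter([1.0], [1.0, -rho], rng.standard_normal(K))    # AR(1) input
h = rng.standard_normal(N)
d = lfilter(h, [1.0], x) + 1e-3 * rng.standard_normal(K)   # desired signal
w, e = pow_dct_lms(x, d, N)
h_est = idct(w, norm='ortho')   # w converges toward T_N h, so invert the DCT
```

Since the DCT used here is orthonormal, the converged weights satisfy $\mathrm{w} \approx \mathrm{T}_N \mathrm{h}$, so the inverse DCT recovers the time-domain system.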

3. Proposed LC-POW-DCT-LMS Algorithm

The POW-DCT-LMS algorithm reviewed in Section 2 consists of post-whitening and adaptive filtering. The aim of this section is to design a low-complexity POW-DCT-LMS (LC-POW-DCT-LMS) algorithm by developing a low-complexity first-order adaptive post-whitening. For this purpose, let us start by observing that the adaptation process of the decorrelation coefficients $a_k$ in (2) becomes slow when the step size parameter $\gamma$ takes relatively small values.

We assume that the value taken by the adaptation step size $\gamma$ within the interval suggested in Section 2 is small enough to slow down the process of adapting the decorrelation coefficients defined by (2). Therefore, at each iteration $k$, the term $\gamma \tilde{x}_k x_{k-1}$ does not introduce a significant alteration to the values taken by $a_k$. Thus, we can assume that the values $\left(a_{k-1}, a_{k-2}, \ldots, a_{k-N}\right)$ taken over a finite duration $N$ are nearly equal.

With this assumption in mind, the vectors $\left[a_{k-1}, a_{k-2}, \ldots, a_{k-N}\right]^T$ and $\left[\bar{a}_k, \bar{a}_k, \ldots, \bar{a}_k\right]^T$ are asymptotically equal, where $\bar{a}_k$ is the mean value of the vector $\left[a_{k-1}, a_{k-2}, \ldots, a_{k-N}\right]^T$, i.e.

$\bar{a}_k=\frac{1}{N} \sum_{i=1}^N a_{k-i}$           (8)

By replacing the vector $\left[a_{k-1}, a_{k-2}, \ldots, a_{k-N}\right]^T$ in (1) by the approximated vector $\left[\bar{a}_k, \bar{a}_k, \ldots, \bar{a}_k\right]^T$, the post-whitening given by (1) can be rewritten as:

$\widetilde{\mathrm{X}}_k=\mathrm{X}_k-\mathrm{T}_N \times \operatorname{diag}\left(\bar{a}_k, \bar{a}_k, \ldots, \bar{a}_k\right) \times \mathrm{x}_{k-1}$           (9)

The term $\mathrm{T}_N \times \operatorname{diag}\left(\bar{a}_k, \bar{a}_k, \ldots, \bar{a}_k\right) \times \mathrm{x}_{k-1}$ can be rearranged as $\bar{a}_k \times \mathrm{T}_N \times \mathrm{x}_{k-1}$, and then, (9) becomes $\widetilde{\mathrm{X}}_k=$ $\mathrm{X}_k-\bar{a}_k \times \mathrm{T}_N \times \mathrm{x}_{k-1}$.

Consequently,

$\widetilde{\mathrm{X}}_k=\mathrm{X}_k-\bar{a}_k \times \mathrm{X}_{k-1}$           (10)

where, $\mathrm{X}_{k-1}=\mathrm{T}_N \times \mathrm{x}_{k-1}$.

Finally, a simplified form of the post-whitening process is obtained in (10) and illustrated in Figure 2.
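The simplification can be checked numerically: when the $N$ most recent decorrelation coefficients are equal, Eq. (10) reproduces Eq. (1) exactly, by linearity of $\mathrm{T}_N$. The sketch below is illustrative; the coefficient value 0.85 and the seed are our own assumptions:

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
N = 16
x_prev = rng.standard_normal(N)                                   # x_{k-1}
x_curr = np.concatenate(([rng.standard_normal()], x_prev[:-1]))   # shifted buffer x_k

X_curr = dct(x_curr, norm='ortho')    # X_k = T_N x_k
X_prev = dct(x_prev, norm='ortho')    # X_{k-1}, already available from iteration k-1

a_hist = np.full(N, 0.85)             # [a_{k-1}, ..., a_{k-N}], assumed (nearly) equal
a_bar = a_hist.mean()                 # Eq. (8)

X_tilde_pow = X_curr - dct(a_hist * x_prev, norm='ortho')  # Eq. (1): extra transform
X_tilde_lc = X_curr - a_bar * X_prev                       # Eq. (10): no extra transform
```

When the coefficients are only nearly equal, the two quantities differ by a small residual, which is the approximation error accepted by the LC-POW design.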

Now, we appropriately replace the post-whitening process of the POW-DCT-LMS algorithm, which is reviewed in Section 2, by the suggested post-whitening process obtained in Figure 2 to design the LC-POW-DCT-LMS algorithm illustrated in Figure 3.

Figure 2. Suggested low-complexity adaptive post-whitening

Figure 3. Suggested low-complexity adaptive post-whitening DCT-LMS algorithm

4. Arithmetic Complexity

Here, we compare the complexities of the conventional DCT-LMS, existing POW-DCT-LMS and new LC-POW-DCT-LMS algorithms. The post-whitening given by (1) requires, for one iteration, N multiplications, one transform and N subtractions. In contrast, the low-complexity post-whitening given by (10) requires only N multiplications and N subtractions. Therefore, it saves the entire complexity of the length-N transform $\mathrm{T}_N$ at the expense of the $N-1$ additions and one multiplication required for computing $\bar{a}_k$ in (8). It should be noted that $\mathrm{X}_{k-1}$ in (10) is computed during the time sample $k-1$ and hence requires no extra computation at the time sample $k$.

The conventional DCT-LMS algorithm requires the computation of a single transformation, together with 6N+1 multiplications and 3N additions per iteration [19, 26]. Consequently, the LC-POW-DCT-LMS algorithm, as detailed earlier, requires the computation of only one transformation of length N, 5N+2 additions, and 7N+5 multiplications for a given iteration. In contrast, the existing POW-DCT-LMS algorithm necessitates the computation of two transformations, each of length N, 4N+2 additions, and 7N+4 multiplications. Hence, the new algorithm achieves substantial savings in the arithmetic complexity.

Therefore, by considering the arithmetic complexity of the fast algorithm devised in paper [27] to implement the DCT, Table 1 shows the complexities of the conventional DCT-LMS [19], existing POW-DCT-LMS [20] and new LC-POW-DCT-LMS algorithms, for a single iteration with various values of N. In this table, $M_N$ and $A_N$ denote, respectively, the total number of additions and multiplications.

Table 1. Arithmetic complexity of different algorithms for various values of N

N    | DCT-LMS ($M_N$ / $A_N$) | POW-DCT-LMS ($M_N$ / $A_N$) | New LC-POW-DCT-LMS ($M_N$ / $A_N$)
16   | 129 / 129               | 180 / 228                   | 149 / 163
32   | 273 / 305               | 388 / 548                   | 309 / 371
64   | 577 / 705               | 836 / 1284                  | 645 / 835
128  | 1217 / 1601             | 1796 / 2948                 | 1349 / 1859
256  | 2561 / 3585             | 3844 / 6660                 | 2821 / 4099
512  | 5377 / 7937             | 8196 / 14852                | 5893 / 8963
1024 | 11265 / 17409           | 17412 / 32772               | 12293 / 19459
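The per-iteration counts of Table 1 can be reproduced programmatically. The fast-DCT cost used below, $(N/2)\log_2 N$ multiplications and $(3N/2)\log_2 N - N + 1$ additions, is our reading of the complexity of the fast algorithm of paper [27]; under this assumption the sketch reproduces every entry of the table:

```python
import math

def dct_cost(N):
    """Assumed cost of the fast length-N DCT (power-of-two N):
    (N/2)log2 N multiplications, (3N/2)log2 N - N + 1 additions."""
    n = int(math.log2(N))
    return (N // 2) * n, (3 * N // 2) * n - N + 1

def per_iteration(N):
    """(multiplications, additions) per iteration for each algorithm,
    using the per-iteration overheads stated in Section 4."""
    m_t, a_t = dct_cost(N)
    return {
        "DCT-LMS":        (m_t + 6 * N + 1,       a_t + 3 * N),
        "POW-DCT-LMS":    (2 * m_t + 7 * N + 4,   2 * a_t + 4 * N + 2),
        "LC-POW-DCT-LMS": (m_t + 7 * N + 5,       a_t + 5 * N + 2),
    }

# e.g. per_iteration(16) gives DCT-LMS (129, 129), POW-DCT-LMS (180, 228),
# LC-POW-DCT-LMS (149, 163), matching the first row of Table 1.
```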

5. Performance Analysis of the LC-POW-DCT-LMS

In this section, we analyze the eigenvalue spread, stability and performance measures of the LC-POW-DCT-LMS.

5.1 Eigenvalue spread analysis

As mentioned in Section 3, the value of the step size parameter $\gamma$ has a direct effect on the adaptation process of the decorrelation coefficients $a_k$ given by (2) and used in (8) to calculate the mean value $\bar{a}_k$ required in the new low-complexity post-whitening given by (9).

In addition, the spread of the eigenvalues of the autocorrelation matrix $\tilde{S}_N$ obtained from the post-whitened signal and normalized by the power as $\widetilde{\mathrm{P}}_k^{-1} \widetilde{\mathrm{X}}_k$ has a significant influence on the convergence of the algorithm.

The autocorrelation matrix $\widetilde{\mathrm{B}}_N$ of $\widetilde{\mathrm{X}}_k$ after performing post whitening using the LC-POW process given by (10) can be computed as:

$\widetilde{\mathrm{B}}_N=E\left\{\widetilde{\mathrm{X}}_k \widetilde{\mathrm{X}}_k^H\right\}$          (11)

where, ( )H denotes the Hermitian transpose operation.

After transformation and power normalization, the autocorrelation matrix is obtained as in studies [6, 19, 20] by normalizing each element $\tilde{X}_k(i)$, $i=0,1, \ldots, N-1$, of $\widetilde{\mathrm{X}}_k$ by the square root of its power, where the powers of the elements $\tilde{X}_k(i)$ lie on the main diagonal of $\widetilde{\mathrm{B}}_N$:

$\widetilde{\mathrm{S}}_N=\left(\operatorname{diag} \widetilde{\mathrm{B}}_N\right)^{-1 / 2} \widetilde{\mathrm{B}}_N\left(\operatorname{diag} \widetilde{\mathrm{B}}_N\right)^{-1 / 2}$          (12)

The eigenvalue spread of $\widetilde{\mathrm{S}}_N$ is computed numerically for $T_N$ being the DCT. Therefore, it is of great interest to find an interval for appropriate values of $\gamma$ ensuring a small eigenvalue spread.
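This numerical computation can be sketched as follows for the real-valued case. The closed-form AR(1) autocorrelation and the converged mean coefficient value $\bar{a}=0.85$ are illustrative assumptions of ours, not values from the paper:

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import toeplitz, eigvalsh

def normalized_spread(R, N):
    """Eigenvalue spread of S_N in Eq. (12), built from a time-domain
    autocorrelation matrix R via B_N = T_N R T_N^T (Eq. (11), real case)."""
    T = dct(np.eye(N), axis=0, norm='ortho')   # orthonormal DCT matrix T_N
    B = T @ R @ T.T
    d = np.sqrt(np.diag(B))
    S = B / np.outer(d, d)                     # Eq. (12)
    lam = eigvalsh(S)                          # ascending eigenvalues
    return lam[-1] / lam[0]

N, rho, a_bar = 16, 0.9, 0.85
rx = lambda l: rho ** np.abs(l) / (1.0 - rho ** 2)   # AR(1) autocorrelation
lags = np.arange(N)
Rx = toeplitz(rx(lags))
# autocorrelation of the LC post-whitened time signal y_k = x_k - a_bar*x_{k-1}
ry = (1 + a_bar ** 2) * rx(lags) - a_bar * (rx(lags - 1) + rx(lags + 1))
Ry = toeplitz(ry)

spread_dct = normalized_spread(Rx, N)   # DCT-LMS alone
spread_lc = normalized_spread(Ry, N)    # with the LC post-whitening of Eq. (10)
```

With $\bar{a}$ close to $\rho$, the post-whitened spectrum is much flatter, so the spread after DCT and power normalization drops further, consistent with Figures 4-6.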

We now analyze numerically, for the conventional DCT-LMS, existing POW-DCT-LMS and new LC-POW-DCT-LMS algorithms, the eigenvalue spread of the autocorrelation matrix $\widetilde{\mathrm{S}}_N$ for different values of $\gamma$ in the interval $[0,0.1]$, various correlation coefficient values $\rho=0.9$, $0.7$, and $0.5$, and filter length $N=16$. The results of this analysis are depicted in Figures 4-6.

It is clear from these figures that the DCT-LMS algorithm presents the worst performance, whereas the eigenvalue spreads obtained using the LC-POW-DCT-LMS algorithm are comparable to those of the POW-DCT-LMS, with a slight increase in the case of relatively high values of $\gamma$. Therefore, the interval for appropriate values of $\gamma$ is $]0, 0.002[$.

Figure 4. Eigenvalue spreads of the DCT-LMS, POW-DCT-LMS and LC-POW-DCT-LMS for $\rho=0.9, N=16$ and different values of the step size $\gamma$

Figure 5. Eigenvalue spreads of the DCT-LMS, POW-DCT-LMS and LC-POW-DCT-LMS for $\rho=0.7, N=16$ and different values of the step size $\gamma$

Figure 6. Eigenvalue spreads of the DCT-LMS, POW-DCT-LMS and LC-POW-DCT-LMS for $\rho=0.5, N=16$ and different values of the step size $\gamma$

5.2 Stability analysis

Similarly to the analysis conducted for the existing POW-DCT-LMS algorithm [20], we analyze in this section the mean convergence of our LC-POW-DCT-LMS algorithm.

It is obvious from (4) that the power normalization by $\widetilde{\mathrm{P}}_k^{-1}$ affects the step size parameter and enhances the MSE convergence, but it complicates the stability analysis. To simplify the analysis, we incorporate the power normalization of the post-whitened vector $\widetilde{\mathrm{X}}_k$ into the weight vector by multiplying (4) by $\tilde{\mathrm{P}}_k^{1 / 2}$:

$\tilde{\mathrm{P}}_k^{1 / 2} \mathrm{w}_{k+1}=\widetilde{\mathrm{P}}_k^{1 / 2} \mathrm{w}_k+\mu e_k \tilde{\mathrm{P}}_k^{-1 / 2} \widetilde{\mathrm{X}}_k$         (13)

By assuming that $\widetilde{\mathrm{X}}_k$ is a stationary process with zero mean, $\tilde{\mathrm{P}}_k^{1 / 2} \approx \tilde{\mathrm{P}}_{k+1}^{1 / 2}$. This allows (13) to be written as $\tilde{\mathrm{P}}_{k+1}^{1 / 2} \mathrm{w}_{k+1}=\tilde{\mathrm{P}}_k^{1 / 2} \mathrm{w}_k+\mu e_k \tilde{\mathrm{P}}_k^{-1 / 2} \widetilde{\mathrm{X}}_k$, which then becomes:

$\widehat{\mathrm{w}}_{k+1}=\widehat{\mathrm{w}}_k+\mu e_k \mathrm{~V}_k$         (14)

where, $\widehat{\mathrm{w}}_k=\tilde{\mathrm{P}}_k^{1 / 2} \mathrm{w}_k$ and $\mathrm{V}_k=\tilde{\mathrm{P}}_k^{-1 / 2} \widetilde{\mathrm{X}}_k$.

In stationary or mildly non-stationary environments, the weight vector $\widehat{\mathrm{w}}_k$ in (14) converges to $\tilde{\mathrm{P}}^{1 / 2} \mathrm{T}_N \mathrm{W}^o$, where $\tilde{\mathrm{P}}=\mathrm{E}\left\{\tilde{\mathrm{P}}_k\right\}$. This implies that (4) and (14) are equivalent forms.

We now subtract the quantity $\tilde{\mathrm{P}}^{1 / 2} \mathrm{T}_N \mathrm{W}^o$, obtained by transforming and power-normalizing the Wiener filter $\mathrm{W}^o$, from (14), and then reformulate the result as:

$\widetilde{\mathrm{w}}_{k+1}=\widetilde{\mathrm{w}}_k+\mu e_k \mathrm{~V}_k$         (15)

where, $\widetilde{\mathrm{w}}_k=\widehat{\mathrm{w}}_k-\widetilde{\mathrm{P}}^{1 / 2} \mathrm{T}_N \mathrm{W}^o$ is the weight-error vector.

By assuming the convergence of the decorrelation filter defined by (2) and (3), the decorrelation coefficients $a_k$ converge to the optimal scalar coefficient $a^o$ belonging to the range ]0, 1[. Hence, (8) becomes:

$\bar{a}_k=\frac{1}{N} \sum_{i=1}^N a^o=a^o$         (16)

The approximated vector $\left[\bar{a}_k, \bar{a}_k, \ldots, \bar{a}_k\right]^T$ of length $N$ tends to $\left[a^o, a^o, \ldots, a^o\right]^T$, and (10), which corresponds to the low complexity post whitening, reduces to:

$\widetilde{\mathrm{X}}_k=\mathrm{X}_k-a^o \mathrm{X}_{k-1}$         (17)

The expression for $e_k$ in terms of $\widetilde{\mathrm{X}}_k$ is given by [20]:

$e_k=d_k-\left(\mathrm{w}_k\right)^H\left(\alpha \widetilde{\mathrm{X}}_k-\varphi w_k\right)$         (18)

where, $w_k=\mathrm{T}_N \omega_k$ and $\omega_k=\left[\omega_k, \omega_{k-1}, \ldots, \omega_{k-N+1}\right]^T$ is a zero-mean white Gaussian noise vector, $\alpha=\frac{\rho}{\rho-a^o}$, $\varphi=\frac{a^o}{\rho-a^o}$, $0<\rho<1$, $\rho$ is the correlation coefficient and $a^o \neq \rho$. The expression of $e_k$ given by (18) in terms of $\widetilde{\mathrm{w}}_k$ and $\mathrm{V}_k$ is:

$e_k=e^o-\left(\alpha \mathrm{V}_k-\varphi \widetilde{\mathrm{P}}_k^{-1 / 2} w_k\right)^H \widetilde{\mathrm{w}}_k$         (19)

where, $e^o=d_k-\left(\alpha \mathrm{V}_k-\varphi \tilde{\mathrm{P}}_k^{-1 / 2} w_k\right)^H \tilde{\mathrm{P}}^{1 / 2} \mathrm{T}_N \mathrm{W}^o$ is the error when the filter is optimum.

The substitution of (19) in (15) leads to:

$\widetilde{\mathrm{w}}_{k+1}=\widetilde{\mathrm{w}}_k+\mu\left[\mathrm{V}_k e^o-\left(\alpha \mathrm{V}_k\left(\mathrm{V}_k\right)^H-\varphi \mathrm{V}_k\left(w_k\right)^H \tilde{\mathrm{P}}_k^{-1 / 2}\right) \widetilde{\mathrm{w}}_k\right]$         (20)

At time $k$, it is assumed that there is independence between the coefficients of $\widetilde{\mathrm{w}}_k, \mathrm{~V}_k, w_k$, and $\tilde{\mathrm{P}}_k{ }^{-1 / 2}$, which is justified by considering small values of $\mu$. The computation of the expectation of (20), leads to:

$\mathrm{E}\left\{\widetilde{\mathrm{w}}_{k+1}\right\}=\left[\mathrm{I}-\mu \alpha \tilde{\mathrm{S}}_N\right] \mathrm{E}\left\{\widetilde{\mathrm{w}}_k\right\}+\mu \varphi \mathrm{E}\left\{\mathrm{~V}_k\left(\mathrm{w}_k\right)^H \tilde{\mathrm{P}}_k^{-1 / 2}\right\} \mathrm{E}\left\{\widetilde{\mathrm{w}}_k\right\}$         (21)

where, $\mathrm{E}\left\{\mathrm{V}_k e^o\right\}=0$, and $\tilde{\mathrm{S}}_N=E\left\{\mathrm{V}_k\left(\mathrm{V}_k\right)^H\right\}$.

By assuming that the algorithm is convergent and noting from the definitions of $\alpha$ and $\varphi$ that $\alpha-\varphi=1$, the cross term in (21) can be absorbed into the first term by approximating $\mathrm{E}\left\{\mathrm{V}_k\left(w_k\right)^H \tilde{\mathrm{P}}_k^{-1 / 2}\right\}$ by $\tilde{\mathrm{S}}_N$, and (21) becomes:

$\mathrm{E}\left\{\widetilde{\mathrm{w}}_{k+1}\right\}=\left[\mathrm{I}-\mu \widetilde{\mathrm{S}}_N\right] \mathrm{E}\left\{\widetilde{\mathrm{w}}_k\right\}$         (22)

From (22), it is clear that the equivalent forms (4) and (14) are stable in the mean-square sense under the following necessary condition:

$\left|1-3 \mu \operatorname{tr}\left\{\tilde{\mathrm{S}}_N\right\}\right|<1$         (23)

Therefore, we are able to express:

$0<\mu<\frac{2}{3 \operatorname{tr}\left\{\tilde{\mathrm{~S}}_N\right\}}$         (24)

where, $\operatorname{tr}(\cdot)$ denotes the trace of matrix and

$\operatorname{tr}\left\{\tilde{\mathrm{S}}_N\right\}=N$         (25)

Consequently, the stability criterion (24) becomes:

$0<\mu<\frac{2}{3 N}$         (26)

where, $N$ is the length of the filter.

Therefore, the stability condition of the LC-POW-DCT-LMS algorithm is similar to the stability conditions of the existing POW-DCT-LMS and conventional DCT-LMS algorithms.
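The bound (26) is trivial to evaluate; the tiny helper below (our own naming) makes explicit that, because the power normalization places ones on the main diagonal of $\tilde{\mathrm{S}}_N$ so that $\operatorname{tr}\{\tilde{\mathrm{S}}_N\}=N$, the admissible step-size range depends only on the filter length:

```python
def step_size_bound(N):
    """Upper limit of the mean-square stability condition (26): mu < 2/(3N),
    since tr{S_N} = N after power normalization (Eq. (25))."""
    return 2.0 / (3.0 * N)

# For the filter length N = 128 used in Section 6, mu must stay below 2/384.
```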

5.3 Performance measure

The performance assessment of different algorithms is conducted in decibel (dB), utilizing different performance metrics, namely: the MSE, the normalized misalignment (MSI), the steady state MSE also known as the excess MSE (EMSE) and the steady-state excess MSE ($\operatorname{EMSE}_{s s}$).

At each time sample $k$, the ${MSE}_k$ is defined as:

$\operatorname{MSE}_k(\mathrm{dB})=10 \log _{10}\left(E\left\{e_k^2\right\}\right)$         (27)

Here, $e_k$ represents the error measured at the time sample k.

The normalized misalignment ${MSI}_k$ is defined as:

$\operatorname{MSI}_k(\mathrm{dB})=20 \log _{10}\left(\frac{\left\|\mathrm{w}_k-\mathrm{h}\right\|_2}{\|\mathrm{h}\|_2}\right)$         (28)

where, $w_k$ is the weight vector estimated at the time sample $k$, $h$ is the echo channel impulse response and $\|\cdot\|_2$ refers to the $\ell_2 $ norm.

The $\mathrm{EMSE}_k$ is defined as [19]:

$\operatorname{EMSE}_k=\frac{1}{M} \sum_{n=0}^{M-1}\left|e_{k-n}\right|^2$         (29)

$M$ denotes the number of samples utilized in estimating the $\mathrm{EMSE}_k$. The $\mathrm{EMSE}_{s s}$ is determined by averaging the $\mathrm{EMSE}_k$ values after the algorithm reaches the steady state, and has the following definition [19]:

$\operatorname{EMSE}_{s s}=\left(\frac{1}{K-S}\right) \sum_{k=S}^{K-1} \mathrm{EMSE}_k$         (30)

with K representing the total number of samples and S is the number of samples that the algorithm needs to attain the steady-state.
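These metrics can be sketched compactly as below. The helper names are ours, and the window alignment in `emse`/`emse_ss` is one reasonable reading of Eqs. (29)-(30):

```python
import numpy as np

def msi_db(w, h):
    """Normalized misalignment of Eq. (28), in dB."""
    return 20.0 * np.log10(np.linalg.norm(w - h) / np.linalg.norm(h))

def emse(e, M):
    """Sliding-average EMSE_k of Eq. (29) over windows of M error samples."""
    return np.convolve(np.asarray(e) ** 2, np.ones(M) / M, mode='valid')

def emse_ss(e, M, S):
    """Steady-state EMSE of Eq. (30): mean of EMSE_k from sample S onward."""
    return emse(e, M)[S:].mean()
```

For example, a weight vector at 90% of the true response, `msi_db(0.9*h, h)`, yields -20 dB, the steady-state level quoted in Section 6.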

6. Simulations and Discussions

In this section, the performance of the suggested LC-POW-DCT-LMS algorithm is evaluated and compared to that of the POW-DCT-LMS and DCT-LMS algorithms in the context of network echo cancellation. The echo path impulse response of length $L=128$ is shown in Figure 7, corresponding to the 4th impulse response of ITU-T Recommendation G.168 [28, 29]. Different simulations are performed for the AR(1) process described in Section 2, with $\rho=0.9$, and for the speech signal depicted in Figure 8. The sampling frequency employed is 8 kHz and the filter length is $N=128$. For the AR(1) process, the chosen step size and regularization parameter are $\mu=0.0008$ and $\varepsilon=0.00001$, respectively, while for the speech signal, the values utilized are $\mu=0.0003$ and $\varepsilon=0.0005$. The POW and the LC-POW processes are adapted using the step size $\gamma=0.001$. The simulation results, obtained at a signal-to-noise ratio (SNR) of 20 dB and shown in Figures 9-12, depict the convergence of the various algorithms for diverse input signals, showcasing both MSE and MSI. It should be noted that the simulation results corresponding to the AR(1) input signal are obtained by averaging 200 different realizations, while those corresponding to the speech signal are obtained for the first realization of white Gaussian noise.

These figures show clearly that the LC-POW-DCT-LMS algorithm exhibits performance, in terms of MSE and MSI convergence, analogous to that of the POW-DCT-LMS, and is superior to that of the DCT-LMS in both the convergence rate and reaching steady state.

Figure 7. Echo path impulse response [28, 29]

Figure 8. Speech signal

Figure 9. MSE convergence of the DCT-LMS, POW-DCT-LMS and LC-POW-DCT-LMS obtained for AR (1) input signal, N=128 and SNR = 20dB

Figure 10. MSI convergence of the DCT-LMS, POW-DCT-LMS and LC-POW-DCT-LMS obtained for AR (1) input signal, N=128 and SNR = 20dB

Figure 11. MSE convergence of the DCT-LMS, POW-DCT-LMS and LC-POW-DCT-LMS obtained for speech input signal, N=128 and SNR = 20dB

Figure 12. MSI convergence of the DCT-LMS, POW-DCT-LMS and LC-POW-DCT-LMS obtained for speech input signal, N=128 and SNR = 20dB

Table 2 illustrates the correlation between the obtained results and the theoretical analysis of arithmetic complexity, showcasing the arithmetic details necessary to achieve convergence for the three algorithms.

In order to effectively compare the results obtained for the different algorithms, we use the $\mathrm{EMSE}_k$ and the $\mathrm{EMSE}_{s s}$ performance measures. Figures 9 and 11 show the MSE convergence of the DCT-LMS, POW-DCT-LMS, and LC-POW-DCT-LMS algorithms obtained for the AR(1) process and the speech input signal, respectively, with a filter length of $N=128$ and an SNR of 20 dB.

Table 2. Correlation between the obtained results and the theoretical analysis of arithmetic complexity

Algorithm                | Iterations Required for Convergence | Multiplications to Converge ($M_{128} \times$ Iterations) | Additions to Converge ($A_{128} \times$ Iterations)
DCT-LMS                  | 83235                               | 101296995                                                 | 133259235
POW-DCT-LMS              | 70249                               | 126167204                                                 | 207094052
Suggested LC-POW-DCT-LMS | 70249                               | 94765901                                                  | 130592891

Computational savings of the LC-POW-DCT-LMS relative to the POW-DCT-LMS: 24.89% in multiplications (M) and 36.94% in additions (A), corresponding to a time saving of 40.8%.

It is clear from Figure 9 that the new LC-POW-DCT-LMS and existing POW-DCT-LMS algorithms achieve a steady-state MSE level of -20dB after 4202 iterations, while the conventional DCT-LMS algorithm achieves the same level of steady-state MSE after 6020 iterations.

It should be noted that the curves in Figure 11, which correspond to a speech input signal, are obtained by smoothing the corresponding quadratic errors of each algorithm for each iteration, using the relationship $\hat{e}_k^2=\delta \hat{e}_{k-1}^2+(1-\delta) e_k^2$, where $\hat{e}_k^2$ is the obtained smoothed square error, $e_k$ is the estimated error described in (5), and $\delta$ is the smoothing factor, set to 0.9999. This figure shows clearly that the new LC-POW-DCT-LMS and existing POW-DCT-LMS algorithms converge after 70249 iterations and achieve an $\mathrm{EMSE}_{s s}$ level of -20.15 dB, while the conventional DCT-LMS algorithm converges after 83235 iterations and achieves an $\mathrm{EMSE}_{s s}$ level of -20 dB.
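The recursive smoothing used for the speech-signal curves can be sketched as follows; the function name is illustrative and the initial condition $\hat{e}_{-1}^2=0$ is our own assumption:

```python
import numpy as np

def smooth_squared_error(e, delta=0.9999):
    """Recursive smoothing of the squared error used for Figure 11:
    e_hat2[k] = delta*e_hat2[k-1] + (1-delta)*e[k]**2, starting from 0."""
    e_hat2 = np.zeros(len(e))
    acc = 0.0
    for k, ek in enumerate(e):
        acc = delta * acc + (1.0 - delta) * ek ** 2
        e_hat2[k] = acc
    return e_hat2
```

With $\delta=0.9999$ the effective averaging window is on the order of $1/(1-\delta)=10^4$ samples, which is why the plotted curves are so smooth.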

7. Conclusion

An efficient LC-POW-DCT-LMS algorithm has been developed in this paper by designing a novel first-order adaptive post-whitening procedure. This procedure has been established by appropriately exploiting the fact that the adaptation process of the decorrelation coefficients becomes slow when the step size parameter takes relatively small values. This has allowed us to use only the mean value of the decorrelation coefficients in the post-whitening process. As a result, this new post-whitening requires for a given iteration only the computation of one transformation of length N, 5N+2 additions and 7N+5 multiplications, whereas the corresponding post-whitening of the existing POW-DCT-LMS algorithm requires the computation of two transformations each of length N, 4N+2 additions and 7N+4 multiplications. The new LC-POW-DCT-LMS algorithm achieves savings of about 40% in computational time compared to the existing POW-DCT-LMS algorithm, making it suitable and highly desirable for real-time applications such as adaptive acoustic echo cancellation employed in modern stereophonic and teleconferencing systems. Furthermore, we have carried out a performance analysis of our algorithm by considering the echo cancellation application to demonstrate that the significant reduction obtained in the arithmetic complexity does not affect the performance of the algorithm. Indeed, the simulation results have shown clearly that the convergence speed and reached steady state obtained by the new LC-POW-DCT-LMS algorithm are similar to those obtained by the existing POW-DCT-LMS algorithm.

References

[1] Shen, Z.J., Wang, R.G. (2019). Design and application of an improved least mean square algorithm for adaptive filtering. European Journal of Electrical Engineering, 21(3): 303-307. https://doi.org/10.18280/ejee.210307

[2] Ramdane, M.A., Benallal, A., Maamoun, M., Hassani, I. (2022). Partial update simplified fast transversal filter algorithms for acoustic echo cancellation. Traitement du Signal, 39(1): 11-19. https://doi.org/10.18280/ts.390102

[3] Navitha, C., Sivani, K., Reddy, A. (2017). Adaptive wavelet transform based rake receiver for ultra-wideband systems. Advances in Modelling and Analysis B, 60(2): 355-369. https://doi.org/10.18280/ama_b.600207

[4] Shynk, J.J. (1992). Frequency-domain and multirate adaptive filtering. IEEE Signal Processing Magazine, 9(1): 14-37. https://doi.org/10.1109/79.109205

[5] Haykin, S. (2013). Adaptive Filter Theory. Pearson. https://www.pearson.com.

[6] Beaufays, F. (1995). Transform-domain adaptive filters: An analytical approach. IEEE Transactions on Signal Processing, 43(2): 422-431. https://doi.org/10.1109/78.348125

[7] Zhao, S., Man, Z., Khoo, S., R. Wu, H. (2009). Stability and convergence analysis of transform-domain LMS adaptive filters with second-order autoregressive process. IEEE Transactions on Signal Processing, 57(1): 119-130. https://doi.org/10.1109/TSP.2008.2007618.

[8] Homer, J. (2000). Quantifying the convergence speed of LMS adaptive FIR filter with autoregressive inputs. Electronics Letters, 36(6): 585-586. https://doi.org/10.1049/el:20000469

[9] Kim, D.I., De Wilde, P. (2000). Performance analysis of signed self-orthogonalizing adaptive lattice filter. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 47(11): 1227-1237. https://doi.org/10.1109/82.885130

[10] Douglas, S.C., Cichocki, A., Amari, S. (1999). Self-whitening algorithms for adaptive equalization and deconvolution. IEEE Transactions on Signal Processing, 47(4): 1161-1165. https://doi.org/10.1109/78.752617

[11] Mboup, M., Bonnet, M., Bershad, N. (1994). LMS coupled adaptive prediction and system identification: A statistical model and transient mean analysis. IEEE Transactions on Signal Processing, 42(10): 2607-2615. https://doi.org/10.1109/78.324727

[12] Gazor, S., Lieu, T. (2005). Adaptive filtering with decorrelation for coloured AR environments. IEE Proceedings-Vision, Image and Signal Processing, 152(6): 806-818. https://doi.org/10.1049/ip-vis:20045260

[13] Mboup, M., Bonnet, M., Bershad, N. (1992). Coupled adaptive prediction and system identification: a statistical model and transient analysis. In [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing, San Francisco, CA, USA, 4: 1-4. https://doi.org/10.1109/ICASSP.1992.226426

[14] Chergui, L., Bouguezel, S. (2019). Variable step size pre-whitening transform domain LMS adaptive noise canceller. In 2019 International Conference on Advanced Systems and Emergent Technologies (IC_ASET), Hammamet, Tunisia, pp. 327-331. https://doi.org/10.1109/ASET.2019.8871012

[15] Lee, J., Un, C. (1984). On the convergence behaviour of frequency-domain LMS adaptive digital filters. In ICASSP'84. IEEE International Conference on Acoustics, Speech, and Signal Processing, San Diego, CA, USA, 9: 467-470. https://doi.org/10.1109/ICASSP.1984.1172765

[16] Chergui, L., Bouguezel, S. (2018). Enhanced pre-whitening TDLMS adaptive noise canceller. In International Conference on Technological Advances in Electrical Engineering (ICTAEE ‘18), pp. 10-12.

[17] Narayan, S., Peterson, A., Narasimha, M. (1983). Transform domain LMS algorithm. IEEE Transactions on Acoustics, Speech, and Signal Processing, 31(3): 609-615. https://doi.org/10.1109/TASSP.1983.1164121

[18] Farhang-Boroujeny, B., Gazor, S. (1992). Selection of orthonormal transforms for improving the performance of the transform domain normalised LMS algorithm. In IEE Proceedings F (Radar and Signal Processing). IET Digital Library, 139(5): 327-335. https://doi.org/10.1049/ip-f-2.1992.0046

[19] Chergui, L., Bouguezel, S. (2017). A new pre-whitening transform domain LMS algorithm and its application to speech denoising. Signal Processing, 130: 118-128. http://doi.org/10.1016/j.sigpro.2016.06.021

[20] Chergui, L., Bouguezel, S. (2019). A new post-whitening transform domain LMS algorithm. Traitement du Signal, 36(3): 245-252. https://doi.org/10.18280/ts.360307

[21] Mayyas, K. (2003). New transform-domain adaptive algorithms for acoustic echo cancellation. Digital Signal Processing, 13(3): 415-432. https://doi.org/10.1016/S1051-2004(03)00004-6

[22] Djendi, M. (2015). New efficient adaptive fast transversal filtering (FTF)‐type algorithms for mono and stereophonic acoustic echo cancelation. International Journal of Adaptive Control and Signal Processing, 29(3): 273-301. https://doi.org/10.1002/acs.2470

[23] Mayyas, K. (2009). Low complexity LMS-type adaptive algorithm with selective coefficient update for stereophonic acoustic echo cancellation. Computers & Electrical Engineering, 35(3): 450-458, https://doi.org/10.1016/j.compeleceng.2008.11.004

[24] Abadi, M.S.E., Mesgarani, H., Khademiyan, S.M. (2017). The wavelet transform-domain LMS adaptive filter employing dynamic selection of subband-coefficients. Digital Signal Processing, 69: 94-105. https://doi.org/10.1016/j.dsp.2017.05.012

[25] Kong, A.L., Woon, S.G., Sen, M.K. (2009). Subband Adaptive Filtering: Theory and Implementation. Willey.

[26] Bilcu, R.C., Kuosmanen, P., Egiazarian, K. (2002). A transform domain LMS adaptive filter with variable step-size. IEEE Signal Processing Letters, 9(2): 51-53. http://doi.org/10.1109/97.991136

[27] Puschel, M. (2003). Cooley-Tukey FFT like algorithms for the DCT. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP'03). Hong Kong, China, 2: II-501. https://doi.org/10.1109/ICASSP.2003.1202413

[28] Paleologu, C., Benesty, J., Ciochină, S., Grant, S.L. (2014). A Kalman filter with individual control factors for echo cancellation. In 2014 IEEE International Conference on Acoustics, ICASSP, Florence, Italy, pp. 5974-5978. https://doi.org/10.1109/ICASSP.2014.6854750

[29] Digital Network Echo Cancellers, ITU-T Rec. G.168, 2002. https://www.itu.int/rec/T-REC-G.168/en.