Enhanced Noise Cancellation: A Variable Step Size Normalized Least Mean Square Approach

Abdelghani Manseur*, Abdelhakim Dendoug

Department of Electronics and Telecommunications, Faculty of New Information Technologies and Communication, Kasdi Merbah University, Ouargla 30000, Algeria

Electrical Engineering Department, Faculty of Science and Technology, Mohamed Khider University, Biskra 07000, Algeria

Identification, Command, Control and Communication Laboratory “LI3CUB”, Mohamed Khider University, Biskra 07000, Algeria

Corresponding Author Email: manseur.abdelghani@univ-ouargla.dz

Page: 911-918 | DOI: https://doi.org/10.18280/ts.410231

Received: 2 April 2023 | Revised: 29 July 2023 | Accepted: 15 November 2023 | Available online: 30 April 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Noise cancellation remains a significant challenge in signal processing, particularly when addressing non-stationary and time-varying noise sources. Traditional approaches, such as the Normalized Least Mean Square (NLMS) algorithm, are often limited by the fixed step size parameter, which dictates the trade-off between convergence rate and system robustness. In this study, an innovative Variable Step Size NLMS (VSS-NLMS) algorithm is introduced, designed to dynamically adjust the step size parameter, thereby optimizing performance criteria including precision, robustness, convergence rate, and tracking ability. Employing system identification techniques within an adaptive filtering framework, this research advances the NLMS algorithm by incorporating a variable step size parameter that adapts in real-time to the noise environment. The proposed VSS-NLMS algorithm is evaluated through extensive simulations, demonstrating a significant enhancement in the balance between Mean Square Error (MSE) reduction and convergence rate over both the conventional NLMS and Recursive Least Squares (RLS) algorithms, whilst maintaining computational simplicity. In the context of adaptive filters, the VSS-NLMS algorithm represents a substantial improvement for noise cancellation applications, particularly in scenarios characterized by variable noise dynamics. The results presented herein confirm that the VSS-NLMS algorithm not only achieves a superior trade-off between accuracy/robustness and convergence rate/tracking but also sets a new benchmark for adaptive noise cancellation strategies in complex acoustic environments.

Keywords: 

adaptive filtering, noise cancellation, LMS algorithm, NLMS algorithm, VSS-NLMS algorithm, variable step size, convergence factor, Mean Square Error (MSE)

1. Introduction

In dynamic environments where conditions are uncertain or exhibit non-stationary behavior, conventional filters with fixed coefficients often fail to deliver optimal performance. Adaptive filters, by contrast, are adept at monitoring environmental fluctuations and accordingly adjusting the filter's weight vector coefficients, thereby providing efficient outcomes [1]. Among the current suite of adaptive algorithms, the Least Mean Square (LMS) and its variant, the NLMS, are particularly prominent. The LMS algorithm's widespread adoption can be attributed to its implementation simplicity.

Despite its simplicity, the LMS algorithm suffers from drawbacks such as slow convergence and subpar tracking capabilities when contrasted with the NLMS algorithm, which is favored for real-time applications. Both algorithms are designed to update their coefficients with a minimal number of operations, rendering them suitable for digital implementations [2].

Speech processing has emerged as a crucial field within the engineering sciences, experiencing an exponential growth since the 1960s, spurred by advancements in telecommunications technologies. These innovations, particularly in hands-free telephony and teleconferencing, have introduced new challenges related to various disturbances, including additive noise.

With the advent of modern telecommunications systems, the pursuit of adaptive filtering has become an increasingly significant research area. While adaptive filtering has proven effective in linear systems, where the input-output relationship is straightforward, its application to nonlinear systems has been limited. It is within this context that nonlinear filters, such as the proposed VSS-NLMS algorithm, demonstrate their merit, offering enhanced tracking performance [3-5].

The quest for an adaptive filter that performs adeptly in non-stationary and time-varying scenarios has led to the development of various algorithms with variable step sizes, aiming to balance rapid convergence with minimal misadjustment. This paper introduces a novel VSS-NLMS algorithm that augments the convergence rate and tracking by leveraging the input signal for both linear and nonlinear components, in conjunction with the previous step size.

This approach yields a new VSS-NLMS algorithm that retains the computational simplicity of its predecessors while providing improved performance. A fresh formulation for the adaptive algorithm's iterative step size is also proposed. The developed algorithm has a simple computational structure, facilitating its application across diverse scenarios, such as learning curves and tracking a Pseudo-Random Binary Sequence (PRBS) in noisy conditions.

Simulation results in noise cancellation contexts underscore the superior performance of the newly developed VSS-NLMS algorithm, confirming its potential in advanced signal processing applications.

2. Adaptive Filtering

Using an adaptive algorithm, the adaptive filtering technique modifies the filter parameters to obtain the best estimate of the system noise and thereby achieve the optimum noise cancellation result. It is suitable for real-time processing because it requires little computation [6].

Adaptive filtering is used in various domains, such as radar navigation, voice signal processing, and wireless communication. Different sources of noise deteriorate the radar signal on its way from the target to the receiver. The objective of the adaptive filters is to determine whether noise reduction in the received radar signals can be achieved. Unfortunately, the frequencies of the detected radar signals are unknown. We therefore use adaptive filters such as LMS and NLMS, which adapt their parameters according to the radar signal they receive.

Acoustic noise naturally tends to affect speech signals. Therefore, before being stored, transmitted, or played back, the speech signal needs to be filtered through an adaptive filter, whether in voice signal processing or in wireless communication. In recent years, numerous audio applications, such as mobile telephones and automatic speech recognition systems, have become extremely demanding, requiring adaptive algorithms like the LMS or NLMS algorithm to reduce noise [7]. These algorithms must offer an acceptable balance between convergence rate/tracking and accuracy/robustness.

The proposed VSS-NLMS algorithm offers the best trade-off between convergence rate/tracking and accuracy/robustness compared to the LMS and NLMS algorithms, which makes it well suited to improving these different applications.

The adaptive filter is the subject of significant signal-processing research. The efficiency of adaptive filter algorithms is determined by the steady-state error and the convergence rate. The basic fixed step size LMS algorithm is often unsuitable for adaptive signal filtering because it combines a slow convergence speed with a high steady-state error, and a conflict exists between these two crucial criteria. Distinct step sizes yield different steady-state errors and convergence rates: a large step size gives a rapid convergence rate but a significant steady-state error, whereas a small step size gives a slower convergence rate and a small steady-state error [3].

2.1 Least mean square algorithm (LMS)

Figure 1 illustrates the basic concept of adaptive filtering: x(n) is the input signal, d(n) the desired response signal, y(n) the output signal, e(n) the error signal, i.e., the difference between d(n) and y(n), and ε(n) the system noise [8].

Bernard Widrow and Marcian Hoff of Stanford University introduced the concept of adaptive filtering based on the LMS approach.

Due to its simple implementation, the LMS is a popular learning algorithm. It is a simplification of the steepest-descent approach, in which the gradient vector is estimated from the available data [9]. The finite impulse response filter at the foundation of the conventional LMS algorithm recursively minimizes the MSE to obtain the optimal filter weights. The LMS algorithm is the most widely used adaptive filtering technique [10].

Figure 1. Principle of adaptive filtering

The LMS algorithm is commonly utilized in system identification, channel equalization, and acoustic echo cancellation, among other applications, owing to its mathematical simplicity and robustness [11-14].

One of the first versions of the LMS algorithm is given by [15]. It is obtained by an iterative solution of Eq. (1) using the concept of least squares.

$H_*=R^{-1} P$          (1)

where, $R=E[x(n) x^T(n)]$ is the autocorrelation matrix of the input signal $x(n)$; this matrix is positive definite and Toeplitz. $P=E[x(n) d(n)]$ is the cross-correlation vector between the input signal x(n) and the desired signal d(n).
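
As a brief numerical illustration (a sketch of ours, not part of the original paper), Eq. (1) can be checked in Python/NumPy by estimating R and P from sample data; the filter order, the hypothetical system true_h, and all numerical values are arbitrary assumptions.

```python
import numpy as np

# Hypothetical numerical check of Eq. (1): H* = R^{-1} P, estimated from sample data.
rng = np.random.default_rng(0)
N = 4                                                     # filter order (example value)
x = rng.standard_normal(10_000)                           # input signal x(n)
X = np.stack([np.roll(x, k) for k in range(N)], axis=1)   # rows: [x(n), x(n-1), ..., x(n-N+1)]
true_h = np.array([0.5, -0.3, 0.2, 0.1])                  # hypothetical unknown system
d = X @ true_h + 0.01 * rng.standard_normal(len(x))       # desired signal d(n)

R = (X.T @ X) / len(x)                                    # autocorrelation matrix R = E[x(n) x^T(n)]
P = (X.T @ d) / len(x)                                    # cross-correlation vector P = E[x(n) d(n)]
H_opt = np.linalg.solve(R, P)                             # Wiener solution of Eq. (1); H_opt ≈ true_h
```

With these definitions, H_opt recovers true_h up to a small estimation error; this is the least-squares solution that the LMS algorithm approximates iteratively.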

The basic version of the LMS algorithm, Eq. (2), is given by [2, 9, 16-20].

$h(n+1)=h(n)+2 * \mu * e(n) * x(n)$               (2)

where, h(n) is the vector of the filter coefficients, µ is the step size or convergence factor, e(n) is the error signal, x(n) is the observed sequence of input data.

We will have the algorithm's convergence when the step size μ meets the condition $0<\mu<\frac{1}{\lambda_{\max }}$ [21, 22].

where, λmax is the maximum eigenvalue of the autocorrelation matrix of the input signal x(n).
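
To make Eq. (2) concrete, the following Python/NumPy sketch (our illustration, not the authors' implementation) runs the fixed step size LMS recursion over a tapped delay line of length N; the function name, the default step size, and the filter order are example assumptions.

```python
import numpy as np

def lms(x, d, N=6, mu=0.01):
    """Fixed step size LMS, Eq. (2): h(n+1) = h(n) + 2*mu*e(n)*x(n)."""
    h = np.zeros(N)                        # filter coefficients h(n)
    e = np.zeros(len(x))                   # error signal e(n)
    for n in range(N, len(x)):
        xn = x[n - N + 1:n + 1][::-1]      # regressor [x(n), x(n-1), ..., x(n-N+1)]
        y = h @ xn                         # filter output y(n)
        e[n] = d[n] - y                    # error e(n) = d(n) - y(n)
        h = h + 2 * mu * e[n] * xn         # coefficient update, Eq. (2)
    return h, e
```

A call such as h, e = lms(x, d) then returns the adapted coefficients and the error (learning) signal.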

2.2 NLMS algorithm

The LMS algorithm is one of the most useful adaptive algorithms available in the literature, chiefly because its mathematical simplicity makes it easy to implement. The NLMS algorithm, a normalized variation of the LMS algorithm [23], is also commonly utilized and has been applied more frequently in real-time applications. Compared with the NLMS algorithm, the LMS algorithm has a higher steady-state error, a lower convergence rate, and poorer tracking, in addition to being less robust [2]. The NLMS algorithm normalizes its step size by the input signal power, which makes it converge faster than the LMS algorithm.

The formula for its weight coefficient update is given by Sun et al. [3].

$h(n+1)=h(n)+\frac{\mu}{x^T(n) * x(n)} * e(n) * x(n)$                  (3)

where, h(n) is the vector of the filter coefficients, µ is the step size or convergence factor, e(n) is the error signal, x(n) is the observed sequence of input data, xT(n) is the transpose of x(n).

The algorithm will converge when the step size $\mu$ meets the condition 0<μ<2 [4, 6, 24-26].
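
For comparison with Eq. (2), a minimal sketch of the NLMS update of Eq. (3) is given below (our illustration); the small regularization constant eps, added to avoid division by zero when the input power is negligible, is a common practical safeguard and is not part of Eq. (3).

```python
import numpy as np

def nlms(x, d, N=6, mu=1.0, eps=1e-8):
    """Fixed step size NLMS, Eq. (3); eps is a small regularizer (our addition)."""
    h = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(N, len(x)):
        xn = x[n - N + 1:n + 1][::-1]                    # regressor [x(n), ..., x(n-N+1)]
        e[n] = d[n] - h @ xn                             # error signal
        h = h + (mu / (xn @ xn + eps)) * e[n] * xn       # normalized update, Eq. (3)
    return h, e
```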

2.2.1 Variable step size (VSS) NLMS

Various Variable Step-Size (VSS) algorithms have been developed since the early 1990s. Since the choice of step sizes across iterations remains an open issue, the strategy of employing small step-size values in the steady-state zone and large values when the adaptive filter coefficients are far from the optimal solution has been examined in many publications [27].

One of the earliest approaches, suggested in [28], is a VSS-LMS algorithm that modifies its step size using the squared instantaneous estimation error.

Both convergence rate and steady-state error affect the performance of the adaptive filter. Therefore, in the NLMS algorithm, the trade-off between convergence rate and steady-state error is not resolved, hence the need to introduce a VSS-NLMS algorithm.

3. Noise Cancellation

3.1 Principle of the adaptive noise cancellation

Extracting information from a signal corrupted by additive noise is a classical problem in signal processing; in practice, the operator only has access to the noisy signal.

The goal is to filter the input signal x(n) with an adaptive filter so that its output matches the desired signal y(n). The filtered signal u(n) is subtracted from the desired signal y(n) to create an error signal e(n). The error signal e(n) drives an adaptive algorithm that adjusts the filter coefficients so as to reduce the error signal (Figure 2).

Figure 2. Adaptive noise cancellation scheme

In a noise-cancelling system, the goal is to generate a system output e(n)=[s(n)+v(n)]-u(n) that matches the signal s(n) as closely as possible in the least-squares sense. This goal is accomplished by modifying the filter coefficients with an adaptive algorithm and feeding the system output back to the adaptive filter to minimize the error signal e(n), thereby reducing the error between the estimated noise and the actual noise.
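
A minimal end-to-end sketch of this noise-cancelling scheme (our illustration, assuming an NLMS-type update as the adaptive algorithm and arbitrary example signals and parameters) is given below: the primary input is d(n) = s(n) + v(n), the reference input x(n) is correlated with the noise v(n), and the system output e(n) = d(n) - u(n) approximates s(n).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
s = np.sin(2 * np.pi * 0.01 * np.arange(n))              # clean signal s(n) (example)
x = rng.standard_normal(n)                               # noise reference x(n)
v = np.convolve(x, [0.8, -0.4, 0.2], mode="full")[:n]    # noise v(n) reaching the primary sensor
d = s + v                                                # primary input d(n) = s(n) + v(n)

N, mu, eps = 6, 0.5, 1e-8                                # example parameters
h = np.zeros(N)
e = np.zeros(n)                                          # system output e(n) ~ recovered s(n)
for k in range(N, n):
    xk = x[k - N + 1:k + 1][::-1]
    u = h @ xk                                           # noise estimate u(n)
    e[k] = d[k] - u                                      # output e(n) = d(n) - u(n)
    h = h + (mu / (xk @ xk + eps)) * e[k] * xk           # adaptive update (NLMS used here for illustration)
```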

3.2 Noise cancellation techniques

Noise cancellation is one of the problems encountered in signal processing. To remedy this phenomenon, there are a variety of techniques in the literature that operate differently. Over the past few decades, different methods have been developed to solve this problem. Spectral subtraction, Wiener filter, and beamforming have been proposed to achieve optimal performance in the noise cancellation domain.

Spectral subtraction uses fixed values of the subtraction parameter, which is its major weakness, making it unable to adjust to varying noise characteristics and noise levels. Moreover, most additive noise, especially in speech signals, has a non-flat spectrum, so optimizing the parameters is not a trivial task.

Several limitations also constrain the performance and usability of a Wiener filter for noise cancellation in signal processing. A Wiener filter requires prior knowledge of the power spectra of the noise, the input signal, and the true signal. In many situations this is complicated or impractical to obtain, particularly when the signal and noise are non-Gaussian or non-stationary. A further limitation of the Wiener filter is that it is linear, which makes it incapable of handling nonlinear phenomena.

Despite several benefits [29], beamforming has one significant drawback: it demands large computing resources [30, 31].

The adaptive noise cancellation algorithms can track the signal adaptively in non-stationary contexts. Their frequency response has the distinctive characteristic of being self-modifying, changing their behaviors over time and enabling the filter to adjust its response as the parameters of the input signal change [16]. Hence their importance in noise cancellation, especially the proposed VSS-NLMS algorithm.

4. Proposed Algorithm and Convergence Conditions

This section presents the new approach we propose for updating the coefficient vectors of the linear and nonlinear parts of the adaptive filter. We will also develop the convergence conditions of the algorithm.

Figure 3 illustrates a typical application of the adaptive filter. The system input signal should be an excitation signal with a uniform spectral density covering the entire bandwidth of the process (a transmission channel) to be identified. In the application we are interested in, a Pseudo Random Binary Sequence (PRBS) is used as the input signal, which allows an excellent temporal identification.

A Pseudo Random Binary Sequence (PRBS) is transmitted through a communication channel. On reception, an Additive White Gaussian Noise (AWGN), an archetype of noise patterns encountered in practice, is added to the received signal before the "sum" signal is injected at the input of the adaptive filter.

The error signal, the difference between the desired response and the filter's output, is used to adjust the filter coefficients to better eliminate the noise or unwanted signal.

Figure 3. Digital channel equalization

Referring to Figure 3, the transfer function of the adaptive equalizer will have the relation given by Eq. (4), where the index n indicates the number of iterations.

$h(n)=h(n-1)+\frac{\mu * e(n) * x(n)}{x^T(n) * x(n)}$            (4)

The error e(n) is given by e(n)=d(n)-y(n).            

The laws of iterative adaptations of the linear and nonlinear parts are given by Eq. (5):

$\mu_i(n)=\mu_i(n-1)+\frac{1}{\sum_{j=1}^{N, M^2} x^2(n-j)}, \quad i=1 \text{ or } 2$              (5)

where, N and M2 are, respectively, the orders of the linear and nonlinear parts of the filter.

In order to simplify the expression of the step sizes, Eq. (5), we will make the following changes of variables:

$\Delta_1=x^2(n)-x^2(n-N+1)$             (6)

$\Delta_2=x^2(n)-x^2\left(n-(M-1)^2\right)$            (7)

After the first N and M2 initialization iterations, the expressions of the step sizes of the linear and nonlinear parts μ1 and μ2 can be reformulated by Eq. (8):

$\mu_i(n)=\mu_i(n-1)-\frac{\Delta_i * \mu_i^2(n)}{1+\Delta_i * \mu_i(n)}, \quad i=1 \text{ or } 2$        (8)

And, by changing the variable $\mu^{\prime}=1 / \mu$, to simplify the expressions of the step sizes, Eq. (8) becomes:

$\mu_i^{\prime}(n)=\mu_i^{\prime}(n-1)+\Delta_i, \quad i=1 \text{ or } 2$                  (9)

In our case, the linear and nonlinear parts of the filter have different adaptation laws, which allows us to have more freedom in adjusting filter coefficients. In addition, these separate adaptation steps will give the filter more tracking capabilities in a non-stationary environment.

Therefore, with a step size μ that is not the same for the linear and nonlinear parts of x(n), Eq. (4) becomes:

$h(n)=h(n-1)+e(n) *\left[\begin{array}{l}\frac{x(n)}{x^T(n) * x(n)} / \mu_1^{\prime}(n) \\ \frac{x(n)}{x^T(n) * x(n)} / \mu_2^{\prime}(n)\end{array}\right]$                (10)

The conditions on the step size μ1'=1⁄μ1 of the linear part are similar to those of the NLMS algorithm using a fixed step size. So, the VSS-NLMS algorithm will converge when $\mu_1$ meets the condition 0<μ1<2 [4, 6, 24-26].

In the following, we will determine the conditions of convergence on the adaptation step $\mu_2^{\prime}=1 / \mu_2$ of the nonlinear part of the filter.

From Eq. (10), we get:

$h(n)-h(n-1)=e(n) *\left[\begin{array}{l}\frac{x(n)}{x^T(n) * x(n)} / \mu_1^{\prime}(n) \\ \frac{x(n)}{x^T(n) * x(n)} / \mu_2^{\prime}(n)\end{array}\right]$              (11)

If we set,

$\mathcal{Q}(n)=\left[\begin{array}{l}\frac{x(n)}{x^T(n) * x(n)} / \mu_1^{\prime}(n) \\ \frac{x(n)}{x^T(n) * x(n)} / \mu_2^{\prime}(n)\end{array}\right]$

So,

$h(n)-h(n-1)=e(n) * \mathcal{Q}(n)$             (12)

By changing the variables:

$\alpha=Q^T(n) * Q(n), \beta=Q^T(n+1) * Q(n+1)$

where T denotes the transpose operator.

We will have:

$h(n)-h(n-1)=e(n) * Q(n)$                   (13)

Hence,

$e(n)=\frac{Q^T(n) *(h(n)-h(n-1))}{\alpha}$                  (14)

Or, equivalently,

$e(n+1)=\frac{Q^T(n+1) *(h(n+1)-h(n))}{\beta}$                  (15)

The filter input signal (PRBS) is bounded; applying the Milosavljevic convergence condition [32]:

$(S(n+1)-S(n)) * S(n)<0$           (16)

We will have:

$(e(n+1)-e(n)) * e(n)<0$                 (17)

And using the two previous Eq. (14) and Eq. (15) leads us to the convergence condition of the algorithm on the adaptation step $\mu_2^{\prime}=1 / \mu_2$ of the nonlinear part of the filter.

So, the condition of the convergence on the nonlinear part step size $\mu_2^{\prime}=1 / \mu_2$ of the filter, is that $\mu_2^{\prime}>\mu_1^{\prime}$, or $\mu_2<\mu_1$.

To ensure convergence of the proposed VSS-NLMS algorithm, the initial value of the step size of the nonlinear part must be smaller than that of the linear part.
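
To illustrate how the variable step sizes can be maintained iteratively, the following Python/NumPy sketch implements one possible reading of Eqs. (6), (7) and (9); it is our illustration rather than the authors' code, and the filter orders, the initial values, and the handling of the first samples are assumptions.

```python
import numpy as np

def reciprocal_step_sizes(x, N=6, M=4, mu1p0=1.0, mu2p0=2.0):
    """Sketch of the step-size recursion of Eq. (9): mu'_i(n) = mu'_i(n-1) + Delta_i,
    with Delta_1 and Delta_2 defined as in Eqs. (6)-(7). The initial values are chosen
    so that mu'_2(0) > mu'_1(0), i.e. mu_2 < mu_1, as the convergence analysis requires
    (all numerical values are illustrative)."""
    mu1p = np.empty(len(x))
    mu2p = np.empty(len(x))
    prev1, prev2 = mu1p0, mu2p0
    for n in range(len(x)):
        # Delta_1 = x^2(n) - x^2(n-N+1); terms before the start of the signal taken as zero
        d1 = x[n] ** 2 - (x[n - N + 1] ** 2 if n - N + 1 >= 0 else 0.0)
        # Delta_2 = x^2(n) - x^2(n-(M-1)^2), following Eq. (7)
        d2 = x[n] ** 2 - (x[n - (M - 1) ** 2] ** 2 if n - (M - 1) ** 2 >= 0 else 0.0)
        prev1, prev2 = prev1 + d1, prev2 + d2            # Eq. (9)
        mu1p[n], mu2p[n] = prev1, prev2
    return mu1p, mu2p                                    # step sizes themselves: mu_i(n) = 1 / mu'_i(n)
```

The step sizes used in the update of Eq. (10) are then μ_i(n) = 1/μ'_i(n), and keeping μ'_2(n) > μ'_1(n) (i.e., μ_2 < μ_1) preserves the convergence condition stated above.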

5. Results and Discussion

5.1 Learning curves

To compare the tracking performance of the fixed step size NLMS algorithm with that of the VSS-NLMS algorithm, we adopted the scheme in Figure 3. The primary signal x(n) of the filter consists of a noisy random signal.

Using the same types of primary and secondary signals and a filter order N=6, and averaging over a set of 100 realizations, the results obtained for both algorithms are shown in Figure 4.

(a) $\mu_1=1$ and $\mu_2=0.5$

(b) $\mu_1=1$ and $\mu_2=0.5$

Figure 4. Learning curves of the VSS-NLMS and fixed step size NLMS

Figure 4 above, which shows the learning curves of the two versions of the algorithm, demonstrates that the VSS-NLMS converges faster than the fixed step size NLMS and has a lower steady-state error. Moreover, varying μ1 and μ2 yields different convergence rates and steady-state errors: large step size values give a fast convergence rate with a higher steady-state error, while smaller step size values give a slower convergence rate with a smaller steady-state error.

The developed VSS-NLMS algorithm resolves the conflict between convergence rate and steady-state error by providing both a quicker convergence rate and a smaller steady-state error.
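
For reference, a learning curve of the kind shown in Figure 4 is simply the squared error averaged over independent realizations; a minimal sketch (our illustration, reusing the nlms() routine sketched in Section 2.2 and arbitrary example signals and parameters) is:

```python
import numpy as np

runs, n, N = 100, 2_000, 6
rng = np.random.default_rng(2)
mse = np.zeros(n)
for _ in range(runs):
    x = np.sign(rng.standard_normal(n))                  # PRBS-like input (example)
    d = (np.convolve(x, [0.5, -0.3, 0.2], mode="full")[:n]
         + 0.05 * rng.standard_normal(n))                # noisy desired signal (example channel)
    _, e = nlms(x, d, N=N, mu=1.0)                       # any adaptive routine, e.g. the nlms() sketch above
    mse += e ** 2
mse /= runs                                              # learning curve: ensemble-averaged MSE per iteration
```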

5.2 Tracking of a pseudo random binary sequence (PRBS)

In the following experiment, the developed VSS-NLMS algorithm is used to track a PRBS. The following figures show the obtained results.

Figure 5 shows the output signal of the filter compared with the desired signal, for N=6 and M=4, the linear and nonlinear orders of the filter, and μ1=0.2 and μ2=0.001, the initial step sizes of the linear and nonlinear parts, respectively.

A close-up of the filter output signal behavior is given in Figure 6. We see that the deviation of the filter's output signal is minimal and that the filter's output signal ends up following the desired signal perfectly.

Figure 7 shows the evolution of the MSE. The MSE remains minimal while the VSS-NLMS algorithm converges rapidly, proving its excellent tracking capability.

Figure 5. Desired signal (PRBS) and VSS-NLMS filter output signal

(a) First 100 iterations

(b) Last 100 iterations

Figure 6. Zoom on the desired signal and VSS-NLMS output signal

Figure 7. Evolution of the MSE

(a) Desired signal

(b) Noisy signal

Figure 8. Evolution of desired signal and noisy signal

Figure 9. Desired signal and output signal of the VSS-NLMS adaptive filter

5.3 Noise cancellation

Figure 8 shows the evolution of the filter input signal and the noisy signal. We can see that the useful speech signal "yes" is completely drowned in the disturbing noise.

Figure 9 demonstrates how the developed VSS-NLMS filter successfully reconstructed the desired signal. It also illustrates the advantages of the VSS-NLMS in tracking the speech signal by showing how the evolution of the filter's output signal perfectly matches the evolution of the desired signal.

This indicates that our filter successfully models the disturbing noise and, as a result, follows the desired signal and reconstructs our speech signal, "yes".

5.4 Comparison with alternative algorithm

To further validate the effectiveness of the VSS-NLMS algorithm, we compared its performance with that of the RLS algorithm, another state-of-the-art algorithm in the noise cancellation field. The obtained result is shown in Figure 10.

Figure 10, which represents the two learning curves for the VSS-NLMS algorithm and the RLS algorithm, demonstrates that the VSS-NLMS converges faster than the RLS and has a lower steady-state error. This demonstrates, once again, the efficiency of the VSS-NLMS algorithm compared to the RLS algorithm.

Figure 10. Learning curves of the VSS-NLMS with μ1=1 and μ2=0.5, and RLS

5.5 Statistical analysis of performance metrics

To reinforce the assertions made, we carried out a statistical analysis of several performance measurements, namely the Signal to Noise Ratio (SNR) before and after filtering and the convergence rate, for the VSS-NLMS algorithm, the fixed step size NLMS algorithm, and the RLS algorithm. The SNR is usually expressed in decibels and is calculated as the signal-to-noise power ratio.

For adaptive noise cancellation, each algorithm exhibits a different behavior based on the outcomes of the simulation. To better compare the algorithms mentioned above in the noise cancellation application, simulations were carried out to evaluate the following performance characteristics: the algorithm's rate of convergence and the SNR before and after signal filtering. The filter should exhibit rapid algorithm convergence and a high SNR. Table 1 below summarizes the results obtained.

Table 1. Statistical comparison results

Algorithm | SNR [dB] Before Filtering | SNR [dB] After Filtering | Convergence Rate
Fixed Step-size NLMS | -7.5478 | 12.6824 | 400 Iterations
RLS | -7.4357 | 13.8717 | 180 Iterations
VSS-NLMS | -7.2409 | 16.7944 | 70 Iterations

From Table 1, it can be observed that the developed VSS-NLMS algorithm has the highest SNR after filtering, which indicates better noise cancellation performance compared to the fixed step-size NLMS algorithm and the RLS algorithm. In terms of convergence rate, the VSS-NLMS algorithm converges to stability after 70 iterations, whereas the fixed step-size NLMS and RLS algorithms require 400 and 180 iterations, respectively. This demonstrates, once again, the superiority of the developed VSS-NLMS algorithm over state-of-the-art algorithms in the same field, such as the fixed step-size NLMS algorithm and the RLS algorithm.
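
For completeness, the SNR values in Table 1 are power ratios expressed in decibels; a short sketch (our illustration, assuming access to the clean signal s, the noisy input d, and the filter output e of Section 5.3) of how they can be computed:

```python
import numpy as np

def snr_db(clean, noisy):
    """SNR in dB: 10*log10(signal power / noise power), with noise = noisy - clean."""
    noise = noisy - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

# snr_before = snr_db(s, d)   # s: clean speech, d: noisy input (as in Section 5.3)
# snr_after  = snr_db(s, e)   # e: filter output approximating s
```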

6. Conclusions

The LMS algorithm is among the most prominent and commonly utilized adaptive algorithms because of its ease of implementation and simplicity of computation. Its convergence, however, remains slow and is constrained by its dependence on the input data.

The NLMS algorithm provides a better compromise between computational simplicity and performance than the LMS algorithm: it is comparably simple to implement but more robust.

A modified nonlinear VSS-NLMS algorithm is developed in this paper to improve convergence performance and reduce misadjustment. The new variable step size of the developed NLMS algorithm is obtained by adding to the previous step size the inverse of the summed squared input signal, for both the linear and nonlinear parts. The step size is expressed in a compact, iterative form, which allows a simplified implementation of the VSS-NLMS algorithm in terms of mathematical complexity.

For the choice of the system input signal, an excitation signal with a uniform spectral density covering the whole bandwidth of the process to be identified is required. In this case, we used a Pseudo Random Binary Sequence (PRBS) as the input signal, which allows an excellent temporal identification. The NLMS algorithm with Variable Step Size (VSS-NLMS) offers better tracking performance while keeping the computational load (mathematical complexity) relatively low.

According to computer simulations, namely learning curves, tracking of a Pseudo Random Binary Sequence (PRBS), and the filtering of a noisy "YES" speech signal, the VSS-NLMS algorithm surpasses the standard NLMS algorithm in signal tracking.

Although the VSS-NLMS algorithm has demonstrated good convergence rate and steady-state performance, it would be interesting to test it under real environmental conditions, such as in communication systems (above all in channel equalization), in speech processing (echo cancellation or speaker separation), in biomedical applications (ECG power-line interference removal or maternal-fetal ECG separation), and possibly in other applications.

Even though the VSS-NLMS algorithm has shown acceptable performance, it would be interesting, in future work, to extend the results of this study to subband adaptive filtering, in which the VSS-NLMS algorithm could be used to estimate wavelet coefficients or to increase the convergence speed of the subband filters, which could further improve the results obtained in this study.

References

[1] Goel, P., Chandra, M. (2019). FPGA implementation of adaptive filtering algorithms for noise cancellation—A technical survey. In Proceedings of the Third International Conference on Microelectronics, Computing and Communication Systems: MCCS 2018, pp. 517-526. https://doi.org/10.1007/978-981-13-7091-5_42

[2] Zade, S.A., Zafar, S. (2015). To study LMS & NLMS algorithm for adaptive echo cancellation. International Journal of Advance Research in Science and Engineering. IJARSE, 4(01).

[3] Sun, Y., Wang, M., Han, Y., Zhang, C. (2017). An improved VSS NLMS algorithm for active noise cancellation. In AIP Conference Proceedings, 1864(1): 020158. https://doi.org/10.1063/1.4992975

[4] Rusu, A.G., Paleologu, C., Benesty, J., Ciochină, S. (2022). A variable step size normalized least-mean-square algorithm based on data reuse. Algorithms, 15(4): 111. https://doi.org/10.3390/a15040111

[5] Pauline, S.H., Samiappan, D., Kumar, R., Anand, A., Kar, A. (2020). Variable tap-length non-parametric variable step-size NLMS adaptive filtering algorithm for acoustic echo cancellation. Applied Acoustics, 159: 107074. https://doi.org/10.1016/j.apacoust.2019.107074

[6] Yu, Z., Cai, Y., Mo, D. (2020). Comparative study on noise reduction effect of fiber optic hydrophone based on LMS and NLMS algorithm. Sensors, 20(1): 301. https://doi.org/10.3390/s20010301

[7] Siva Priyanka, S., Kishore Kumar, T. (2021). Signed convex combination of fast convergence algorithm to generalized sidelobe canceller beamformer for multi-channel speech enhancement. Traitement du Signal, 38(3): 785-795. https://doi.org/10.18280/ts.380325

[8] Sun, Y., Xiao, R., Tang, L.R., Qi, B. (2011). A novel variable step size LMS adaptive filtering algorithm. In Applied Informatics and Communication: International Conference, ICAIC 2011, Xi’ian, China, pp. 367-375. https://doi.org/10.1007/978-3-642-23235-0_48

[9] Prajna, K., Mukhopadhyay, C.K. (2020). Fractional Fourier transform based adaptive filtering techniques for acoustic emission signal enhancement. Journal of Nondestructive Evaluation, 39: 1-15. https://doi.org/10.1007/s10921-020-0658-6

[10] Huang, F., Zhang, J., Zhang, S. (2019). Mean-square-deviation analysis of probabilistic LMS algorithm. Digital Signal Processing, 92: 26-35. https://doi.org/10.1016/j.dsp.2019.05.003

[11] Haykin, S. (2022). Adaptive Filter Theory. 4th ed., Prentice-Hall, Upper Saddle River, NJ.

[12] Sayed, A.H. (2003). Fundamentals of Adaptive Filtering. Wiley-Interscience, New York.

[13] Benesty, J., Huang, Y. (2003). Adaptive Signal Processing: Applications to Real-World Problems. Springer Science & Business Media.

[14] Ciochină, S., Paleologu, C., Benesty, J. (2016). An optimized NLMS algorithm for system identification. Signal Processing, 118: 115-121. http://doi.org/10.1016/j.sigpro.2015.06.016

[15] Widrow, B., Glover, J.R., McCool, J.M., Kaunitz, J., Williams, C.S., Hearn, R.H., Goodlin, R.C. (1975). Adaptive noise cancelling: Principles and applications. Proceedings of the IEEE, 63(12): 1692-1716. https://doi.org/10.1109/PROC.1975.10036

[16] Benabdallah, H., Kerai, S. (2021). Respiratory and motion artefacts removal from ICG signal using denoising techniques for hemodynamic parameters monitoring. Traitement du Signal, 38(4): 919-928. https://doi.org/10.18280/ts.380401

[17] Tali, M., Essadki, A., Nasser, T. (2021). Grid voltage estimation based on an adaptive linear neural network for PV-active power filter control strategy. Journal Européen des Systèmes Automatisés, 54(3): 403-410. https://doi.org/10.18280/jesa.540303

[18] Qureshi, R., Uzair, M., Khurshid, K. (2017). Multistage adaptive filter for ECG signal processing. In 2017 International conference on Communication, Computing and Digital Systems (c-code), Islamabad, Pakistan, pp. 363-368. https://doi.org/10.1109/C-CODE.2017.7918958

[19] Juan, F.A., Al-Khwaji, A.I. (2019). Noise cancellation by digital adaptive filter based on NLMS algorithm. Journal of Humanities and Applied Science (JHAS), 32: 13-25. Retrieved from https://journals.asmarya.edu.ly/jbs/index.php/jbs/article/view/87.

[20] Patil, J.P., Dembrani, M.B., Jayaswal, A.B. (2020). Design and implementation of the NLMS adaptive filter for error minimization and cancellation. International Journal of Engineering Research and Applications, 42-45.

[21] Sutha, P., Jayanthi, V.E. (2018). Fetal electrocardiogram extraction and analysis using adaptive noise cancellation and wavelet transformation techniques. Journal of Medical Systems, 42(1): 1-18. https://doi.org/10.1007/s10916-017-0868-3

[22] Ling, Q., Ikbal, M. A., Kumar, P. (2021). Optimized LMS algorithm for system identification and noise cancellation. Journal of Intelligent Systems, 30(1): 487-498. https://doi.org/10.1515/jisys-2020-0081

[23] Ramdane, M.A., Benallal, A., Maamoun, M., Hassani, I. (2022). Partial update simplified fast transversal filter algorithms for acoustic echo cancellation. Traitement du Signal, 39(1): 11-19. https://doi.org/10.18280/ts.390102

[24] Paleologu, C., Ciochină, S., Benesty, J., Grant, S.L. (2015). An overview on optimized NLMS algorithms for acoustic echo cancellation. EURASIP Journal on Advances in Signal Processing, 2015: 1-19. https://doi.org/10.1186/s13634-015-0283-1

[25] Dixit, S., Nagaria, D. (2017). LMS adaptive filters for noise cancellation: A review. International Journal of Electrical and Computer Engineering (IJECE), 7(5): 2520-2529.

[26] Wang, W., Li, J., Li, M. (2021). Selective partial update of NLMS adaptive filter algorithm. In Journal of Physics: Conference Series, 1966(1): 012009. https://doi.org/10.1088/1742-6596/1966/1/012009

[27] Romoli, L., Squartini, S., Piazza, F. (2010). A variable step-size frequency-domain adaptive filtering algorithm for stereophonic acoustic echo cancellation. In 2010 18th European Signal Processing Conference, Aalborg, Denmark, pp. 26-30.

[28] Kwong, R.H., Johnston, E.W. (1992). A variable step size LMS algorithm. IEEE Transactions on Signal Processing, 40(7): 1633-1642. https://doi.org/10.1109/78.143435

[29] Morab, F., Hegde, R., Hegde, V.N. (2022). Detection, estimation and radiation formation using smart antennas for the spatial location. Traitement du Signal, 39(1): 389-398. https://doi.org/10.18280/ts.390141

[30] El Mettiti, A., Oumsis, M. (2022). A stacked autoencoder and multilayer perceptrons for mmWave beamforming prediction. Ingénierie des Systèmes d’Information, 27(3): 479-485. https://doi.org/10.18280/isi.270315 

[31] Londhe, G.D., Hendre, V.S. (2022). An effective Kalman based hybrid beamforming for millimeter wave massive MIMO system by using 2D overlapped partially connected sub-array structure. Traitement du Signal, 39(6): 2141-2147. https://doi.org/10.18280/ts.390627

[32] Milosavljevic, C. (1985). General conditions for the existence of a quasisliding mode on the switching hyperplane in discrete variable structure systems. Automation & Remote Control, 46: 307-314.