Audio Processing by Lattice RLS Algorithm Based Linear Adaptive Filter

DOI : 10.17577/IJERTV2IS50434


Amit Prakash, Kumar Shashi Kant

Department of Electronics and Communication Engineering, National Institute of Technology, Jamshedpur, Jharkhand, INDIA-831014

Abstract

This RLS Lattice algorithm is developed using a vector space treatment with the introduction of a forgetting factor, such that it offers fast convergence and good error performance in the presence of noise. A linear adaptive filter using this algorithm is developed, with the advantage that the next stage can be added directly to the existing filter stages, without changing the previous filter coefficients, to obtain a higher-order filter. An audio signal with white Gaussian noise is simulated in the noise cancellation system on the MATLAB platform. The algorithm has low computational complexity, since it involves no inverse matrix calculation, which makes it highly efficient. The experimental results demonstrate the importance of using the newly updated iterations for the sensitivity of the noise estimation algorithm performance, obtained by introducing the forgetting factor into the algorithm.

Keywords: Adaptive filters, RLS algorithm, Adaptive noise cancellation, Vector space

  1. Introduction

    Compared with other adaptive algorithms, the Recursive Least Squares (RLS) algorithm has rapid and exact convergence, with better noise handling across frequencies even when the eigenvalue spread of the input signal correlation matrix is large. It extends the conventional scheme by adopting a numerical linear algebra random-variable analysis without any mean operator [1]-[3]. A step further, an RLS Lattice (RLSL) algorithm based adaptive filter is much more useful in audio processing and noise cancellation, since the data processing at any instant of time for the (p+1)-th order requires only adding a new stage that takes the output signals of the p-th order as its input. Further, the use of numerical linear algebra analysis gives better numerical performance and stability of the algorithm, such that the poles lie within the unit circle, as explained in [4].

    To handle rapid changes in the statistical properties, a forgetting factor has been introduced into this algorithm. It gives rise to an exponentially weighted growing window, which further enhances the performance of the algorithm by correctly handling the true error covariance matrix. In [5], an adaptive technique for noise reduction during non-speech segments is explained, but its performance degrades during speech segments. The performance can be improved by using a forgetting factor in the algorithm, which leads to fast tracking. A forgetting factor may be required due to model uncertainty, the presence of unknown external disturbances, and the time-varying nature of the observed signal or the non-stationary behaviour of the observation noise [6].
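    The effect of the forgetting factor can be illustrated with a short sketch (an illustration of the exponentially weighted window only, not the authors' code; `ew_power` is a hypothetical helper name):

```python
import numpy as np

def ew_power(u, lam):
    """Exponentially weighted power estimate:
    sigma2(n) = lam * sigma2(n-1) + u(n)^2."""
    sigma2 = np.zeros(len(u))
    acc = 0.0
    for n, x in enumerate(u):
        acc = lam * acc + x * x
        sigma2[n] = acc
    return sigma2

# A signal whose variance jumps at n = 100: the smaller forgetting factor
# discounts the old statistics faster and therefore tracks the change sooner.
rng = np.random.default_rng(0)
u = np.concatenate([rng.normal(0, 1, 100), rng.normal(0, 3, 100)])
fast = ew_power(u, 0.89)   # forgetting factor used in this paper
slow = ew_power(u, 0.999)  # near-growing window, slow tracking
```

    A smaller lambda weights recent samples more heavily, trading steady-state accuracy for tracking speed.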

    In this paper, an RLS Lattice algorithm is presented using the vector space treatment with the introduction of a forgetting factor, such that it offers fast convergence and good error performance in the presence of noise. This algorithm is better for noise cancellation than the other widely used adaptive algorithm, the Least Mean Square (LMS) algorithm of [7]. In addition, an improved block diagram for the Adaptive Noise Canceller (ANC) is introduced to achieve noise cancellation of an audio signal.

  2. Description of the Algorithm

    1. RLS Algorithm

      For the exponentially weighted least squares method, the tap-weight vector ŵ(n) at iteration n is given by [1] as

      ŵ(n) = Φ⁻¹(n) z(n) (1)

      where

      Φ(n) = Σ_{i=1}^{n} λ^{n−i} u(i) u^T(i)

      is the correlation matrix of the input vector u(n), and z(n) is the cross-correlation vector between the input of the transversal filter and the desired response d(n), given by

      z(n) = Σ_{i=1}^{n} λ^{n−i} u(i) d(i)

      Here the inverse of the matrix Φ(n) needed for the tap-weight update is obtained through the Matrix Inversion Lemma [1], which states that if

      A = B⁻¹ + C D⁻¹ C^T

      then

      A⁻¹ = B − B C (D + C^T B C)⁻¹ C^T B

      where A and B are positive definite M-by-M matrices, C is an M-by-N matrix, and D is a positive definite N-by-N matrix.

      Since this algorithm uses the inverse calculation of a matrix, it has higher computational complexity and is difficult to implement in hardware.
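      The point of the matrix inversion lemma is that Φ⁻¹(n) can be propagated recursively instead of being recomputed at every step. A minimal sketch of the resulting conventional RLS update (illustrative only; the function and variable names are mine, not the paper's):

```python
import numpy as np

def rls_update(P, w, u, d, lam):
    """One conventional RLS iteration: propagate P(n) = Phi^{-1}(n)
    recursively via the matrix inversion lemma instead of inverting
    Phi(n) directly."""
    u = u.reshape(-1, 1)
    k = (P @ u) / (lam + (u.T @ P @ u).item())  # gain vector
    e = d - (w.T @ u).item()                    # a priori error
    w = w + k * e                               # tap-weight update, cf. eq. (1)
    P = (P - k @ (u.T @ P)) / lam               # inverse-correlation update
    return P, w, e

# Identify a known 2-tap FIR channel h = [0.5, -0.3] from noiseless data.
rng = np.random.default_rng(1)
h = np.array([0.5, -0.3])
x = rng.normal(size=500)
P = np.eye(2) / 0.01          # delta = 0.01, as in the paper's simulations
w = np.zeros((2, 1))
for n in range(1, len(x)):
    u = np.array([x[n], x[n - 1]])
    d = h @ u
    P, w, e = rls_update(P, w, u, d, 0.89)
```

      After a few hundred iterations the estimated taps converge to the true channel; note that each iteration still costs O(M²) because of the full matrix P, which is the complexity the lattice form avoids.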

    2. RLS Lattice (RLSL) Algorithm

      We consider, in general, the pre-windowed, exponentially weighted least squares case. Let the vector of input samples at the microphone be

      u_n = [u(0), u(1), …, u(n)]^T

      Let U_{1,p,n} denote the p-dimensional space spanned by the delayed input vectors z⁻¹u_n, …, z⁻ᵖu_n up to the n-th index, and let P_{1,p,n} be the orthogonal projection operator onto that space, so that the complementary operator P⊥_{1,p,n} = I − P_{1,p,n} produces the projection error. In this notation:

      e^f_{p,n} = P⊥_{1,p,n} u_n is the p-th order forward prediction error vector on the space U_{1,p,n}, with the n-th index as the current index;

      e^b_{p,n−1} is the p-th order backward prediction error vector on the space U_{1,p,n−1}, with the (n−1)-th index as the current index.

      The (p+1)-th order forward prediction error is the projection error of u_n on the space U_{1,p+1,n}:

      e^f_{p+1,n} = P⊥_{1,p+1,n} u_n (2)

      U_{1,p+1,n} can be orthogonally decomposed into U_{1,p,n} and the space spanned by the (p+1)-th element vector z^{−(p+1)}u_n, whose orthogonal projection error on U_{1,p,n} is exactly the delayed backward error vector e^b_{p,n−1} (appended with a leading zero to match dimensions). Hence the (p+1)-th order forward error equals the p-th order forward error minus its projection on the delayed backward error:

      e^f_{p+1,n} = e^f_{p,n} − (⟨e^f_{p,n}, e^b_{p,n−1}⟩ / ‖e^b_{p,n−1}‖²) e^b_{p,n−1} (3)

      The inner product ⟨e^f_{p,n}, e^b_{p,n−1}⟩ gives the correlation between the two error vectors and is denoted by Δ_{p,n}:

      Δ_{p,n} = ⟨e^f_{p,n}, e^b_{p,n−1}⟩ (4)

      while the squared norm

      σ²_{b,p}(n−1) = ‖e^b_{p,n−1}‖² (5)

      is the backward error variance. There is a chance that this variance becomes zero, and therefore it is initialized with a small positive value in the algorithm. With (4) and (5), equation (3) transforms to

      e^f_{p+1,n} = e^f_{p,n} − (Δ_{p,n} / σ²_{b,p}(n−1)) e^b_{p,n−1} (6)

      The present component of the vector e^f_{p+1,n} comes out to be

      e^f_{p+1}(n) = e^f_p(n) − (Δ_p(n) / σ²_{b,p}(n−1)) e^b_p(n−1) (7)

      Similarly, the (p+1)-th order backward error prediction is given by

      e^b_{p+1,n} = e^b_{p,n−1} − (Δ_{p,n} / σ²_{f,p}(n)) e^f_{p,n} (8)

      where

      σ²_{f,p}(n) = ‖e^f_{p,n}‖² (9)

      is the forward error variance, which never approaches zero, since it is the squared error of projecting the present component on the past p components. The present component of the vector e^b_{p+1,n} comes out to be

      e^b_{p+1}(n) = e^b_p(n−1) − (Δ_p(n) / σ²_{f,p}(n)) e^f_p(n) (10)

      With p = 0 in (6) and (8), and comparing the vectors, e^f_{0,n} and e^b_{0,n} will be

      e^f_{0,n} = e^b_{0,n} = u_n (11)

      For p = 0, the present (last) component of the vectors e^f_{0,n} and e^b_{0,n} is e^f_0(n) and e^b_0(n) respectively, which from (11) leads to

      e^f_0(n) = e^b_0(n) = u(n) (12)

      For p ≥ 1, the structure of the RLSL filter that follows from (7), (10) and (12) is given in Fig. 1.

      Fig. 1. Schematic of the RLSL filter

      The quantities Δ_p(n), σ²_{f,p}(n) and σ²_{b,p}(n) appearing in Fig. 1 are functions of both order and time, and need to be updated accordingly. Writing the variances as inner products,

      σ²_{f,p}(n) = ⟨e^f_{p,n}, e^f_{p,n}⟩ (13)

      and, σ²_{b,p}(n) = ⟨e^b_{p,n}, e^b_{p,n}⟩ (14)

      From (13) and (14), the forward and backward error variances require a time update and an order update, respectively.

      Time update of the variance: from (5), (9) and (11),

      σ²_{f,0}(n) = σ²_{b,0}(n) = ‖u_n‖² = σ²_0(n) (15)

      where σ²_0(n) is the auto-correlation of u_n with the exponential weighting factor, or forgetting factor, λ defined on p. 437 of [1]:

      σ²_0(n) = λⁿ u²(0) + λⁿ⁻¹ u²(1) + … + λ u²(n−1) + u²(n)
      = λ [λⁿ⁻¹ u²(0) + λⁿ⁻² u²(1) + … + u²(n−1)] + u²(n)

      Hence,

      σ²_0(n) = λ σ²_0(n−1) + u²(n) (16)

      Order update of the variance: σ²_{f,p+1}(n) is the inner product of e^f_{p+1,n} with itself; substituting (6) and using (4) and (5) gives

      σ²_{f,p+1}(n) = σ²_{f,p}(n) − Δ²_{p,n} / σ²_{b,p}(n−1) (17)

      Similarly, from (8),

      σ²_{b,p+1}(n) = σ²_{b,p}(n−1) − Δ²_{p,n} / σ²_{f,p}(n) (18)

      Time update for Δ_{p,n}: consider the augmented space ⟨U_{1,p,n}, π(n)⟩, where π(n) is the pinning vector defined as

      π(n) = [0 0 … 0 1]^T

      which picks out the present (n-th) component of any vector it acts on. The projection error of π(n) on U_{1,p,n} defines the conversion factor

      γ_p(n) = π^T(n) P⊥_{1,p,n} π(n) (19)

      a quantity lying between 0 and 1. Decomposing the projection onto the augmented space and substituting into (4), the old inner product is discounted by λ and corrected through the present components of the two error vectors:

      ⟨e^f_{p,n}, e^b_{p,n−1}⟩ = λ ⟨e^f_{p,n−1}, e^b_{p,n−2}⟩ + e^f_p(n) e^b_p(n−1) / γ_p(n−1) (20)

      From (4), (19) and (20), the present value of the correlation is therefore updated in time as

      Δ_p(n) = λ Δ_p(n−1) + e^f_p(n) e^b_p(n−1) / γ_p(n−1) (21)

      The order update for γ_p(n), which accounts for the part of π(n) lying along the normalized backward error, is given by

      γ_{p+1}(n−1) = γ_p(n−1) − e²_{b,p}(n−1) / σ²_{b,p}(n−1) (22)

      The summary of the algorithm is depicted in Table 1.

      Simulation

      The primary input to the ANC is the desired signal x(n) mixed with the white Gaussian noise signal v(n). The mixed signal is passed through the adaptive filter based on the RLS Lattice algorithm, which recursively adjusts the filter coefficients to produce the noise-free output y(n) matching the desired signal x(n). The block diagram representation of the ANC is shown in Fig. 3.

      Fig. 3. Block diagram representation of the ANC

      The simulation is done on the MATLAB platform for a sinusoidal signal in the first case and for an audio signal in the second.

  3. Experimental Results

    As compared to other widely used adaptive algorithms, the RLSL algorithm gives better results, as shown in Fig. 2, which plots the Mean Square Error (MSE) against the number of iterations. The MSE is the mean of the squared difference between the output signal after filtering and the original input signal.

    In both cases, the order of the filter is P = 8, the forgetting factor is λ = 0.89, and δ = 0.01.

      1. Simulation of Sinusoidal Signal

        The input signal: x(n) = sin(0.05*pi*n); noise: v(n) = randn(1,N).

        Simulation:

        Fig. 2. Learning curves of the algorithms compared

        The figure shows that RLSL has a faster convergence rate than the other conventional algorithms, owing to the introduction of the forgetting factor and the efficient updating of the filter weights.

        Fig. 4. Simulation of the sinusoidal signal: (a) input sinusoidal signal, (b) input signal mixed with noise, (c) output of the RLSL filter, (d) output of the RLS filter
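        The sinusoidal experiment above can be reproduced in outline as follows (a sketch of the signal setup and the learning-curve measurement only; the paper's actual MATLAB code is not given, and `mse_curve` is a name of my choosing):

```python
import numpy as np

# Signal setup of the first experiment: x(n) = sin(0.05*pi*n) corrupted by
# white Gaussian noise v(n), observed over 700 iterations as in Fig. 4.
rng = np.random.default_rng(42)
N = 700
n = np.arange(N)
x = np.sin(0.05 * np.pi * n)   # desired sinusoidal signal
v = rng.normal(size=N)         # white Gaussian noise, unit variance
s = x + v                      # primary (noisy) input to the ANC

def mse_curve(reference, estimate):
    """Running mean squared error between a filter output and the clean
    signal, i.e. the quantity plotted against iteration number in Fig. 2."""
    err2 = (reference - estimate) ** 2
    return np.cumsum(err2) / np.arange(1, len(reference) + 1)

# Without any filtering the curve settles near the noise variance (about 1);
# a converging adaptive filter drives it toward zero.
unfiltered = mse_curve(x, s)
```

        Feeding the filter output in place of `s` yields the learning curves compared in Fig. 2.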

      2. Simulation of an Audio Signal


    In this experiment, an audio file is recorded from the sound card as a .wav file. The audio signal is then mixed with white Gaussian noise and passed through the RLSL algorithm based filter to get the desired output. The RLSL filter updates its coefficients until the noise is eliminated from the mixed signal.

    Simulation:

    Fig. 5. Simulation of the audio signal: (a) input audio signal, (b) input signal mixed with noise, (c) output of the RLS filter, (d) output of the RLSL filter



    Initialization. For n = 0, set:

    e^f_p(−1) = 0
    e^b_p(−1) = 0
    Δ_p(−1) = 0
    σ²_{f,p}(−1) = σ²_{b,p}(−1) = δ; such that δ > 0 and δ ≈ 0

    For n ≥ 0, repeat:

    e^f_0(n) = e^b_0(n) = u(n)
    σ²_{f,0}(n) = σ²_{b,0}(n) = λ σ²_0(n−1) + u²(n)
    γ_0(n) = 1

    For p = 0, 1, …, P − 1, repeat:

    Δ_p(n) = λ Δ_p(n−1) + e^f_p(n) e^b_p(n−1) / γ_p(n−1)
    e^f_{p+1}(n) = e^f_p(n) − (Δ_p(n) / σ²_{b,p}(n−1)) e^b_p(n−1)
    e^b_{p+1}(n) = e^b_p(n−1) − (Δ_p(n) / σ²_{f,p}(n)) e^f_p(n)
    γ_{p+1}(n−1) = γ_p(n−1) − e²_{b,p}(n−1) / σ²_{b,p}(n−1)
    σ²_{f,p+1}(n) = σ²_{f,p}(n) − Δ²_p(n) / σ²_{b,p}(n−1)
    σ²_{b,p+1}(n) = σ²_{b,p}(n−1) − Δ²_p(n) / σ²_{f,p}(n)

    Table 1: The standard RLS Lattice algorithm
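    A direct transcription of the Table 1 recursions into code looks as follows (a sketch under the stated initialization; the array names ef/eb for the errors, D for Δ_p, F/B for the error variances and g for γ_p are mine, not the paper's, and the γ order update is evaluated at time n for use at time n+1, which is the same identity):

```python
import numpy as np

def rlsl_predict(u, P, lam=0.89, delta=0.01):
    """Per-sample RLS-Lattice prediction recursions of Table 1.
    Returns the order-P forward prediction error e^f_P(n) for every n."""
    D = np.zeros(P)             # Delta_p(n-1), partial correlations
    eb_prev = np.zeros(P)       # e^b_p(n-1)
    B_prev = np.full(P, delta)  # sigma^2_{b,p}(n-1), initialized with delta > 0
    g_prev = np.ones(P)         # gamma_p(n-1), conversion factors
    sigma0 = delta              # sigma^2_0(n-1)
    out = np.zeros(len(u))
    for n, un in enumerate(u):
        sigma0 = lam * sigma0 + un * un   # time update of the variance
        ef, eb = un, un                   # zeroth-order errors equal the input
        F, B, g = sigma0, sigma0, 1.0     # gamma_0(n) = 1
        eb_cur = np.zeros(P)
        B_cur = np.zeros(P)
        g_cur = np.ones(P)
        for p in range(P):
            eb_cur[p], B_cur[p], g_cur[p] = eb, B, g
            # time update of the partial correlation
            D[p] = lam * D[p] + ef * eb_prev[p] / g_prev[p]
            # order updates of the forward and backward errors
            ef_next = ef - (D[p] / B_prev[p]) * eb_prev[p]
            eb_next = eb_prev[p] - (D[p] / F) * ef
            # order updates of the error variances
            F_next = F - D[p] ** 2 / B_prev[p]
            B_next = B_prev[p] - D[p] ** 2 / F
            # order update of the conversion factor
            g_next = g - eb * eb / B
            ef, eb, F, B, g = ef_next, eb_next, F_next, B_next, g_next
        eb_prev, B_prev, g_prev = eb_cur, B_cur, g_cur
        out[n] = ef
    return out

# Example: an AR(1) process u(n) = 0.9 u(n-1) + w(n) is largely predictable,
# so the lattice prediction error power falls well below the input power.
rng = np.random.default_rng(3)
w = rng.normal(size=1000)
u = np.zeros(1000)
for i in range(1, 1000):
    u[i] = 0.9 * u[i - 1] + w[i]
e = rlsl_predict(u, 3)
```

    Note the order-recursive property claimed in the introduction: a (p+1)-th stage consumes only the p-th stage errors, so raising the filter order adds a loop iteration without touching the earlier stages.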


    Fig. 6. Mean squared ensemble average for the RLS filter


    Fig. 7. Mean squared ensemble average for the RLSL filter

    The RLSL filter gives exact convergence, as demonstrated in Fig. 7, and hence its output matches the desired signal closely. The SNR (dB) calculated at the input and output of the RLS and RLSL filters is given below.

    Filter   Input SNR (dB)   Output SNR (dB)
    RLS      24.1894          18.1987
    RLSL     24.1894          21.4204

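    The SNR figures above can be computed from the clean reference x(n) and a processed signal y(n) in the standard way (a sketch; `snr_db` is my helper name, not from the paper):

```python
import numpy as np

def snr_db(x, y):
    """Signal-to-noise ratio of y in dB, treating its deviation from the
    clean reference x as noise: 10*log10(P_signal / P_noise)."""
    noise = y - x
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum(noise ** 2))

# Sanity check: a constant signal of power 1 with a constant offset of 0.1
# (noise power 0.01) gives 10*log10(1/0.01) = 20 dB.
x = np.ones(10)
y = x + 0.1
```

    For the table above, x would be the clean audio, and y either the noisy primary input (input SNR) or the filter output (output SNR).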

  4. Conclusion

In this research work, we developed the lattice form of the RLS adaptive filter based on the vector space treatment. The main conclusion is that the standard lattice recursions of Table 1 represent the only numerically reliable algorithm. The algorithm demonstrates the importance of using the newly updated iterations for the sensitivity of the noise estimation performance through the forgetting factor introduced above; this has been verified by several simulations in finite precision, and the performance has been compared with other algorithms.

Acknowledgement

The authors are thankful for the incessant support and motivation from the Department of Electronics and Communication Engineering at NIT Jamshedpur. The contribution and guidance provided by the Head and supervisor deserve a special mention.

References

  1. Simon Haykin, "Order-recursive adaptive filters," in Adaptive Filter Theory, 4th ed., India, pp. 536-563.

  2. B. Farhang-Boroujeny, "Lattice filters," in Adaptive Filters, USA, pp. 357-410.

  3. Glenn E. Johnson, Robert A. Muir, Joseph M. Scanlan, and William M. Steedly, "Practical comparison of adaptive filter algorithms," Proc. IEEE, pp. 259-263, 1995.

  4. J. R. Bunch, R. C. Le Borne, and I. K. Proudler, "Tracking ill-conditioning for the RLS-lattice algorithms," IEE Proc.-Vis. Image Signal Process., vol. 145, pp. 1-5, February 1998.

  5. R. B. Wallace and R. A. Goubran, "Noise cancellation using parallel adaptive filters," IEEE Transactions on Circuits and Systems-II: Analog and Digital Signal Processing, vol. 39, pp. 239-242, April 1992.

  6. Md. Zulfiquar Ali Bhotto and Andreas Antoniou, "Robust recursive least-squares adaptive filtering for impulsive-noise environments," IEEE Signal Processing Letters, vol. 18, pp. 185-188, March 2011.

  7. Ma Shengqian, Xu Guowei, Ma Zhifeng, Wei Shuping, and Fan Manhong, "Research on adaptive noise canceller of an improved LMS algorithm," Proc. IEEE, pp. 1611-1614, 2011.
