Adaptive Filter Analysis for System Identification Using Various Adaptive Algorithms

DOI : 10.17577/IJERTV1IS3042


Ms. Kinjal Rasadia, Dr. Kiran Parmar

Abstract: This paper analyzes various adaptive algorithms, such as LMS, NLMS, leaky LMS, sign-sign, sign-error, and RLS, for system identification. The problem of obtaining a model of a system from input and output measurements is called the system identification problem. Using an adaptive filter, we can find a mathematical model of an unknown system based on input and output measurements, and we analyze the different parameters of each algorithm, such as filter order, step size, leakage factor, normalized step size, and forgetting factor. It has been found that RLS converges faster than the other algorithms, but for practical implementation LMS is better: its complexity is lower than that of RLS because it requires fewer floating-point operations. As the filter order increases, the magnitude response of the adaptive filter approaches that of the unknown system, and the mean square error is also reduced.

Index Terms: Convergence speed, least mean square (LMS), mean square error (MSE), normalized LMS (NLMS), system identification

  1. INTRODUCTION

Digital signal processing systems are attractive due to their low cost, reliability, accuracy, small physical size, and flexibility. The coefficients of an adaptive filter are continuously and automatically adapted to the given signal in order to obtain the desired response and improve performance.

Figure 1. Adaptive filter configuration

Figure 1 shows the basic adaptive filter configuration, where x(k) is the input signal, y(k) is the filter output, d(k) is the desired signal, and e(k) is the error signal. The main objective of the adaptive filter is to minimize the error signal. Here an FIR filter structure and different algorithmic methods are used to represent the complete adaptive filter specification. Three main specifications are required for designing an adaptive filter: the algorithm, the filter structure, and the application. There are a number of possible structures, but the FIR filter structure is widely used because of its stability. Adaptive filters have been successfully applied in such diverse fields as communications, radar, sonar, seismology, and biomedical engineering. Although these applications are quite different in nature, they have one basic common feature: an input vector and a desired response are used to compute an estimation error, which is in turn used to control the values of a set of adjustable filter coefficients. However, the essential difference between the various applications of adaptive filtering arises in the manner in which the desired response is extracted.

  2. ALGORITHMS

The algorithm is the procedure used to adjust the adaptive filter coefficients in order to minimize a prescribed criterion, i.e., the error signal. Most reported developments and applications use the FIR filter with the LMS algorithm because it is relatively simple to design and implement. Many adaptive algorithms can be viewed as approximations of the Wiener filter. As shown in Figure 1, the adaptive algorithm uses the error signal

e(k) = d(k) - y(k) (1)

to update the filter coefficients in order to minimize a predetermined criterion. The most widely used criterion, the mean square error (MSE), is defined as

ξ = E[e²(k)] (2)

The most widely used algorithm is LMS (least mean square), because it is relatively simple to design and implement. There is a set of LMS-type algorithms obtained by modifying the LMS algorithm [5]. The motivation for each is a practical consideration such as faster convergence, simplicity of implementation, or robustness of operation. The mean square error behavior, convergence, and steady-state analysis of different adaptive algorithms are analyzed in [2]-[4]. The LMS algorithm requires only 2L multiplications and additions and is the most efficient adaptive algorithm in terms of computation and storage requirements. Its complexity is much lower than that of other adaptive algorithms such as the Kalman and recursive least squares algorithms.

    1. LMS Algorithm

The LMS algorithm estimates the gradient vector using instantaneous values. It changes the filter tap weights so that e(k) is minimized in the mean-square sense. The conventional LMS algorithm is a stochastic implementation of the steepest descent algorithm:

e(k) = d(k) - wᵀ(k) x(k) (3)

The coefficient update equation is

w(k+1) = w(k) + 2µ x(k) e(k), (4)

where µ is an appropriate step size, to be chosen as 0 < µ < 0.2 for convergence of the algorithm. Larger step sizes make the coefficients fluctuate wildly and eventually become unstable [6].
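As an illustration, the LMS recursion of Eqs. (3)-(4) can be sketched in Python (a minimal sketch, not the authors' MATLAB code; the function name and default values are ours):

```python
import numpy as np

def lms(x, d, m=4, mu=0.01):
    """Adapt an m-tap FIR filter with the LMS rule of Eq. (4):
    w(k+1) = w(k) + 2*mu*x(k)*e(k)."""
    w = np.zeros(m)                      # filter weights w(k)
    e = np.zeros(len(x))                 # error signal e(k)
    for k in range(m, len(x)):
        xk = x[k - m + 1:k + 1][::-1]    # tap vector: m most recent samples
        y = w @ xk                       # filter output y(k)
        e[k] = d[k] - y                  # e(k) = d(k) - y(k), Eq. (3)
        w = w + 2 * mu * xk * e[k]       # coefficient update, Eq. (4)
    return w, e
```

Running this on white noise filtered by a known FIR system drives w toward that system's impulse response, which is exactly the system identification setup of Section 3.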

The most important members of the family of simplified LMS algorithms are:

    2. Normalized LMS (NLMS) Algorithm

      The normalized LMS algorithm is expressed as

w(k+1) = w(k) + 2µ(k) e(k) x(k), (5)

µ(k) = α / ((m+1) Pₓ(k)), (6)

where µ(k) is the time-varying step size normalized by the filter length L = m+1 and the power Pₓ(k) of the signal x(k), and 0 < α < 1 [3]-[4].
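A corresponding sketch of the NLMS recursion of Eqs. (5)-(6); estimating Pₓ(k) from the current tap vector is one plausible choice (the paper does not specify the power estimator), and the helper name and defaults are ours:

```python
import numpy as np

def nlms(x, d, m=4, alpha=0.5, eps=1e-8):
    """NLMS, Eqs. (5)-(6): time-varying step size mu(k) = alpha / ((m+1)*Px(k)),
    with Px(k) estimated here from the current tap vector."""
    w = np.zeros(m)
    e = np.zeros(len(x))
    for k in range(m, len(x)):
        xk = x[k - m + 1:k + 1][::-1]
        px = np.mean(xk ** 2) + eps      # input power estimate Px(k); eps avoids /0
        mu_k = alpha / ((m + 1) * px)    # normalized step size, Eq. (6)
        e[k] = d[k] - w @ xk
        w = w + 2 * mu_k * e[k] * xk     # coefficient update, Eq. (5)
    return w, e
```

Because the step size scales inversely with the input power, NLMS keeps its convergence rate roughly independent of the input signal level.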

    3. Leaky LMS Algorithm

Insufficient spectral excitation of the input of the LMS algorithm may result in divergence of the adaptive algorithm. Divergence may be avoided by using a leaky mechanism during the coefficient adaptation process. The leaky LMS algorithm is expressed as

w(k+1) = v w(k) + µ e(k) x(k), (7)

where v is the leakage factor, with 0 ≪ v < 1.
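A sketch of the leaky update of Eq. (7); the leakage value v = 0.9999 is an illustrative choice, not one taken from the paper:

```python
import numpy as np

def leaky_lms(x, d, m=4, mu=0.05, v=0.9999):
    """Leaky LMS, Eq. (7): w(k+1) = v*w(k) + mu*e(k)*x(k).
    The leakage v pulls the weights toward zero, preventing unbounded
    drift when the input lacks sufficient spectral excitation."""
    w = np.zeros(m)
    e = np.zeros(len(x))
    for k in range(m, len(x)):
        xk = x[k - m + 1:k + 1][::-1]
        e[k] = d[k] - w @ xk
        w = v * w + mu * e[k] * xk       # leaky coefficient update, Eq. (7)
    return w, e
```

The price of leakage is a small bias: with v < 1 the weights settle slightly below the true system coefficients, which is consistent with the excess MSE of leaky LMS observed in the simulations.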

    4. Signed LMS algorithm

This algorithm is obtained from the conventional LMS recursion by replacing e(k) with its sign. This leads to the following recursion:

w(k+1) = w(k) + µ x(k) sgn{e(k)} (8)

    5. Signed-Regressor Algorithm (SRLMS)

The signed-regressor algorithm is obtained from the conventional LMS recursion by replacing the tap-input vector x(k) with the vector sgn{x(k)}. Consider a signed-regressor LMS-based adaptive filter that processes an input signal x(k) and generates the output y(k) according to the following:

w(k+1) = w(k) + µ sgn{x(k)} e(k) (9)

    6. Sign Sign Algorithm (SSLMS)

This can be obtained by combining the signed-regressor and sign recursions, resulting in the following recursion:

w(n+1) = w(n) + µ sgn{x(n)} sgn{e(n)} (10)
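The three sign recursions differ only in where the sign is taken, so they can be sketched in one hypothetical helper (the variant names and the explicit step size µ are our choices for illustration):

```python
import numpy as np

def sign_lms(x, d, m=4, mu=0.01, variant="sign_error"):
    """Sign LMS family: replace e(k), x(k), or both by their signs."""
    w = np.zeros(m)
    e = np.zeros(len(x))
    for k in range(m, len(x)):
        xk = x[k - m + 1:k + 1][::-1]
        e[k] = d[k] - w @ xk
        if variant == "sign_error":                    # Eq. (8)
            w = w + mu * xk * np.sign(e[k])
        elif variant == "sign_data":                   # Eq. (9), signed regressor
            w = w + mu * np.sign(xk) * e[k]
        else:                                          # Eq. (10), sign-sign:
            w = w + mu * np.sign(xk) * np.sign(e[k])   # no multiplications needed
    return w, e
```

In the sign-sign case every update term is ±µ, so a hardware implementation reduces to shifts and additions, which is the VLSI/ASIC motivation discussed in Section 4.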

7. Recursive Least Squares (RLS)

The RLS method typically converges much faster than the LMS method, but at the cost of more computational effort per iteration. Derivations of these results can be found in the reference books [7]-[9]. Unlike the LMS method, which asymptotically approaches the optimal weight vector using a gradient-based search, the RLS method attempts to find the optimal weights at each iteration. The expression for the RLS method is

w(k) = R⁻¹(k) r(k), (11)

where R(k) = Σ λ^(k-i) x(i)xᵀ(i) + δλᵏI (summed over i = 0, …, k) is the exponentially weighted autocorrelation estimate and r(k) = Σ λ^(k-i) d(i)x(i) is the corresponding cross-correlation vector.

The design parameters associated with the RLS method are the forgetting factor, 0 < λ ≤ 1; the regularization parameter, δ > 0; and the transversal filter order, m ≥ 0. The required filter order depends on the application.
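A sketch of the textbook RLS recursion via the matrix inversion lemma, consistent with the design parameters above (forgetting factor λ, regularization δ); this is the standard form, not necessarily the authors' exact implementation:

```python
import numpy as np

def rls(x, d, m=4, lam=0.99, delta=0.01):
    """Textbook RLS via the matrix inversion lemma:
    lam is the forgetting factor, delta the regularization parameter."""
    w = np.zeros(m)
    P = np.eye(m) / delta                      # inverse correlation estimate
    e = np.zeros(len(x))
    for k in range(m, len(x)):
        xk = x[k - m + 1:k + 1][::-1]
        g = P @ xk / (lam + xk @ P @ xk)       # gain vector
        e[k] = d[k] - w @ xk                   # a priori error
        w = w + g * e[k]                       # step toward the LS optimum
        P = (P - np.outer(g, xk @ P)) / lam    # inverse-correlation update
    return w, e
```

Each iteration costs O(m²) operations versus O(m) for LMS, which is the complexity gap discussed in the conclusion.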

  3. ADAPTIVE FILTER APPLICATION: SYSTEM IDENTIFICATION

Mathematical models of physical phenomena allow analysis and design techniques to be applied effectively to practical problems. In many instances, a mathematical model can be developed using underlying physical principles and an understanding of the components of the system and how they are interconnected. In some cases, however, this approach is less effective, because the physical system or phenomenon is too complex and is not well understood. In these cases, we have to build the mathematical model based on measurements of the input and output. Typically, we assume that the unknown system can be modelled as a linear time-invariant system. The problem of obtaining a model of a system from input and output measurements is called the system identification problem [9].

Adaptive filters are highly effective for performing system identification using the configuration shown in Figure 2.

Figure 2. System identification

To illustrate the algorithms, consider the system identification problem shown in Figure 2. Let the system to be identified have the following transfer function:

    H(z)=


Here the input x(k) consists of N = 1000 samples of white noise uniformly distributed over [-1, 1]. The effectiveness of the adaptive filter can be assessed by comparing the magnitude response of the system, H(z), with the magnitude response of the adaptive filter, W(z), using the final steady-state weights, w(N-1). Note that this holds in spite of the fact that H(z) is an IIR filter with six poles and six zeros, while the steady-state adaptive filter is an FIR filter with different specifications [10].
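The experiment can be sketched as follows. Since the coefficients of H(z) are not recoverable from the extracted text, a short hypothetical FIR system stands in for the unknown plant; everything else (N = 1000 uniform white-noise samples, an adaptive FIR filter driven by LMS) follows the setup described above:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
x = rng.uniform(-1.0, 1.0, N)      # white noise input, uniform over [-1, 1]

# Hypothetical stand-in for the unknown system H(z): the paper's six-pole,
# six-zero transfer function is not given here, so a short FIR system is
# used purely for illustration.
h = np.array([1.0, 0.6, -0.4, 0.2])
d = np.convolve(x, h)[:N]          # desired signal d(k) = unknown-system output

m = 8                              # adaptive FIR filter length
mu = 0.02                          # LMS step size
w = np.zeros(m)
mse = np.zeros(N)
for k in range(m, N):
    xk = x[k - m + 1:k + 1][::-1]  # current tap vector
    e = d[k] - w @ xk              # error against the unknown system's output
    w = w + 2 * mu * e * xk        # LMS update
    mse[k] = e ** 2

# After adaptation the leading taps of w approximate h, i.e. the adaptive
# filter has identified the (stand-in) unknown system.
```

Plotting mse against k reproduces the kind of learning curves shown in Figure 3, and comparing the frequency responses of w and h reproduces the magnitude-response comparison described above.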

  4. SIMULATION RESULT

This section presents the results of simulations using MATLAB to investigate the performance behaviour of the various adaptive algorithms. The principal means of comparison is the steady-state error of each algorithm, which depends on parameters such as the step size, the filter length, and the number of iterations, and which indicates how well the unknown system is identified. Here the system is identified using different adaptive algorithms: LMS, NLMS, leaky LMS, sign data LMS, sign error LMS, sign sign LMS, and RLS. All simulation plots are averaged over 500 independent runs with filter order m = 50.


Figure 3. Plots of MSE using (a) the LMS method, (b) the NLMS method, and (c) the leaky LMS method, with µ = 0.01. (continued)

From the simulation results shown in Figure 3 we see that NLMS converges faster than LMS, and that leaky LMS behaves much like LMS but has a higher excess MSE. The recursion of the sign-sign LMS algorithm requires no multiplications.


The sign sign LMS and sign error LMS methods are not useful for DSP filter applications. These simplified LMS variants are designed for VLSI or ASIC implementation to save multiplications, and are used in adaptive differential pulse code modulation for speech compression. However, when such an algorithm is implemented on a DSP processor with a pipelined architecture and parallel hardware multipliers, the throughput is lower than that of the standard LMS algorithm, because the determination of signs can break the instruction pipeline and therefore severely reduce the execution speed.


Figure 4 shows plots of the convergence speed of the different adaptive algorithms. We can see that the RLS method converges faster than the other methods, while the sign sign and sign error methods take far more time and samples to converge to the minimum MSE. The results in [1] show that the performance of the signed data LMS algorithm is superior to that of the conventional LMS algorithm, while the performance of signed LMS and sign-sign LMS based realizations is comparable to that of LMS-based filtering techniques in terms of signal-to-noise ratio and computational complexity.



Figure 3 (continued). Plots of MSE using (d) the sign data LMS method, (e) the sign error LMS method, (f) the sign sign LMS method, and (g) the RLS method, with µ = 0.01.


Figure 4. Plots of convergence speed using (a) the LMS method and (b) the NLMS method, with µ = 0.01. (continued)



Figure 4 (continued). Plots of convergence speed using (c) the leaky LMS method, (d) the sign data LMS method, (e) the sign error LMS method, (f) the sign sign LMS method, and (g) the RLS method, with µ = 0.01.

Table 1. MSE and convergence time C (in samples) for two step sizes

Method            MSE (µ=0.01)   C (µ=0.01)   MSE (µ=0.004)   C (µ=0.004)
LMS               0.0870         450          0.1967          900
NLMS              0.0170         400          0.0170          400
Leaky LMS         0.0896         600          0.2076          1000
Sign data LMS     0.0630         400          0.1257          700
Sign error LMS    0.6216         2300         0.8309          4000
Sign sign LMS     0.4732         1500         0.7469          3000
RLS               1.4443e-004    30           1.4443e-004     30


Table 1 shows the relation between the MSE and the convergence time for the different algorithms using two values of µ. It shows that for a small value of µ the measured MSE is higher and the convergence time is also higher.

    M = µ (L+1) P(x) (12)


where M is the misadjustment factor, P(x) is the power of the input signal, and L indicates the filter length.

Table 2. Misadjustment M for two step sizes

µ        M
0.01     0.17
0.004    0.068

Table 2 shows the relation between the excess MSE and the step size, where M indicates the misadjustment factor. If the step size is higher, M is also higher. This means that, after convergence to the minimum MSE, an excess MSE is still present due to the noisy gradient estimate, and the error may not be zero at the minimum MSE. So there is always a tradeoff between convergence speed and steady-state accuracy.
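The Table 2 values follow directly from Eq. (12): for white noise uniform on [-1, 1], P(x) = E[x²] = 1/3, and taking L = 50 (the simulation filter order, so L+1 = 51 taps), a quick check reproduces both entries:

```python
# Misadjustment, Eq. (12): M = mu * (L + 1) * P(x).
# For white noise uniform on [-1, 1], P(x) = E[x^2] = 1/3; the simulations
# use filter order L = 50, i.e. L + 1 = 51 taps.
def misadjustment(mu, L=50, px=1.0 / 3.0):
    return mu * (L + 1) * px

print(round(misadjustment(0.01), 3))     # 0.17, matching Table 2
print(round(misadjustment(0.004), 3))    # 0.068, matching Table 2
```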

  5. CONCLUSION

We have studied and analyzed different adaptive algorithms for system identification. The LMS algorithm is useful for practical implementation. The RLS method is faster than the LMS methods but requires a larger number of floating-point operations: for LMS, m (50) flops are required, while for RLS, 3m² (7500) flops are required. The normalized LMS, leaky LMS, sign data, sign error, and sign sign LMS methods are modified versions of the LMS method, used according to the requirements of the application. The sign error and sign sign LMS methods have larger MSE and take too much time to converge. There is always a tradeoff between convergence speed and steady-state accuracy.

References

  1. Mohammad Zia Ur Rahman, Rafi Ahamed Shaik, and D. V. Rama Koti Reddy, "Noise Cancellation in ECG Signals using Computationally Simplified Adaptive Filtering Techniques: Application to Biotelemetry," Signal Processing: An International Journal, 3(5), November 2009.

  2. Allan Kardec Barros and Noboru Ohnishi, "MSE Behavior of Biomedical Event-Related Filters," IEEE Transactions on Biomedical Engineering, 44(9), September 1997.

  3. Ahmed I. Sulyman and Azzedine Zerguine, "Convergence and Steady-State Analysis of a Variable Step-Size Normalized LMS Algorithm," IEEE, 2003.

  4. S. C. Chan, Z. G. Zhang, Y. Zhou, and Y. Hu, "A New Noise-Constrained Normalized Least Mean Squares Adaptive Filtering Algorithm," IEEE, 2008.

  5. Syed Zahurul Islam, Syed Zahidul Islam, Razali Jidin, and Mohd. Alauddin Mohd. Ali, "Performance Study of Adaptive Filtering Algorithms for Noise Cancellation of ECG Signal," IEEE, 2009.

  6. Syed Zahurul Islam, Syed Zahidul Islam, Razali Jidin, and Mohd. Alauddin Mohd. Ali, "Performance Study of Adaptive Filtering Algorithms for Noise Cancellation of ECG Signal," IEEE, 2009.

  7. Moonen and Proudler, An Introduction to Adaptive Signal Processing, McGraw-Hill, second edition, 2000.

  8. S. Haykin, Adaptive Filter Theory, 4th edition, Prentice Hall, 2002.

  9. Sen M. Kuo and Woon-Seng Gan, Digital Signal Processors, 2005.

  10. Robert J. Schilling and Sandra L. Harris, Fundamentals of Digital Signal Processing, 2009.

Ms. Kinjal N. Rasadia is currently pursuing her M.E. at L.D. College of Engineering & Technology, Ahmedabad. E-mail: kinj_rasadia@yahoo.com

Co-author Dr. Kiran Parmar is currently working as an Associate Professor and Head of the EC Engineering Department, L.D. College, Ahmedabad.
