An Affine Projection Algorithm with Variable Step Size for Channel Equalization

DOI: 10.17577/IJERTV2IS111022




Sanjay Kumar Singh Electronics Engineering

Board of Technical Education, Lucknow Uttar Pradesh, India.

Abstract: Recent digital transmission systems demand channel equalizers with short training time and a high tracking rate. The purpose of an adaptive equalizer is to operate on the channel output so that the cascade connection of the channel and the equalizer approximates an ideal transmission medium. The adaptive filtering algorithm employed in the equalizer design suffers from the trade-off between convergence speed and misadjustment. In this paper, variants of the affine projection algorithm (APA) providing fast convergence and minimum misadjustment are presented. Simulation results are provided to corroborate the analytical results.

Index Terms: Affine projection algorithm (APA), channel equalization, least mean square (LMS) algorithm, least squares (LS) algorithm, normalized LMS (NLMS).

  1. INTRODUCTION

    One of the most important advantages of digital transmission systems is their higher reliability in noisy environments compared to their analog counterparts. Unfortunately, digital transmission suffers from intersymbol interference (ISI), where the transmitted pulses are smeared out so that the pulses corresponding to different symbols are no longer separable. To solve this problem, equalizers are designed so that the bit error rate (BER) is low and the signal-to-noise ratio (SNR) is high.

    Since the channel's transfer function may be stationary or non-stationary, adaptive [1, 2] equalizers are mostly exploited. An adaptive equalizer is an equalization filter that automatically adapts to the time-varying properties of the communication channel. It is a filter that self-adjusts its coefficients according to an optimizing algorithm.

    The rest of the paper is organized as follows. Section 2 describes the basic concept of the transversal equalizer. In Section 3 the conventional affine projection algorithm is discussed. Section 4 describes the variants of the APA. Section 5 provides the experimental results, and Section 6 presents the conclusions.

  2. CHANNEL EQUALIZATION

    The intersymbol interference (ISI) imposes an obstacle to achieving increased digital transmission rates with the required accuracy. The ISI problem is resolved by channel equalization. The channel parameters are not known in advance, and moreover they may vary with time. Hence it is necessary to use adaptive equalizers, which provide the means of tracking the channel characteristics. The following figure shows a diagram of a channel equalization system.

    Fig. 1. Adaptive equalizer in a chain of the transmission system

    The source block transmits QPSK symbols x_k = ±1 ± j1 (k = 1, …, K) with equal probability. The total number of transmitted symbols is denoted by K. The channel block introduces ISI using a finite impulse response (FIR) channel model. At the channel output, a noise sequence n_k is added. This noise is assumed to be additive white Gaussian noise (AWGN) with variance σ_n². The sum of the channel output and the noise sequence forms the received signal r_k, which is fed into the equalizer block. Finally, the equalizer output q_k is fed to the slicer to obtain the estimate z_k of the transmitted data symbol x_k. The equalizer block performs the equalization of the channel. Different performance criteria can be utilized in equalization. For the mean square error (MSE) criterion, the LMS update of the equalizer coefficient vector is given by

    h_{k+1} = h_k + 2µ e_k r_k

    where r_k = [r_k, r_{k-1}, …, r_{k-(N-1)}]^T is the input vector, h_k is the weight vector, e_k is the error signal, N is the number of adaptive filter coefficients, and µ is the step size parameter. The step size µ controls the adaptation speed of the adaptive filter.
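As a minimal sketch of this update (real-valued signals for simplicity; the function name and structure are ours, not the paper's):

```python
import numpy as np

def lms_step(h, r_vec, d, mu):
    """One LMS iteration: filter the input, form the error, update the taps.

    h     -- current tap vector h_k (length N)
    r_vec -- input vector r_k = [r_k, r_{k-1}, ..., r_{k-(N-1)}]
    d     -- reference symbol (x_{k-D} in training mode)
    mu    -- step size controlling adaptation speed
    """
    q = np.dot(h, r_vec)         # equalizer output q_k
    e = d - q                    # error signal e_k
    h = h + 2 * mu * e * r_vec   # tap update h_{k+1} = h_k + 2*mu*e_k*r_k
    return h, e
```

Repeating this step over the training sequence drives the error toward zero when µ is small enough for stability.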

    In order to implement the adaptive equalizer, we need to generate a reference signal for the adaptive algorithm. For the initial adaptation of the filter coefficients, the receiver must be able to generate a replica of the transmitted data sequence. This known sequence is referred to as the training sequence. During the training period the desired signal is used as the reference signal, and the error signal is defined as e_k = x_{k-D} - q_k (see Fig. 1). After the training period, equalization can be performed in a decision-directed manner. This mode can be utilized if the channel varies slowly enough that the decisions at the slicer output are correct most of the time, so the slicer decisions can be used as the reference signal. In the decision-directed mode, the error signal is defined as e_k = z_k - q_k (see Fig. 1). The mean square error (MSE) [23] for the filter at the kth time instant is defined as

    MSE_k = E[|e_k|²]
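The two error definitions above can be sketched as follows (an illustrative sketch; the helper names are ours):

```python
def qpsk_slicer(q):
    """Nearest QPSK symbol (±1 ± j1) to the equalizer output q_k."""
    return (1.0 if q.real >= 0 else -1.0) + 1j * (1.0 if q.imag >= 0 else -1.0)

def training_error(x_delayed, q):
    """Training mode: e_k = x_{k-D} - q_k, using the known training symbol."""
    return x_delayed - q

def decision_directed_error(q):
    """Decision-directed mode: e_k = z_k - q_k, using the slicer decision."""
    return qpsk_slicer(q) - q
```

Note that the two modes coincide whenever the slicer decision equals the true delayed symbol, which is why decision-directed operation works once the equalizer is roughly converged.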

  3. AFFINE PROJECTION ALGORITHM

    The algorithm based on the least squares (LS) solution is a member of the family of zero-forcing algorithms (ZFA). A ZFA may achieve convergence of the adaptive filter in M iterations (M is the filter length) by solving the M×M full-rank system of equations. There is another member of the ZFA family, which is based on partial-rank filtering equations. This algorithm is known as the affine projection algorithm (APA) [15]; it is also called the partial-rank algorithm. When the APA provides the full-rank solution, it becomes equivalent to the LS algorithm and may converge in M iterations. The APA provides slower convergence [16] with lower projection orders (partial-rank solutions), and the convergence speed is also highly dependent on the correlation of the input process. The affine projection algorithm with projection order one is equivalent to the normalized LMS (NLMS) algorithm [19].

    Fig. 2. Adaptive scheme and signals involved using APA

    Consider data {d(n)} that arise from the model

    d(n) = u(n) w_o + v(n)    (1)

    where w_o is an unknown column vector that we wish to estimate, v(n) accounts for measurement noise, and u(n) denotes the 1×M row input (regressor) vector. Let w(n) be an estimate for w_o at iteration n. The affine projection algorithm computes w(n) via

    w(n+1) = w(n) + µ U*(n)[U(n)U*(n)]^{-1} e(n)    (2)

    where

    U(n) = [u(n) u(n-1) … u(n-P+1)]^T
    d(n) = [d(n) d(n-1) … d(n-P+1)]^T    (3)

    and e(n) is a vector of size P×1 given by

    e(n) = d(n) - U(n)w(n)    (4)

    Here P is the projection order, which is the number of input vectors, and µ is the step size. Both P and µ affect the performance of the APA.

  4. VARIANTS OF AFFINE PROJECTION ALGORITHM

    The low-cost APA family [19] includes the fast affine projection algorithm (FAP) [24], the Gauss-Seidel pseudo affine projection algorithm (PAP), and the low-complexity dichotomous coordinate descent (DCD) based APA. The variable step size (VSS) based APAs are discussed below.

    1. Optimal variable step size affine projection algorithm

      The update recursion (2) can be written in terms of the weight-error vector w̃(n) = w_o - w(n) as

      w̃(n+1) = w̃(n) - µ U*(n)[U(n)U*(n)]^{-1} e(n)    (5)

      Squaring both sides and taking expectations, we find that the mean square deviation (MSD) [2] satisfies

      E‖w̃(n+1)‖² = E‖w̃(n)‖² - 2µ Re E[e*(n)(U(n)U*(n))^{-1} U(n)w̃(n)] + µ² E[e*(n)(U(n)U*(n))^{-1} e(n)]
                  ≜ E‖w̃(n)‖² - ∆(µ)    (6)

      If we choose µ such that ∆(µ) is maximized, then this choice guarantees that the MSD will undergo the largest decrease from iteration n to n+1. Maximizing

      ∆(µ) = 2µ Re E[e*(n)(U(n)U*(n))^{-1} U(n)w̃(n)] - µ² E[e*(n)(U(n)U*(n))^{-1} e(n)]

      with respect to µ leads to the optimum step size

      µ_o(n) = Re{E[e*(n)(U(n)U*(n))^{-1} U(n)w̃(n)]} / E[e*(n)(U(n)U*(n))^{-1} e(n)]    (7)

      Assuming the noise sequence v(n) is independent and identically distributed and statistically independent of the regression data {U(n)}, and neglecting the dependency of w̃(n) on past noises, µ_o(n) is approximated as

      µ_o(n) = E[w̃*(n)U*(n)(U(n)U*(n))^{-1}U(n)w̃(n)] / ( E[w̃*(n)U*(n)(U(n)U*(n))^{-1}U(n)w̃(n)] + σ_v² Tr E[(U(n)U*(n))^{-1}] )    (8)

      Observe that U*(n)(U(n)U*(n))^{-1}U(n) is a projection matrix onto R(U*(n)), the range space of U*(n). Let

      p(n) = U*(n)(U(n)U*(n))^{-1} U(n) w̃(n)    (9)

      which is the projection of w̃(n) onto R(U*(n)). Since

      ‖p(n)‖² = w̃*(n)U*(n)(U(n)U*(n))^{-1}U(n)w̃(n)    (10)

      the optimum step size in (8) becomes

      µ_o(n) = E‖p(n)‖² / ( E‖p(n)‖² + σ_v² Tr E[(U(n)U*(n))^{-1}] )    (11)

      In calculating this µ_o(n), however, the major obstacle is that p(n) is not available during adaptation, since w_o is unknown.

    2. Variable step size affine projection algorithm

      When v(n) = 0, p(n) = U*(n)(U(n)U*(n))^{-1} e(n), and even with white noise it holds under expectation that

      E[p(n)] = E[U*(n)(U(n)U*(n))^{-1} e(n)]    (12)

      Motivated by these facts, we estimate p(n) by time averaging as follows:

      p̂(n) = α p̂(n-1) + (1-α) U*(n)(U(n)U*(n))^{-1} e(n)    (13)

      with a smoothing factor α (0 < α < 1). Using ‖p̂(n)‖² instead of E‖p(n)‖² in (11), the VSS-APA becomes

      w(n+1) = w(n) + µ(n) U*(n)[U(n)U*(n)]^{-1} e(n)
      µ(n) = µ_max ‖p̂(n)‖² / ( ‖p̂(n)‖² + C )    (14)

      where C is a positive constant. From (11) and (14), we see that C is related to σ_v² Tr E[(U(n)U*(n))^{-1}], and this quantity can be approximated as K/SNR.

      When ‖p̂(n)‖² is large, w(n) is far from w_o and µ(n) tends to µ_max. On the other hand, when ‖p̂(n)‖² is small, w(n) approaches w_o and the step size is small. Thus, depending on ‖p̂(n)‖², µ(n) varies between 0 and µ_max. To guarantee filter stability [14], µ_max is chosen less than 2.

    3. Optimal variable step size APA with forgetting factor

      Now we introduce a forgetting factor into the pseudo-inverse projection matrix, resulting in a marked convergence enhancement. The input matrix at time n can be described as

      U_{[k+1, l+1]}(n) = u(n-k-l),  k = 0, 1, …, K-1;  l = 0, 1, …, L-1    (15)

      By introducing a forgetting factor λ, 0 < λ ≤ 1,

      U'_{[k+1, l+1]}(n) = λ^k λ^l u(n-k-l)    (16)

      In matrix notation, we represent this as

      U'(n) = Λ(K) U(n) Λ(L)    (17)

      where Λ(m) is an m×m diagonal matrix with

      [Λ(m)]_{j,j} = λ^{j-1},  j = 1, 2, …, m    (18)

      Then (12) becomes

      p̂'(n) = U'*(n)(U'(n)U'*(n))^{-1} e(n)    (19)

      The newly generated projection matrix in (17) is time dependent; the latest data are more significant in the pseudo-inverse matrix by which the error vector is projected. The variable step size affine projection algorithm with forgetting factor (VS-APA-FF) is

      w(n+1) = w(n) + µ(n) U*(n)[U(n)U*(n)]^{-1} e(n)
      µ(n) = µ_max ‖p̂'(n)‖² / ( ‖p̂'(n)‖² + C )
      p̂'(n) = α p̂'(n-1) + (1-α) U'*(n)(U'(n)U'*(n))^{-1} e(n),  0 < α < 1    (20)

      Note that U(n) is replaced by U'(n) only during the error evaluation phase (19), not during the weight updating phase, because instability has been observed in some simulations when replacing U(n) by U'(n) in both. This phenomenon is most probably due to the ill-conditioning of the input matrix U(n) caused by the forgetting process.

      A special case of this algorithm is the variable step size NLMS with forgetting factor (VS-NLMS-FF), obtained by setting K = 1. For this case, the input matrix U(n) is a row vector and the forgetting factor processing is implemented only in the row direction:

      U'(n) = U(n) Λ(L)    (21)

    4. Regularization of the ill-conditioned projection matrix

      In (19) of the previously proposed algorithm, (U'(n)U'*(n)) is potentially ill-conditioned, with small singular values. Using the singular value decomposition (SVD), U' can be decomposed as

      U' = R Σ V    (22)

      where R and V are K×K and L×L unitary matrices, respectively, and Σ is a K×L matrix with nonnegative diagonal elements of singular values σ_n. The ill-conditioning of U is characterized by its condition number,

      cond U = σ_max/σ_min = σ_1/σ_K    (23)

      From (17), the SVD of the weighted input matrix U' is

      U' = Λ(K) U Λ(L) = Λ(K)[R Σ V]Λ(L) = R (Λ(K) Σ Λ(L)) V = R Σ' V    (24)

      where Σ' is a K×L matrix with all zero entries except [Σ']_{j,j} = λ^{2(j-1)} σ_j, j = 1, 2, …, K. The condition number of the weighted input matrix U' is

      cond U' = σ_1 / [λ^{2(K-1)} σ_K] = λ^{2(1-K)} cond U

      which illustrates the increase of the condition number due to a decrease in λ and an increase in K. Because of this ill-conditioning, the estimate p̂' may not be a true evaluation of the error signal. Even if the error signal is stable, the projected p̂' could be unstable. Thus the VS-APA and VS-APA-FF algorithms adopt a smoothing function, in the form of (13), to alleviate this problem at the cost of a loss of error signal fidelity, which sacrifices convergence speed and/or misadjustment.

      To address this problem we use the Tikhonov regularization approach, under which (19) becomes

      p̂'(n) = U'*(n)(U'(n)U'*(n) + δ² I)^{-1} e(n)    (25)

      where I is the identity matrix and δ is a hyperparameter that controls the amount of regularization. The modified algorithm becomes

      w(n+1) = w(n) + µ(n) U*(n)[U(n)U*(n)]^{-1} e(n)
      µ(n) = µ_max ‖p̂'(n)‖² / ( ‖p̂'(n)‖² + C )    (26)

      Note that the smoothing function is no longer needed, since the regularization process accomplishes this function.
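To make the VSS-APA recursion of (13)-(14) concrete, the following Python/NumPy sketch implements one iteration; it is an illustrative reconstruction under our own naming, not the authors' simulation code, and the small diagonal loading term eps is our addition for numerical safety when U(n)U*(n) is near-singular:

```python
import numpy as np

def vss_apa_step(w, U, d, p_hat, mu_max, C, alpha, eps=1e-8):
    """One iteration of the variable step size APA.

    w      -- current weight estimate w(n) (length M)
    U      -- P x M matrix of the last P regressor rows, U(n)
    d      -- length-P vector of desired samples, d(n)
    p_hat  -- smoothed estimate of the projected error vector p(n)
    """
    e = d - U @ w                                   # error vector, eq. (4)
    G = np.linalg.inv(U @ U.conj().T + eps * np.eye(len(d)))  # (U U*)^{-1}, loaded
    proj = U.conj().T @ G @ e                       # U*(n)(U(n)U*(n))^{-1} e(n)
    p_hat = alpha * p_hat + (1 - alpha) * proj      # time averaging, eq. (13)
    norm2 = np.linalg.norm(p_hat) ** 2
    mu = mu_max * norm2 / (norm2 + C)               # variable step size, eq. (14)
    w = w + mu * proj                               # APA update, eq. (2)
    return w, p_hat, e
```

Iterating this step on data generated by the model (1) drives w(n) toward w_o, with the step size shrinking automatically as the estimate converges.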

  5. SIMULATION RESULTS

    We illustrate the performance of the affine projection algorithm by carrying out computer simulations for various step sizes and projection orders. A QPSK generator provides the test signal, and additive white Gaussian noise (AWGN) is used as the noise signal. The filter length is chosen as 32 taps, and a covariance-matrix offset of 1 is selected.
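The transmission chain of Fig. 1 used in these simulations can be sketched as follows (an illustrative reconstruction, not the authors' code; the channel taps and the helper name are assumptions for demonstration):

```python
import numpy as np

def simulate_received(K, h_channel, snr_db, rng):
    """Generate K QPSK symbols x_k = ±1 ± j1, pass them through an FIR
    channel (the ISI source) and add white Gaussian noise at the given SNR."""
    x = rng.choice([-1.0, 1.0], K) + 1j * rng.choice([-1.0, 1.0], K)
    s = np.convolve(x, h_channel)[:K]                        # channel output with ISI
    sigma2 = np.mean(np.abs(s) ** 2) / 10 ** (snr_db / 10)   # noise variance sigma_n^2
    n = np.sqrt(sigma2 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
    return x, s + n                                          # transmitted x_k, received r_k
```

Plotting the received samples r_k before and after equalization produces scatter plots of the kind shown in Figs. 3-5.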

    Fig. 3 Scatter plot for µ = 0.01 and P = 2

    Fig. 4 Scatter plot for µ = 0.05 and P = 2

    Fig. 5 Scatter plot for µ = 0.05 and P = 10

  6. CONCLUSION

We see that as the step size increases, the convergence speed increases, but the steady-state error also increases. Likewise, when the projection order increases, the convergence speed increases, but the steady-state error grows as well. The symbols are widely scattered when the signal is transmitted through the channel, owing to the presence of noise and ISI; after passing through the equalizer, the scattering of the symbols is reduced. The variable step size methods for the affine projection algorithm (VSS-APA) improve the SNR and reduce the computational requirement.

ACKNOWLEDGMENT

The author would like to express his sincere gratitude to Associate Professor Shri Y. K. Mishra and Assistant Professor Dr. R. K. Singh of Kamla Nehru Institute of Technology, Sultanpur, India, for valuable discussions.

REFERENCES

  1. B. Farhang-Boroujeny, Adaptive Filters: Theory and Applications, John Wiley & Sons, New York, 1998.

  2. S. Haykin, Adaptive Filter Theory, Pearson Education, Delhi, 4th edition, 2002.

  3. J. Benesty, H. Rey, L. Rey Vega, and S. Tressens, "A nonparametric VSS NLMS algorithm," IEEE Signal Processing Letters, Vol. 13, No. 10, pp. 581-584, October 2006.

  4. J. W. Yoo and P. Park, "An affine projection algorithm with variable projection order using the MSE criterion," Proceedings of the International MultiConference of Engineers and Computer Scientists 2012, Vol. II, IMECS 2012, March 14-16, 2012, Hong Kong.

  5. A. Gonzalez, M. Ferrer, F. Albu, and M. de Diego, "Affine projection algorithms: evolution to smart and fast algorithms and applications," 20th European Signal Processing Conference (EUSIPCO 2012), August 27-31, 2012, Romania.

  6. J. G. Proakis and D. G. Manolakis, Digital Signal Processing, Pearson Education of India, New Delhi, 4th edition, 2007.

  7. R. W. Harris, D. M. Chabries and F. A. Bishop, "A variable step (VS) adaptive filter algorithm," IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-34, No. 2, pp. 309-316, April 1986.

  8. R. H. Kwong and E. W. Johnston, "A variable step size LMS algorithm," IEEE Transactions on Signal Processing, Vol. 40, No. 7, pp. 1633-1642, July 1992.

  9. P. S. R. Diniz and L. W. P. Biscainho, "Optimal variable step size for the LMS/Newton algorithm with application to subband adaptive filtering," IEEE Transactions on Signal Processing, Vol. 40, No. 11, pp. 2825-2829, November 1992.

  10. V. J. Mathews and Z. Xie, "A stochastic gradient adaptive filter with gradient adaptive step size," IEEE Transactions on Signal Processing, Vol. 41, No. 6, pp. 2075-2087, June 1993.

  11. J. B. Evans, P. Xue and B. Liu, "Analysis and implementation of variable step size adaptive algorithms," IEEE Transactions on Signal Processing, Vol. 41, No. 8, pp. 2517-2535, August 1993.

  12. D. T. M. Slock, "On the convergence behavior of the LMS and the normalized LMS algorithms," IEEE Transactions on Signal Processing, Vol. 41, No. 9, pp. 2811-2825, September 1993.

  13. B. Farhang-Boroujeny, "Variable-step-size LMS algorithm: new developments and experiments," IEE Proceedings - Vision, Image and Signal Processing, Vol. 141, No. 5, pp. 311-317, October 1994.

  14. P. S. R. Diniz, M. L. R. de Campos and A. Antoniou, "Analysis of LMS-Newton adaptive filtering algorithms with variable convergence factor," IEEE Transactions on Signal Processing, Vol. 43, No. 3, pp. 617-627, March 1995.

  15. H. J. Kushner and J. Yang, "Analysis of adaptive step size SA algorithms for parameter tracking," IEEE Transactions on Automatic Control, Vol. 40, No. 8, pp. 1403-1410, August 1995.

  16. T. Aboulnasr and K. Mayyas, "A robust variable step size LMS-type algorithm: analysis and simulations," IEEE Transactions on Signal Processing, Vol. 45, No. 3, pp. 631-639, March 1997.

  17. L. Guo, L. Ljung, and G.-J. Wang, "Necessary and sufficient conditions for stability of LMS," IEEE Transactions on Automatic Control, Vol. 42, No. 6, pp. 761-770, June 1997.

  18. M. Tanaka, S. Makino and J. Kojima, "A block exact fast affine projection algorithm," IEEE Transactions on Speech and Audio Processing, Vol. 7, No. 1, pp. 79-86, January 1999.

  19. S. G. Sankaran and A. A. (Louis) Beex, "Convergence behavior of affine projection algorithms," IEEE Transactions on Signal Processing, Vol. 48, No. 4, pp. 1086-1096, April 2000.

  20. W. P. Ang and B. Farhang-Boroujeny, "A new class of gradient adaptive step-size LMS algorithms," IEEE Transactions on Signal Processing, Vol. 49, No. 4, pp. 805-810, April 2001.

  21. R. C. Bilcu, P. Kuosmanen, and K. Egiazarian, "A transform domain LMS adaptive filter with variable step-size," IEEE Signal Processing Letters, Vol. 9, No. 2, pp. 51-53, February 2002.

  22. H.-C. Shin, A. H. Sayed, and W.-J. Song, "Variable step-size NLMS and affine projection algorithms," IEEE Signal Processing Letters, Vol. 11, No. 2, pp. 132-135, February 2004.

  23. T. Dai, B. Shahrrava, and X. Chen, "A variable step-size affine projection algorithm with a weighted projection matrix," Proceedings of the IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 320-323, May 2005.

  24. S. Werner, J. A. Apolinário Jr., M. L. R. de Campos, and P. S. R. Diniz, "Low-complexity constrained affine-projection algorithms," IEEE Transactions on Signal Processing, Vol. 53, No. 12, pp. 4545-4555, December 2005.
