Low Density Parity Check used in VLSI Implementation of FPGAs for Soft-Error Resilient Circuits Providing Optimal Circuits

DOI : 10.17577/IJERTV3IS20237




V. J. Beulah Sherin Ponmalar1, M. Ruth Jenila2

PG student, Assistant Professor

Department of Electronics and Communication Engineering, DMI College of Engineering, Chennai 602013, India

Abstract: In this paper, an error-correcting code, namely the low-density parity-check (LDPC) code, is proposed and implemented in a VLSI architecture (FPGA). A digital communication system suffers from errors due to noise, distortion and interference during data transmission, and commonly uses error-correcting algorithms to recover the transmitted data. Third-generation wireless communication systems use convolutional codes for the transmission of voice and control signals. In the existing system, turbo codes decoded with the belief propagation algorithm play a vital role in VLSI implementation; they achieve a low bit error rate at a low signal-to-noise ratio, but suffer from a high decoding error rate and high decoding complexity. Tanner graphs are used to construct longer codes from smaller ones and to organize the row and column operations of decoding. The LDPC code uses the message passing algorithm, a superior iterative decoding technique with low decoding complexity and a high decoding rate, which lends itself to parallel processing for updating the large number of messages transferred between check nodes and bit nodes. The design follows the device-scaling trend described by Moore's law. LDPC codes have attracted attention as a candidate code for fourth-generation communication systems and provide high security. The simulation results indicate that the proposed scheme can reduce power dissipation by overcoming soft errors, i.e., the incorrect switching of a memory cell, and provides an optimal circuit that enhances VLSI technologies.

Keywords: Low Density Parity Check, Soft Error, Convolutional Codes, Message Passing Algorithm, Power Dissipation.

I. INTRODUCTION

The objective of this work is to estimate the power dissipation of circuits in the nanometer range and to reduce it by using the low-density parity-check code, so that the circuit becomes optimal. Flexible and reconfigurable architectures built around LDPC have gained wide popularity in the communication field owing to their significance for security. Node-to-node communication is carried out through the message passing algorithm, and the low-density parity-check code plays a vital role in the resilience of FPGAs. The number of non-zero bits increases with the code length, so the required memory size becomes large. The coding is organized through a Tanner graph. Soft errors due to missed bits are detected using built-in self-test and overcome by error correction with the LDPC code, which helps in reducing the nanometer range.

With the growing trend towards portable computing and wireless communication, power dissipation has become one of the most critical factors in the continued development of microelectronics technology. Power dissipation grows as circuit performance improves and more functions are integrated into each chip. As a result, the power per unit area increases, leading to large leakage power; digital bits are then missed, causing soft errors, i.e., wrong signals or data produced when cosmic-ray or alpha particles incorrectly switch a memory cell. This is overcome by the low-density parity-check code in order to reduce the nanometer range of the circuits.

In 1971 Intel shipped its first microprocessor at a feature size of roughly 10 µm; today there are chips fabricated at 22 nm, which provide reliable and optimal circuits. Through this work, the feature size is expected to be reduced further by increasing the number of transistors, thus reducing the chip size and providing a reliable circuit through the reduction in power dissipation.

II. DECODING ALGORITHM

Turbo and LDPC decoding algorithms share strong resemblances: they are iterative, work on graph-based representations, are routinely implemented in the logarithmic domain, process data expressed as log-likelihood ratios (LLRs), and require a high level of both processing and storage parallelism. Both algorithms receive intrinsic information from the channel and produce extrinsic information that is exchanged across iterations to obtain the a priori information of the uncoded bits (for binary codes) or symbols (for non-binary codes). Moreover, their arithmetic functions are so similar that joint or derived algorithms for both LDPC and turbo decoding exist. The decoding of LDPC codes stems from the Tanner graph representation of the code, in which two sets of nodes are identified: variable nodes (VNs) and check nodes (CNs). VNs are associated with the bits of the codeword, whereas CNs correspond to the parity-check constraints. The most common algorithm for decoding LDPC codes is the belief propagation (BP) algorithm. There are two main scheduling schemes for BP: two-phase scheduling and layered scheduling. The latter nearly doubles the convergence speed compared to two-phase scheduling. In a layered decoder, the parity-check constraints are grouped in layers, each of which is associated with a component code. The layers are then decoded in sequence by propagating extrinsic information from one layer to the next, and this process is iterated until the desired level of reliability is reached.
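To make the shared arithmetic concrete, the sketch below shows a check-node update under the min-sum approximation, a commonly used simplification of the BP check-node rule; the function name and the example LLR values are illustrative and are not taken from this work.

```python
# Minimal sketch of a min-sum check-node update (an approximation of the
# belief-propagation check-node rule). Assumes LLR inputs; illustrative only.

def check_node_update(incoming_llrs):
    """For each edge, return sign(product of the other signs) * min(|other LLRs|)."""
    outgoing = []
    for i in range(len(incoming_llrs)):
        others = incoming_llrs[:i] + incoming_llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        outgoing.append(sign * min(abs(v) for v in others))
    return outgoing

# Example: extrinsic messages sent back to three connected variable nodes.
print(check_node_update([2.0, -0.5, 1.2]))  # [-0.5, 1.2, -0.5]
```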

Fig. 1: Parallel decoding architecture

III. POWER DISSIPATION DETECTION

Power dissipation grows as the scale of integration improves: more transistors, faster and smaller than their predecessors, are packed into each chip. This leads to a steady growth in operating frequency and processing capacity per chip, resulting in increased power dissipation. Power dissipation is the rate at which energy is taken from the source and converted into heat. This heat has to be dissipated from the chip to avoid an increase in chip temperature that could cause temporary or permanent failure.
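As a rough illustration of where this heat comes from (not a result of this work), the dynamic power of a CMOS circuit is commonly estimated as P = alpha * C * V^2 * f; the values below are hypothetical and only indicate orders of magnitude.

```python
# Illustrative dynamic-power estimate for a CMOS chip: P = alpha * C * V^2 * f.
# All values are hypothetical, chosen only to show the magnitudes involved.

alpha = 0.1   # average switching activity factor
C = 1e-9      # total switched capacitance in farads (1 nF)
V = 1.0       # supply voltage in volts
f = 1e9       # clock frequency in hertz (1 GHz)

P_dynamic = alpha * C * V**2 * f
print(f"Estimated dynamic power: {P_dynamic:.2f} W")  # 0.10 W
```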

Two techniques are used to detect and correct excessive power dissipation in circuits: analysis, which is concerned with the accurate estimation of power and leakage, and optimization, which is the process of generating the best design without violating the design specifications. Correcting power dissipation reduces its impact on circuit delay, improves the performance and throughput of the chip, and reduces its area, which lowers manufacturing cost. Other aspects of chip design, such as design cycle time, testability, quality and reusability, are also affected. Power efficiency therefore cannot be achieved without trading off one or more of these factors, and the task of the design engineer is to carefully weigh each design choice within the specification constraints and select the best implementation.

IV. LOW DENSITY PARITY CHECK

In information theory, a low-density parity-check (LDPC) code is a linear error-correcting code, a method of transmitting a message over a noisy transmission channel, and is constructed using a sparse bipartite graph. LDPC codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close (or even arbitrarily close, on the binary erasure channel) to the theoretical maximum (the Shannon limit) for a symmetric memoryless channel. They are based on constrained random code ensembles and iterative decoding algorithms. The noise threshold defines an upper bound on the channel noise up to which the probability of lost information can be made as small as desired. Using iterative belief propagation techniques, LDPC codes can be decoded in time linear in their block length. LDPC codes are finding increasing use in applications requiring reliable and highly efficient information transfer over bandwidth- or return-channel-constrained links in the presence of corrupting noise. Although the implementation of LDPC codes has lagged behind that of other codes, notably turbo codes, the absence of encumbering software patents has made LDPC attractive. In order to achieve the power and throughput targets of current applications (e.g., > 1 Mbps in 3G wireless systems, > 1 Gbps in magnetic recording systems), fully parallel and pipelined iterative decoder architectures are needed. Compared to turbo codes, LDPC codes enjoy a significant advantage in terms of computational complexity and are known to have a large amount of inherent parallelism. However, the randomness of LDPC codes results in stringent memory requirements that amount to an order-of-magnitude increase in complexity compared to turbo codes. A direct approach to implementing a parallel decoder architecture would be to allocate, for each node or cluster of nodes in the graph defining the LDPC code, a function unit for computing the reliability messages, and to employ an interconnection network to route messages between function units.
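To make the parity-check relation concrete, the following sketch verifies a candidate codeword against a small sparse parity-check matrix H; the 3x6 matrix and the codewords are illustrative examples, not the matrix used in this work.

```python
# Minimal sketch: a codeword c is valid iff every parity check of H is satisfied
# modulo 2. The 3x6 matrix and the codewords below are illustrative only.

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def is_codeword(H, c):
    """Return True if every row of H gives an even parity sum over c."""
    return all(sum(h * b for h, b in zip(row, c)) % 2 == 0 for row in H)

print(is_codeword(H, [1, 0, 1, 1, 1, 0]))  # True: all three checks are even
print(is_codeword(H, [1, 0, 1, 1, 1, 1]))  # False: the last check fails
```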

Fig. 2: Parallel LDPC architecture

    1. Soft Error

In electronics and computing, a soft error is a type of error in which a signal or datum is wrong. Errors may be caused by a defect, usually understood to be either a mistake in design or construction, or a broken component. A soft error is also a signal or datum that is wrong, but it is not assumed to imply such a mistake or breakage: after observing a soft error, there is no implication that the system is any less reliable than before. In a computer's memory system, a soft error changes an instruction in a program or a data value. Soft errors can typically be remedied by cold-booting the computer. A soft error does not damage the system's hardware; the only damage is to the data being processed.

    2. Decoder Graph

Two kinds of units are used in the graph: bit functional units (BFUs) and check functional units (CFUs). Check functional units perform the row operations for check nodes, and bit functional units perform the column operations for bit nodes. A 6×9 matrix is used as the parity-check matrix. Decoding is carried out by the message passing algorithm using parallelism: the row operations are performed as many times as there are parity-check bits, and the column operations are performed as many times as there are code bits. Parallel LDPC decoders are classified into fully parallel and partially parallel decoders.

A fully parallel decoder has a CFU for every check node and a BFU for every bit node, and achieves high throughput. The LDPC decoder keeps an intermediate message for every non-zero bit in the parity-check matrix and exchanges these messages between the rows and columns iteratively.
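The row and column neighbor lists that the CFUs and BFUs operate on can be read directly from the parity-check matrix, as the sketch below illustrates; the small 3x6 matrix is again an illustrative stand-in for the 6x9 matrix used here.

```python
# Minimal sketch: derive, for each check node (row) and each bit node (column),
# its Tanner-graph neighbors from a parity-check matrix H. Illustrative 3x6 H.

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

# Row operations (CFUs) act on these bit-node neighbors of each check node.
check_neighbors = [[j for j, h in enumerate(row) if h] for row in H]

# Column operations (BFUs) act on these check-node neighbors of each bit node.
bit_neighbors = [[i for i, row in enumerate(H) if row[j]] for j in range(len(H[0]))]

print(check_neighbors)  # [[0, 1, 3], [1, 2, 4], [0, 2, 5]]
print(bit_neighbors)    # [[0, 2], [0, 1], [1, 2], [0], [1], [2]]
```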

Fig. 3: Parallel architecture for standard mode

    3. Message Passing Algorithm

It is also called the sum-product or max-product algorithm and is an iterative decoding technique. In message-passing LDPC decoding, a large number of messages have to be updated and transferred between check and variable nodes in each iteration. The messages received from the channel at the variable nodes are passed directly along the edges to the neighboring check nodes. The algorithm then iteratively passes messages, computed as log-domain probabilities, between the two classes of nodes: the variable nodes and the constraint (parity) nodes. The check nodes perform local decoding operations to compute outgoing messages from the incoming messages received from the variable nodes. The messages are exchanged along the edges of the factor graph, and the outgoing (extrinsic) messages are passed in both directions along every edge.

The factor graph decomposes the global code constraint into local constraints, represented by the connections between variable nodes and check nodes. The MPA is optimal (maximum-likelihood decoding) for codes whose factor graph is cycle-free; otherwise it is sub-optimal because of the cycles in the factor graph. The overall decoding complexity is linear in the code length, and the work per iteration is proportional to the total number of edges in the underlying graph. The algorithm is most easily explained over the binary erasure channel.
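Over the binary erasure channel the message passing decoder reduces to a simple peeling procedure: any check node connected to exactly one erased bit resolves that bit, and the process repeats until no erasures remain or no further progress is possible. The sketch below illustrates this behaviour; the matrix, the received word and the use of None to mark erasures are illustrative choices, not this work's implementation.

```python
# Minimal sketch of message passing over the binary erasure channel (BEC):
# repeatedly find a parity check with exactly one erased bit and solve for it.
# H, the received word, and None marking an erasure are illustrative only.

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def decode_bec(H, received):
    bits = list(received)                        # None marks an erased position
    progress = True
    while progress and None in bits:
        progress = False
        for row in H:
            erased = [j for j, h in enumerate(row) if h and bits[j] is None]
            if len(erased) == 1:                 # exactly one unknown: solve it
                known_sum = sum(bits[j] for j, h in enumerate(row)
                                if h and bits[j] is not None)
                bits[erased[0]] = known_sum % 2  # parity must come out even
                progress = True
    return bits

# Erasures in positions 1 and 3 of the valid codeword [1, 0, 1, 1, 1, 0].
print(decode_bec(H, [1, None, 1, None, 1, 0]))   # [1, 0, 1, 1, 1, 0]
```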

Fig. 4: Flow chart

4. Tanner Graph

Fig. 5: Tanner graph

A Tanner graph, named after Michael Tanner, is a bipartite graph used to state the constraints or equations that specify an error-correcting code. In coding theory, Tanner graphs are used to construct longer codes from smaller ones, and both encoders and decoders employ these graphs extensively. Tanner graphs are partitioned into subcode nodes and digit nodes. For linear block codes, the subcode nodes denote the rows of the parity-check matrix H and the digit nodes represent its columns.

      An edge connects a subcode node to a digit node if a nonzero entry exists in the intersection of the corresponding row and column. The advantage of these recursive techniques is that they are computationally tractable. The coding algorithm for Tanner graphs is extremely efficient in practice, although it is not guaranteed to converge except for cycle-free graphs, which are known not to admit asymptotically good codes.

Fig. 6: Soft-error resilient FPGAs using LDPC

The architecture benefits from optimizations performed at three levels of abstraction in system design: LDPC code design, the decoding algorithm, and the decoder architecture. First, the interconnect complexity problem of current decoder implementations is mitigated by designing LDPC codes with embedded structural regularity, which results in a regular and scalable message-transport network with reduced control overhead. Second, the memory overhead of current decoders is reduced by more than 75% by employing a new decoding algorithm for LDPC codes that removes the multiple check-to-bit message update bottleneck of the existing algorithm. A message-passing algorithm is also proposed that reduces the memory overhead of the current algorithm for low- to moderate-throughput decoders.

Moreover, a parallel soft-input soft-output (SISO) message update mechanism is proposed. The iterative decoding process of both codes consists of two main steps: computing independent messages proportional to the a posteriori probability distributions of the code bits, and communicating those messages. The communication mechanism for LDPC codes is defined by a pseudorandom bipartite graph and is internal with respect to message computation (i.e., an internal interleaver), while the computational complexity is very low (on the order of the logarithm of the code length).

V. SIMULATION RESULTS

VI. CONCLUSION AND FUTURE WORK

As a result, power dissipation in the circuits is reduced by overcoming soft errors through the message passing algorithm in node-to-node communication. Overcoming soft errors prevents damage to the circuit data, reduces chip size, interconnection delay and circuit complexity, and increases quality, security and speed. The nanometer range of the circuits can therefore be reduced, providing optimal and reliable circuits and a significant advancement in VLSI technologies.

REFERENCES

1. E. Boutillon, C. Douillard, and G. Montorsi, "Iterative decoding of concatenated convolutional codes: Implementation issues," Proc. IEEE, vol. 95, no. 6, pp. 1201-1227, Jun. 2007.

2. H. Moussa, A. Baghdadi, and M. Jezequel, "Binary de Bruijn interconnection network for a flexible LDPC/turbo decoder," in Proc. IEEE Int. Symp. Circuits and Systems, 2008, pp. 97-100.

3. M. Martina, G. Masera, S. Papaharalabos, P. Mathiopoulos, and F. Gioulekas, "On practical implementation and generalizations of max* operator for turbo and LDPC decoders," IEEE Trans. Instrum. Meas., vol. 61, no. 4, pp. 888-895, Apr. 2012.

4. Quality Supervision and Quarantine, Digital Terrestrial Television Broadcasting Transmission System Frame Structure, Channel Coding and Modulation, GB 20600-2006, Beijing, China: China Standard Press, 2006.

5. IEEE Standard for Local and Metropolitan Area Networks, Part 16: Air Interface for Fixed and Mobile Broadband Wireless Access, IEEE Std 802.16e-2005, 2006.

6. IEEE Standard for Information Technology: Telecommunications and Information Exchange Between Systems, Local and Metropolitan Area Networks, IEEE Std 802.11n-2009, 2009.

7. A. Polydoros, "Algorithmic aspects of radio flexibility," in Proc. IEEE Int. Symp. Personal, Indoor and Mobile Radio Communications, 2008, pp. 1-5.

8. M. Alles, T. Vogt, and N. Wehn, "FlexiChaP: A reconfigurable ASIP for convolutional, turbo, and LDPC code decoding," in Proc. 5th Int. Symp. Turbo Codes and Related Topics, 2008, pp. 84-89.
