Low Voltage-Power-Area FGMOS Neural Classifier Circuit for VLSI Analog BIST

DOI : 10.17577/IJERTV1IS3059


Ganesh Lakshmana Kumar M*,
Dept. of ECE, M.A.N.I.T, Bhopal, India.

Kavita Khare,
Dept. of ECE, M.A.N.I.T, Bhopal, India.

Priyanka Sharma,
Dept. of ECE, M.A.N.I.T, Bhopal, India.

          Abstract

This paper presents a novel neural classifier using FGMOS (floating-gate MOSFET) devices. The basic reason for using FGMOS in the neural classifier instead of a classical MOSFET-based neural classifier is the significant reduction in area and power. An additional advantage of the FGMOS-based neural classifier is its simpler circuitry compared to the classical MOSFET-based design. A 51.037% reduction in area is achieved in the FGMOS-based neural classifier (in 0.12 µm technology). Along with the neural classifier, a reconfigurable network of 100 synapses and 10 neurons is also implemented.

          Keywords- FGMOS, Synapse, Neuron, Reconfigurable Network, Floating Gate.

          1. Introduction

The use of the neural classifier in the BIST architecture is illustrated in Fig. 1. The neural classifier compares the response of the circuit under test to the applied test stimulus and classifies it as a valid or invalid code-word, pointing to functional or faulty operation respectively. The focus of this paper is the neural classifier, which is a generic BIST component independent of the circuit under test.

            Fig. 1. BIST architecture

Fig. 2. Neuron and synapse models [1]

A synapse can be considered as a multiplier of an input signal value by the stored weight value. A neuron sums the output values of the connected synapses and passes the resulting sum through a nonlinear activation function, as shown in Fig. 2.
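For clarity, a minimal behavioral sketch of the synapse and neuron model of Fig. 2 is given below (Python, with an assumed tanh activation); it is a mathematical illustration only, not a description of the circuits that follow.

import math

def synapse(x, w):
    """Synapse output: input value multiplied by the stored weight."""
    return w * x

def neuron(synapse_outputs):
    """Neuron output: sum of connected synapse outputs through a tanh activation (assumed)."""
    return math.tanh(sum(synapse_outputs))

# Example: one neuron fed by three synapses.
inputs  = [0.5, -1.0, 0.2]
weights = [0.8,  0.3, -1.5]
print(neuron(synapse(x, w) for x, w in zip(inputs, weights)))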

2. MOSFET Based Synapse Network Design

The basic function of a synapse is multiplication. Linear multiplication, as dictated by the mathematical model, is area expensive in analog ICs.


Fig. 3. (a) Current sources control circuit. (b) Synapse circuit schematic

The synapse circuit chosen for this design is a simple multiplying DAC, which represents a differential pair with programmable tail current (Fig. 3(b)) and current ratio control shown in Fig. 3(a). A differential input voltage produces a differential output current which is collected on the summing nodes common to all synapses connected to one neuron.
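A behavioral sketch of this differential-pair synapse is given below (Python), assuming subthreshold operation with a tanh large-signal characteristic; the paper does not state the operating region, so the slope factor N and thermal voltage UT are illustrative assumptions, as are the example bias values.

import math

UT = 0.026   # thermal voltage in volts (assumed room temperature)
N  = 1.5     # subthreshold slope factor (assumed)

def synapse_diff_current(v_diff, i_tail):
    """Differential output current I+ - I- of a subthreshold differential pair;
    the programmable tail current sets the weight magnitude."""
    return i_tail * math.tanh(v_diff / (2.0 * N * UT))

# Currents from all synapses tied to one neuron sum on the shared nodes (KCL).
v_inputs = [0.02, -0.05, 0.01]        # differential input voltages (V), assumed
i_tails  = [100e-9, 50e-9, 200e-9]    # programmed tail currents (A), assumed
print(sum(synapse_diff_current(v, i) for v, i in zip(v_inputs, i_tails)))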

3. Floating Gate Based Synapse and Weight Storage Circuits

3.1 FGMOS Weight Update Method

Fig. 4 shows the dynamic memory, the working idea of which is taken from [2], [3], and Table I gives the behaviour of the weight update circuit. There are two modes of operation in the FGMOS update method:

            1. Hold Mode

            2. Updating Mode

(i) Charging Mode (Fig. 5(b))
(ii) Discharging Mode (Fig. 5(a))

Fig. 4. Weight updating circuit

Table I. Modes of operation

S.No | B1 | B2 | Mode
1    | 1  | 1  | Hold
2    | 0  | 1  | Charging
3    | 1  | 0  | Discharging

Fig. 5. (a) Equivalent discharging circuit. (b) Equivalent charging circuit

Bit2 and Bit1 are both logic signals that are either at VDD or ground. Vh is either at VDD or about one |VTP| below VDD. Vh_bar is either at ground or about one VTN above ground. Only one of Vh and Vh_bar is active at a time.

In Hold mode, Bit1 and Bit2 are both at VDD, so both transistors are in the off state. In Updating mode, Charging (Fig. 5(b)) sets Bit2 to VDD and Bit1 to ground and activates Vh, while Discharging (Fig. 5(a)) sets Bit1 to VDD and Bit2 to ground and activates Vh_bar, in accordance with Table I. If the current is small, and the logic signals are pulsed for a short time, very small charge packets can be added to or removed from Chold. Further, if the current is proportional to the change in the network error, and the logic pulse time is proportional to the learning rate, this circuit allows a very natural implementation of the weight update.
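A minimal numerical sketch of this charge-packet update is given below (Python); the current I_UPDATE and hold capacitor C_HOLD are assumed values used only for illustration.

C_HOLD   = 1e-12      # hold capacitor Chold (F), assumed value
I_UPDATE = 10e-9      # charging/discharging current (A), assumed value

def weight_update(v_stored, t_pulse, mode):
    """Return the stored control-gate voltage after one pulse:
    delta_V = I * t_pulse / Chold, added for charging, removed for discharging."""
    dv = I_UPDATE * t_pulse / C_HOLD
    if mode == "charge":
        return v_stored + dv
    if mode == "discharge":
        return v_stored - dv
    return v_stored  # hold mode leaves the stored voltage unchanged

# A pulse time proportional to the learning rate gives a proportional weight step.
print(weight_update(1.0, t_pulse=1e-6, mode="charge"))   # 1 us pulse -> +10 mV step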

              3.2. Synapse Based On FGMOS

Basically, the working idea is taken from reference [4], and the low-power and low-voltage FGMOS characteristics are described in reference [5]. An FGMOS-based schematic of the synapse circuit with nonvolatile weight storage, which is taken from [1] as a future expansion, is shown in Fig. 6. It is a nonlinear four-quadrant multiplier with floating-gate device current sources, as in the electrically trainable artificial neural network (ETANN) chip [6]. Qualitatively, the circuit computes the product of the differential input voltage and the weight, which is proportional to the difference of the two floating-gate device currents.

              In a network, the I+ and I- nodes of many synapse circuits are tied together to perform the summing function.


Fig. 6. FGMOS-based schematic of the synapse circuit with nonvolatile weight storage

The FGMOS-based synapse takes its inputs from the neurons in the previous layer. The weight current changes with FG2, while FG1 supplies a constant current. Using a balanced weight would make the circuit behave more like an ideal multiplier, but it would complicate the weight increment circuit and increase the area of the synapse. The current in a floating-gate device can be altered either by varying the control-gate voltage or by varying the charge on the floating gate. Changing the charge on the floating gate requires high-voltage pulses and may take hundreds of µs. The dynamic memory is therefore used to store the control-gate voltage during learning. Dynamic memory allows fast alterability but requires periodic refresh cycles. After learning has converged, long-term storage is provided by storing the weight as charge on the floating gate.

A more quantitative analysis of the synapse circuit follows. It can be shown that the differential output current of the synapse circuit is

(1)

where Vin is V+ - V-, K = µoCoxW/L and IFi is the current of floating-gate device i. For larger input voltages, the output current saturates to the difference of the two floating-gate device currents times the sign of the input voltage. The drain current of a floating-gate device is

ID = (K/2)(Vfg,s - VT)^2    (2)

where Vfg,s is the floating-gate to source potential of the floating-gate device and VT is the threshold voltage. By charge conservation, the floating-gate potential is given by

Vfg = (Cfg,cg Vcg + Cfg,d Vd + Cfg,s Vs + Cfg,b Vb + Qf) / CT    (3)

where Cfg,cg is the floating-gate to control-gate capacitance, Cfg,d is the floating-gate to drain capacitance, Qf is the charge on the floating gate, Cfg,b is the floating-gate to substrate capacitance, Cfg,s is the floating-gate to source capacitance, g is the gate coupling ratio, d is the drain coupling ratio, and CT is the total capacitance seen by the floating gate.

Once learning is complete, the current of FG2 is stored temporarily in a sample-and-hold circuit in the periphery. High-voltage pulses are applied to FG2 until its current is equal to the target current stored in the sample-and-hold. In practice, it is difficult to achieve very high precision when programming floating-gate devices, although, in theory, one-electron resolution is possible. High-precision programming requires very short high-voltage pulses and a very precise sample-and-hold to store the target current.

The complete synapse circuit is shown in Fig. 7. During recall and learn modes, the Vs node is held at ground. Weights are perturbed by pulsing Vpert. The change in the weight is

(5)

where gmF is the transconductance of the floating-gate transistor. Cpert is a very small capacitor, about 15 fF, so that very small perturbations can be applied. As long as Vpert returns to its original level after the perturbation, the stored weight is unaffected.

Fig. 7. Complete synapse circuit.
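As a rough behavioral model of the saturating four-quadrant multiplier described in this subsection (not the paper's Eq. (1)), the Python sketch below uses an assumed tanh characteristic with an assumed linear-range parameter V_LIN; only the proportionality of the weight to IF1 - IF2 and the large-signal saturation to that difference times the sign of the input follow from the text above.

import math

V_LIN = 0.1  # assumed input-voltage scale of the linear region (V)

def fgmos_synapse_current(v_in, i_f1, i_f2):
    """Differential output current: ~ (IF1 - IF2) * Vin / V_LIN for small Vin,
    saturating to (IF1 - IF2) * sign(Vin) for large Vin."""
    return (i_f1 - i_f2) * math.tanh(v_in / V_LIN)

print(fgmos_synapse_current(0.5, 2e-6, 1e-6))    # large positive input: ~ +1 uA
print(fgmos_synapse_current(-0.5, 2e-6, 1e-6))   # large negative input: ~ -1 uA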

4. Reconfigurable Architecture

The architecture is based on a single cascadable basic module chip which contains synapses, a neuron and a MUX, shown in Fig. 8. As for the MLP (multilayer perceptron), the synapse inputs and outputs are mapped, respectively, onto voltages and currents. Consequently, the neuron inputs and outputs are mapped, respectively, onto currents and voltages.

Fig. 8 Block diagram of a single synapse (S) and neuron (N) of the reconfigurable network

In the above figure, the select (S) pin of the MUX must be high to select the input voltages. The output current of the synapse is fed to the neuron, and the neuron output voltage is fed back to the MUX. In total, 100 synapses and 10 neurons are reconfigurable, as shown in Fig. 10.
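To make the voltage/current mapping concrete, the following Python sketch models one pass through such a module. The functions synapse_out and neuron_out, the 1e5 V/A current-to-voltage gain, and the example weights are illustrative assumptions, not values from the paper.

import math

def synapse_out(v_in, weight):
    # Synapse: voltage in -> current out (weight expressed in A/V, an assumed choice).
    return weight * v_in

def neuron_out(i_in):
    # Neuron: summed current in -> voltage out through an assumed tanh activation.
    return math.tanh(1e5 * i_in)   # 1e5 V/A: assumed current-to-voltage gain

def layer(v_inputs, weights):
    # Currents from all synapses tied to one neuron sum on its input node (KCL).
    return neuron_out(sum(synapse_out(v, w) for v, w in zip(v_inputs, weights)))

def module(v_external, v_feedback, select_external, weights):
    # The MUX selects either the external input voltages or the fed-back
    # neuron output voltages before they drive the synapses.
    v_in = v_external if select_external else v_feedback
    return layer(v_in, weights)

# First layer driven by external inputs, second layer by the fed-back output.
h = module([0.1, -0.2], None, True, [1e-5, 2e-5])
y = module(None, [h, h], False, [3e-5, -1e-5])
print(h, y)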

          5. Results

Fig. 9 shows the area gain of the FGMOS-based synapse when compared with the MOSFET-based synapse circuit. With FGMOS, fewer current branches are required, and therefore the power consumption also decreases. The complete improvement is shown in Table II. Finally, following reference [1], the 100-synapse and 10-neuron network is shown in Fig. 10; it is a generic BIST component independent of the circuit under test (CUT).
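As a quick check of the figures reported in Table II, the short sketch below recomputes the percentage savings from the tabulated values; the formula (MOSFET - FGMOS) / MOSFET x 100 is an assumed reading of the "% Saving" column.

area_mosfet, area_fgmos = 819.0, 401.0        # um^2, from Table II
power_mosfet, power_fgmos = 2.036, 0.5102     # mW, from Table II

print((area_mosfet - area_fgmos) / area_mosfet * 100)     # ~51.04 %
print((power_mosfet - power_fgmos) / power_mosfet * 100)  # ~74.94 %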

Fig. 9 layout dimensions: (a) MOSFET-based synapse, 54.6 µm x 15 µm; (b) FGMOS-based synapse, 40.1 µm x 10 µm.

Fig. 10. 100-synapse, 10-neuron based reconfigurable network

          6. Conclusions

In this paper, the circuit complexity of the neural classifier has been significantly reduced by using FGMOS. With FGMOS, fewer current branches are required, and therefore the power consumption also decreases. FGMOS also provides a reduction in area. This neural classifier has additional benefits related to the frequency response, since the number of internal nodes is smaller. The devices can be biased in the most appropriate operating region for a wider range of input signals by shifting the effective threshold voltages of the FGMOS transistors accordingly, hence also facilitating a larger operating bandwidth. All these benefits can be achieved without the need for extra level shifters. Finally, we can conclude that FGMOS requires less area when compared to the MOSFET-based neural classifier.

Fig. 9. (a) MOSFET-based synapse layout. (b) FGMOS-based synapse layout.

Table II. Comparison of MOSFET- and FGMOS-based synapse circuits

S.No | Parameter            | MOSFET           | FGMOS                    | % Saving using FGMOS
1    | Area                 | 819 µm2          | 401 µm2                  | 51.037 %
2    | No. of elements used | 23 NMOS, 4 PMOS  | 5 NMOS, 1 PMOS, 2 FGMOS  | -
3    | Power consumption    | 2.036 mW         | 0.5102 mW                | 74.941 %

7. REFERENCES

  1. D. Maliuk, H.-G. Stratigopoulos, and Y. Makris, "An Analog VLSI Multilayer Perceptron and its Application Towards Built-In Self-Test in Analog Circuits," 2010 IEEE 16th International On-Line Testing Symposium, pp. 71-76, 2010.

  2. C. R. Schneider and H. C. Card, "Analog CMOS deterministic Boltzmann circuits," IEEE J. Solid-State Circuits, vol. 28, pp. 907-914, Aug. 1993.

  3. G. Cauwenberghs, "A learning analog neural network chip with continuous-time recurrent dynamics," in Advances in Neural Information Processing Systems 6, J. D. Cowan, G. Tesauro, and J. Alspector, Eds. San Mateo, CA: Morgan Kaufmann, 1994, pp. 858-865.

  4. A. J. Montalvo, R. S. Gyurcsik, and J. J. Paulos, "Toward a General-Purpose Analog VLSI Neural Network with On-Chip Learning," IEEE Transactions on Neural Networks, vol. 8, no. 2, March 1997.

  5. E. Rodriguez-Villegas, Low Power and Low Voltage Circuit Design with the FGMOS Transistor, IET Circuits, Devices and Systems Series 20, first published 2006.

  6. M. Holler, S. Tam, H. Castro, and R. Benson, "An electrically trainable artificial neural network (ETANN) with 10240 floating gate synapses," in Proc. Int. Joint Conf. Neural Networks, Washington, D.C., vol. I, June 1989, pp. 191-196.
