Fusion of Fingerprint Recognition and Digital Signature Verification

DOI : 10.17577/IJERTV3IS040182


Dr. Ujwalla Gawande, Rutuja Mowade, Ashmi Nagdeve, Apurva Paturkar, Sandesh Bhaiswar

Computer Technology Department, Y.C.C.E., Nagpur, India

Abstract — Fingerprint recognition is considered one of the most important techniques for achieving high accuracy in user verification, and various methods exist for it. Solid-state fingerprint sensors capture only a reduced contact area of the fingerprint, so multiple impressions of the same finger acquired through such sensors may not provide a large region of overlap, which makes fingerprint matching difficult and hampers the verification system. To deal with this problem we construct a fused image from two or more impressions of the same fingerprint. This fused image reduces storage and improves matching time. In the proposed algorithm, two or more images of the same finger are first aligned using corresponding minutiae points. The iterative closest point (ICP) algorithm then computes a transformation matrix that defines the spatial relationship between the two fingerprint images. Using the fused image improves the performance of the matching system by about 4%. Signature verification is a widely used biometric method for authentication: to verify a person's identity, signature authentication is among the most common approaches. The texture and topological features used for signature recognition include the baseline slant angle, aspect ratio, normalized area, center of gravity of the whole signature image, and the slope of the line joining the centers of gravity of the two halves of the signature image. From a set of original signatures, the mean values and standard deviations of these features are computed. The mean signature acts as the template for verification against a claimed test signature, and the Euclidean distance serves as the measure of similarity between the two. If the distance is less than a threshold value the signature is accepted as original, otherwise it is detected as a forgery.

Keywords- fingerprint recognition, fusion image, iterative closest point (ICP), signature verification, Euclidean distance.

INTRODUCTION

The fingerprint-based verification system provides a high level of uniqueness, and compact solid-state fingerprint sensors can easily be embedded into a wide variety of devices used for user authentication. However, these solid-state sensors can sense only a limited portion of the fingerprint, so the amount of information obtained is also limited. To address this problem of limited information in a single fingerprint template, a mosaicking technique is used. What is the mosaicking technique? Multiple impressions of the same fingerprint are fused together, resulting in a more complete fingerprint template; this fused image is called a composite image. The composite template has the following advantages [1]:

a] Instead of comparing the query image with each of the individual images of the same finger, a composite image reduces the number of comparisons (only one comparison is needed), thereby reducing the probability of a false reject.

b] It reduces the matching time since only one comparison is made.

Signature verification and recognition are used in banking transactions, electronic funds transfer and document analysis. Signature verification can be of two types: offline and online. The online system consists of a pen and an e-pad. The offline system consists of images that are previously stored in a database; after processing, these images are compared and verified. Offline signature verification is more complex than online verification. Several factors affect the offline system, such as the type of pen used for the signature, fancy handwriting styles, and the non-repetitive nature of the variations in a signature.

I. FINGERPRINT RECOGNITION:

While registering a fingerprint image, non-linear plastic distortions may arise due to the non-uniform pressure applied by the subject, and a noisy image may be formed due to dirt on the sensor or bruises on the finger. If two images of the same finger have different amounts of noise or distortion, it is quite difficult to register them. This is addressed by a registration algorithm, which finds a transformation T that relates the two images of the same finger. Suppose X and Y are the two impressions and Rx and Ry are their range images respectively. The intensity values are directly used as range values, i.e. the intensity value of the image at the planar coordinate (x, y) is treated as the range value z at that location. The goal of a registration algorithm is to find T such that the objective function D(Rx, Ry) is minimized:

D(Rx, Ry) = Σ_{p ∈ Rx} || T·p − f(p) || ... (1)

where f : Rx → Ry is the correspondence function, i.e. for every point p ∈ Rx, f(p) ∈ Ry is its corresponding point.

We express the transformation matrix T in homogeneous coordinates. Equation (2) shows the transformation matrix, where α, β and γ are the rotation angles about the x, y and z axes respectively, and tx, ty, tz are the translation components along the three axes:

T = [ R(α, β, γ)   t ; 0 0 0 1 ], with t = (tx, ty, tz)ᵀ ... (2)

In practice f is not known, so the objective function in eq. (1) is replaced by an evaluation function that incorporates the information of a set of corresponding points in Rx and Ry. Consider N pairs of corresponding points (pi, qi), pi ∈ Rx, qi ∈ Ry, i = 1, 2, ..., N.

The evaluation function E(Rx, Ry) is given by:

E(Rx, Ry) = Σ_{i=1}^{N} || T·pi − qi ||² ... (3)

High-level features such as corners and edges are extracted from the two surfaces for selecting the correspondence points, also called control points. In some applications the correspondence points are manually identified by a domain expert. Given the correspondence points, the evaluation function E(Rx, Ry) in eq. (3) can be minimized by searching for the global minimum in the 6-dimensional parameter space using an iterative procedure, though this does not guarantee convergence to the global minimum. To address this, the iterative closest point (ICP) algorithm assumes that an initial approximation T0 is already known; if this approximation is good, the global minimum can be reached quickly and reliably. The ICP algorithm tries to minimize the distances between points in one image and geometric entities in the other [2].

We minimize it using:

E_k(Rx, Ry) = Σ_{i=1}^{N} d_s²(T_k·pi, S_i) ... (4)

where d_s is the distance from a point to a plane and S_i is the tangential plane corresponding to point qi in image Ry. After the initial alignment, the control points are chosen automatically by examining homogeneous regions in the two images, and ICP is used to minimize the function. Since an approximate initial transformation matrix is known, the convergence is faster.
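To make the registration loop concrete, here is a minimal Python sketch of a simplified ICP iteration, assuming an initial alignment (R0, t0) is already available. It uses plain point-to-point closest-point distances instead of the point-to-plane distances to tangential planes described above, and the SVD-based transform estimate, function names and tolerance are illustrative choices, not the authors' implementation.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    Translation comes from the centroids; rotation from the SVD of the
    cross-covariance matrix of the centroid-adjusted points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t

def icp(P, Q, R0, t0, max_iter=50, tol=1e-6):
    """Refine an initial alignment (R0, t0) of point set P onto point set Q."""
    R, t = R0, t0
    prev_err = np.inf
    for _ in range(max_iter):
        P_t = P @ R.T + t                                 # apply current transform
        d = np.linalg.norm(P_t[:, None, :] - Q[None, :, :], axis=2)
        Qc = Q[d.argmin(axis=1)]                          # closest-point pairing
        err = np.sum(np.linalg.norm(P_t - Qc, axis=1) ** 2)
        if abs(prev_err - err) / len(P) < tol:            # convergence test, cf. eq. (4)
            break
        prev_err = err
        R, t = best_rigid_transform(P, Qc)                # re-estimate the transform
    return R, t
```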

  1. FINGERPRINT MOSAICKING:

Fingerprint mosaicking is cast as a 3D surface registration problem, which is solved using the modified ICP algorithm described above.

    What is minutiae point?

    The terminations or the bifurcations formed by the ridges present in fingerprint image are called as minutiae points.

Minutiae point extraction is performed on each individual image, giving a different set of minutiae points for each image. The two sets of minutiae points are compared using an elastic point matching algorithm. A reference minutiae pair is selected such that one minutia is from one image and the other is from the other image, and the number of corresponding minutiae pairs is determined using the remaining points. A pair is chosen as the reference pair if it yields a large number of corresponding pairs.

Let (p1, q1), ..., (pN, qN) be the corresponding minutiae pairs, with

pi = (pxi, pyi, pzi, pθi) and qi = (qxi, qyi, qzi, qθi),

where (x, y) are the spatial coordinates of the minutiae points, z is the intensity of the image at (x, y), θ is the minutiae orientation and T is the transformation matrix.

The initial transformation T0 is calculated using Horn's method [5], which operates on the (x, y, z) values.

The translation parameters tx, ty, tz are computed using the centroids of the point sets (pxi, pyi, pzi) and (qxi, qyi, qzi), and the rotation components are computed using the cross-covariance matrix between the centroid-adjusted pairs of points [2].
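The sketch below illustrates this initial estimate T0 from corresponding minutiae: translation from the centroids and rotation from the cross-covariance matrix of the centroid-adjusted (x, y, z) points. An SVD-based solution stands in here for Horn's quaternion method, and the function name and array layout are assumptions.

```python
import numpy as np

def initial_transform(p_minutiae, q_minutiae):
    """Estimate a 4x4 homogeneous T0 from corresponding minutiae (x, y, z, theta).

    The translation comes from the centroids of the (x, y, z) triples and the
    rotation from the cross-covariance matrix of the centroid-adjusted points."""
    P = np.asarray(p_minutiae, dtype=float)[:, :3]   # drop the orientation column
    Q = np.asarray(q_minutiae, dtype=float)[:, :3]
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    T0 = np.eye(4)
    T0[:3, :3], T0[:3, 3] = R, t
    return T0
```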

  2. EXPRESSING A FINGERPRINT AS A RANGE IMAGE:

The range values are simply the intensity values: the intensity value of the image at the planar coordinate (x, y) is treated as the range value z at that location. The two range images Rx and Ry are obtained from the corresponding intensity images Ix and Iy respectively.

The range images Rx and Ry are then subjected to the iterations of the iterative closest point algorithm. At each iteration k, the transformation Tk is chosen such that it minimizes the value of Ek in eq. (4).

When (| Ek − Ek-1 |) / N < τ, the process is said to have converged [2], where τ is a threshold value close to 0.

    Tsolution i.e. the final transformation matrix is used in the following ways:

    1. To create a composite image by integrating two individual images. The spatial extent of the composite image is generally larger than the individual images. This larger image is then subjected to the pre-processing and then minutiae extraction process.

2. Tsolution can be used for augmenting the minutiae sets from the individual images.

  3. CONSTRUCTION OF A COMPOSITE IMAGE:

Using the final transformation matrix Tsolution, the two intensity images Ip and Iq are integrated to form a new image Ir by computing the new spatial coordinates of every pixel in Ip. The new minutiae set is then extracted from Ir.

    The composite image is then subjected to the pre- processing techniques, minutiae extraction and post processing stage as explained further.

Fig.1 (1a) Initial alignment, (1b) Final alignment, (1c) Minutiae extracted from mosaicked images, (1d) Composite minutiae set obtained after augmenting individual minutiae sets.

    This composite image is then set to match with the fingerprint image which we obtain dynamically through the sensor.

    The fingerprint image obtained dynamically undergoes the following processing:

    Fig.2 overall system for fingerprint recognition

  4. ALGORITHM LEVEL DESIGN FOR MATCHING: The composite image and the image obtained dynamically are subjected to matching using the following approach. Both images go through the stages individually and are later set for minutiae matching. The three-stage approach to implement the minutiae extractor is: 1) Pre-processing, 2) Minutia extraction and 3) Post-processing.

Fig.3 Minutia Extractor

    1. Fingerprint Image Preprocessing

In order to make the further operations effective it is necessary to make the image clearer. Images acquired from sensors or other media are not of perfect quality, so some processing must be applied to make the image clearer and usable for the subsequent matching operations.

      The following are the pre-processing operations applied on an image:

      1. Image Enhancement:

The enhancement process increases the contrast of the image by redistributing its pixel values.

        The enhancement can be carried out by two methods:

        1. Histogram equalization

        2. Fast Fourier transformation

          1. Image enhancement by Histogram equalization:

Histogram equalization expands the pixel value distribution of an image so as to increase the perceptional information [3]. The original histogram of a fingerprint image is of bimodal type (Fig. 4); after histogram equalization the histogram occupies the entire range from 0 to 255 (Fig. 5) and the visualization effect is enhanced.

Fig 4. Original histogram of a fingerprint image    Fig 5. Histogram after histogram equalization

            Fig 6. Image enhancement by histogram equalization 6(a).Original image 6(b).Image after histogram equalization
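A minimal sketch of this step using OpenCV, assuming the image is available as an 8-bit grayscale file (the file names are placeholders):

```python
import cv2

# Histogram equalization of an 8-bit grayscale fingerprint image so that its
# pixel values spread over the full 0-255 range.
img = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)
equalized = cv2.equalizeHist(img)
cv2.imwrite("fingerprint_equalized.png", equalized)
```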

          2. Fingerprint Enhancement by Fourier Transform:

The image is divided into small processing blocks of 32 by 32 pixels and the Fourier transform of each block is computed according to:

F(u, v) = Σ_{x=0}^{31} Σ_{y=0}^{31} f(x, y) · exp( −j·2π·(u·x/32 + v·y/32) ) ... (1)

for u = 0, 1, 2, ..., 31 and v = 0, 1, 2, ..., 31.

In order to enhance a specific block by its dominant frequencies, we multiply the FFT of the block by its magnitude a number of times, where the magnitude of the original FFT is abs(F(u, v)) = |F(u, v)|. The enhanced block is obtained according to:

g(x, y) = F⁻¹( F(u, v) × |F(u, v)|^k ) ... (2)

where F⁻¹(F(u, v)) is the inverse Fourier transform:

f(x, y) = (1/1024) Σ_{u=0}^{31} Σ_{v=0}^{31} F(u, v) · exp( j·2π·(u·x/32 + v·y/32) ) ... (3)

for x = 0, 1, 2, ..., 31 and y = 0, 1, 2, ..., 31.

The value of k in formula (2) is an experimentally determined constant; we generally choose k = 0.45. A higher value of k improves the appearance of the ridges and can fill up small holes in ridges, but too high a value of k can result in false joining of ridges, turning a termination into a bifurcation. Fig 7 presents the image after FFT enhancement.

Fig 7. Fingerprint enhancement by FFT: enhanced image (left), original image (right)

The enhanced image after FFT connects some falsely broken points on ridges and removes some spurious connections between ridges. The image shown at the left side of the figure was also processed with histogram equalization after the FFT transform.
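A small sketch of the block-wise FFT enhancement of equation (2), written with NumPy; the block size of 32 and k = 0.45 follow the text, while the rescaling to 0–255 and the handling of partial border blocks are illustrative choices:

```python
import numpy as np

def fft_enhance(img, k=0.45, block=32):
    """Block-wise FFT enhancement: each 32x32 block is multiplied in the
    frequency domain by its magnitude raised to the power k (eq. (2));
    partial border blocks are left unprocessed in this sketch."""
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    h, w = img.shape
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            B = img[r:r + block, c:c + block]
            F = np.fft.fft2(B)
            enhanced = np.real(np.fft.ifft2(F * np.abs(F) ** k))
            out[r:r + block, c:c + block] = enhanced
    # rescale to the 0-255 range for display
    out = 255.0 * (out - out.min()) / (out.max() - out.min() + 1e-9)
    return out.astype(np.uint8)
```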

      2. Fingerprint Image Binarization:

        The 8-bit gray fingerprint image is transformed into a 1-bit image where the 0-value represents ridges and the 1-value represents furrows; after this step, ridges are highlighted in black while furrows are white. To binarize the fingerprint image we use a locally adaptive binarization method: a pixel value is set to 1 if it is larger than the mean intensity value of the current 16×16 block to which the pixel belongs [Fig 8].

        Fig 8. Fingerprint image after adaptive binarization: binarized image (left), enhanced gray image (right)
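A minimal sketch of the locally adaptive binarization described above, assuming a NumPy grayscale array as input (block size 16 as in the text):

```python
import numpy as np

def adaptive_binarize(img, block=16):
    """Locally adaptive binarization: a pixel becomes 1 if its value is larger
    than the mean intensity of the 16x16 block it belongs to, so ridges map to
    0 (black) and furrows to 1 (white)."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for r in range(0, h, block):
        for c in range(0, w, block):
            B = img[r:r + block, c:c + block]
            out[r:r + block, c:c + block] = (B > B.mean()).astype(np.uint8)
    return out
```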

      3. Fingerprint Image Segmentation:

First, the image area without effective ridges and furrows, i.e. the background, is discarded. Then the bound of the remaining effective area is sketched out, since minutiae in the bound region are easily confused with the spurious minutiae generated where the ridges run out of the sensor area. The remaining region is called the region of interest (ROI).

The 2 step method is used to extract the ROI[3]:

  1. Block direction estimation and direction variety check.

  2. Morphological methods.

  1. Block direction estimation

In order to estimate the block direction for each W×W block of the fingerprint image (W is 16 pixels by default), the following algorithm is used:

    1. Calculate the gradient values along x-direction (gx) and y-direction (gy) for each pixel of the block. Two Sobel filters are used to fulfill the task.

    2. For each block, use the following formula to obtain the least-squares approximation of the block direction:

    tan 2β = 2·Σ(gx·gy) / Σ(gx² − gy²), summed over all the pixels in the block.

    The formula is easy to understand by regarding the gradient values along the x-direction and y-direction as cosine and sine values, so that the tangent of the (doubled) block direction is estimated in the same way as in the identity [3]:

    tan 2θ = 2·sinθ·cosθ / (cos²θ − sin²θ)

    After the estimation of each block direction, the blocks without significant information on ridges and furrows are discarded based on the following formula for the certainty level E:

    E = {2·Σ(gx·gy) + Σ(gx² − gy²)} / (W·W·Σ(gx² + gy²))

    For each block, if E is below a threshold, the block is regarded as a background block.
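The following sketch estimates the block directions and the certainty level E from Sobel gradients; the OpenCV Sobel call, the dictionary output and the background threshold value are illustrative assumptions, while the formulas follow the text above:

```python
import numpy as np
import cv2

def block_directions(img, W=16, threshold=0.05):
    """Estimate the ridge direction of each WxW block from Sobel gradients and
    flag background blocks whose certainty level E falls below a threshold."""
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    h, w = img.shape
    directions, foreground = {}, {}
    for r in range(0, h - h % W, W):
        for c in range(0, w - w % W, W):
            bx, by = gx[r:r + W, c:c + W], gy[r:r + W, c:c + W]
            num = 2.0 * np.sum(bx * by)
            den = np.sum(bx ** 2 - by ** 2)
            directions[(r, c)] = 0.5 * np.arctan2(num, den)   # block direction
            E = (num + den) / (W * W * np.sum(bx ** 2 + by ** 2) + 1e-9)
            foreground[(r, c)] = E >= threshold               # False -> background
    return directions, foreground
```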

    Fig.9 Direction map: binarized fingerprint (left), direction map (right).

  2. ROI extraction by Morphological operations

  The morphological operations adopted are:

  1. OPEN

  2. CLOSE

The OPEN operation can expand images and remove peaks introduced by background noise [Fig 10]. The CLOSE operation can shrink images and eliminate small cavities [Fig 11].

        Fig 10. Original image area    Fig 11. After CLOSE operation

        Fig 12. After OPEN operation    Fig 13. ROI + bound

        The bound is the subtraction of the closed area from the opened area. The algorithm then throws away the leftmost, rightmost, uppermost and bottommost blocks outside the bound so as to obtain the tightly bounded region containing only the bound and inner area.

      1. MINUTIA EXTRACTION

        1. Fingerprint ridge thinning:

          Thinning is used to eliminate the redundant pixels of ridges until the ridges are just one pixel wide. An iterative, parallel thinning algorithm is used: it marks down redundant pixels in each small 3×3 image window in each scan of the full fingerprint image and finally removes all the marked pixels after several scans. An alternative one-in-all method extracts thinned ridges from gray-level fingerprint images directly by tracing along the ridges of maximum gray intensity; since only pixels with maximum gray intensity are retained, binarization is implicitly enforced [3].

          Fig.14. Thinning

        2. Minutia Marking:

        After the fingerprint ridge thinning, marking minutia points is relatively easy. For each 3×3 window, if the central pixel is 1 and has exactly 3 one-value neighbors, then the central pixel is a ridge bifurcation [Fig 15]. If the central pixel is 1 and has only 1 one-value neighbor, then the central pixel is a ridge ending [Fig 16].

        0 1 0        0 0 0
        0 1 0        0 1 0
        1 0 1        0 0 1

        Fig 15. Bifurcation (left)    Fig 16. Termination (right)

        0 1 0
        0 1 1
        1 0 0

        Fig 17. Triple counting branch

        Fig 17 illustrates a special case in which a genuine branch is triple counted. Suppose both the uppermost pixel with value 1 and the rightmost pixel with value 1 have another neighbor outside the 3×3 window, so the two pixels will be marked as branches too. But only one branch is located in the small region. So a check routine requiring that none of the neighbors of a branch are branches is added.
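A minimal sketch of the neighbour-count marking rule described above, operating on a thinned 0/1 ridge map held in a NumPy array (the extra check that suppresses triple-counted branches is omitted for brevity):

```python
import numpy as np

def mark_minutiae(skel):
    """Mark minutiae on a thinned 0/1 ridge map: a ridge pixel with exactly one
    one-valued neighbour is a termination, and one with exactly three
    one-valued neighbours is a bifurcation."""
    terminations, bifurcations = [], []
    h, w = skel.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if skel[y, x] != 1:
                continue
            neighbours = int(skel[y - 1:y + 2, x - 1:x + 2].sum()) - 1
            if neighbours == 1:
                terminations.append((x, y))
            elif neighbours == 3:
                bifurcations.append((x, y))
    return terminations, bifurcations
```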

        Also the average inter-ridge width D is estimated at this stage. The average distance between two neighboring ridges is called as the average inter-ridge width. The D value is approximated in a simple way. Scan a row of the thinned ridge image and sum up all pixels in the row whose value is one. Then divide the row length with the above summation to get an inter-ridge width. For more accuracy, such kind of row scan is performed upon several other rows and column scans are also conducted, finally all the inter-ridge widths are averaged to get the D[3]. Together with the minutia marking, all thinned ridges in the fingerprint image are labeled with a unique ID for further operation. The labeling operation is realized by using the Morphological operation: BWLABEL.

        Fig.18 Minutiae marking(extraction)

    1. Minutia Postprocessing

      1. False Minutia Removal

The preprocessing stage does not totally heal the fingerprint image; in fact, the earlier pre-processing stages themselves occasionally introduce errors which later lead to false minutiae, significantly affecting the matching accuracy if they are simply treated as genuine minutiae. Some mechanism for removing false minutiae is therefore essential to keep the fingerprint verification system effective. Seven types of false minutiae are specified in the following diagrams [3]:

        m1 m2 m3 m4

        m5 m6

        m7

        Fig 19.False Minutia Structures.m1 is a spike piercing into a valley. In the m2 case a spike falsely connects two ridges. m3 has two near bifurcations located in the same ridge. The two ridge broken points in the m4 case have nearly the same orientation and a short distance. m5 is alike the m4 case with the exception that one part of the broken ridge is so short that another termination is generated. m6 extends the m4 case but with the extra property that a third ridge is found in the middle of the two parts of the broken ridge. m7 has only one short ridge found in the threshold window.

        The following procedure is used to remove false minutiae:

        1. If the distance between a bifurcation and a termination is less than D and the two minutiae are in the same ridge (m1 case), remove both of them, where D is the average inter-ridge width representing the average distance between two parallel neighboring ridges.

        2. If the distance between two bifurcations is less than D and they are in the same ridge, remove the two bifurcations (m2, m3 cases).

        3. If two terminations are within a distance D, their directions are coincident up to a small angle variation, and no other termination is located between them, then the two terminations are regarded as false minutiae derived from a broken ridge and are removed (m4, m5, m6 cases).

        4. If two terminations are located on a short ridge with length less than D, remove the two terminations (m7 case).

          There are two advantages of this procedure:

          1. The ridge ID is used to distinguish minutia and the seven types of false minutia are strictly defined comparing with those loosely defined by other methods.

          2. The order of removal procedures is well considered to reduce the computation complexity.

      2. Unify terminations and bifurcations

Each minutia is characterized by the following parameters at last:

1) x-coordinate, 2) y-coordinate, and 3) orientation.

The orientation calculation for a bifurcation is specially considered. All three ridges deriving from the bifurcation point have their own direction [Fig 20]. One option is to choose the minimum angle among the three anticlockwise orientations starting from the x-axis, but such methods cast the other two directions away, so some information is lost. Instead, the bifurcation is decomposed into three terminations: the three new terminations are the three neighbor pixels of the bifurcation, and each of the three ridges connected to the bifurcation is now associated with its own termination [Fig 20a].

Fig.20. A bifurcation decomposed into three terminations: the three neighbors become terminations (left); each termination has its own orientation (right). The corresponding 3×3 pixel window:

0 0 1
1 1 0
0 0 1

The orientation of each new termination (tx, ty) is estimated by the following method: track a ridge segment whose starting point is the termination and whose length is D; sum up all x-coordinates of points in the ridge segment and divide the summation by D to get sx; obtain sy in the same way; then get the direction from atan((sy − ty)/(sx − tx)).

4.4.1. Alignment Stage:

  1. The ridge associated with each minutia is represented as a series of x-coordinates (x1, x2, ..., xn) of the points on the ridge. A point is sampled every ridge length L starting from the minutia point, where L is the average inter-ridge length, and n is set to 10 unless the total ridge length is less than 10·L.

    The similarity of the two correlated ridges is derived from:

    S = Σ_{i=0}^{m} xi·Xi / [ Σ_{i=0}^{m} xi²·Xi² ]^0.5

    where (x1, ..., xn) and (X1, ..., XN) are the sets of ridge points for the two fingerprint images respectively, and m is the smaller of n and N. If the similarity score is larger than 0.8, go to step 2; otherwise continue to match the next pair of ridges.

  2. For each fingerprint, translate and rotate all other minutiae with respect to the reference minutia according to the following formula:

    ( xi_new, yi_new, θi_new )ᵀ = TM × ( xi − x, yi − y, θi − θ )ᵀ

    where (x, y, θ) are the parameters of the reference minutia, and

    TM = | cos θ   −sin θ   0 |
         | sin θ    cos θ   0 |
         |   0        0     1 |
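A small sketch of this translation-and-rotation step, applying the matrix TM above to a list of (x, y, θ) minutiae with respect to a chosen reference minutia (function name and data layout are assumptions):

```python
import numpy as np

def align_minutiae(minutiae, ref):
    """Translate and rotate a minutiae set of (x, y, theta) tuples into the
    coordinate system of the reference minutia ref = (x_r, y_r, theta_r),
    using the rotation matrix TM given above."""
    xr, yr, tr = ref
    c, s = np.cos(tr), np.sin(tr)
    TM = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    aligned = []
    for x, y, t in minutiae:
        v = TM @ np.array([x - xr, y - yr, t - tr])
        aligned.append((v[0], v[1], v[2]))
    return aligned
```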

Fig.21 Real minutiae points

    1. Minutia Match

      We consider the two sets of minutia points and apply a matching algorithm.

      An alignment-based match algorithm is used. It includes two consecutive stages: i) alignment stage and ii) match stage [3].

      1. Alignment stage: Given the two fingerprint images to be matched, i.e. the composite image and the image obtained dynamically, choose any one minutia from each image and calculate the similarity of the two ridges associated with the two referenced minutiae. If the similarity is larger than a threshold, transform each set of minutiae to a new coordinate system whose origin is at the referenced point and whose x-axis is coincident with the direction of the referenced point.

      2. Match stage: After we get the two sets of transformed minutia points, we use the elastic match algorithm to count the matched minutia pairs, assuming that two minutiae having nearly the same position and direction are identical.

The following diagram illustrates the effect of translation and rotation:

Fig 22. Effect of translation and rotation (minutiae D, E and F shown in the original X-Y and transformed X'-Y' coordinate systems)

The new coordinate system is originated at minutia F and the new x-axis is coincident with the direction of minutia F. No scaling effect is taken into account, assuming that two fingerprints from the same finger have nearly the same size.

4.4.2. Match Stage:

The matching algorithm for the aligned minutia patterns needs to be elastic, since a strict match requiring that all parameters (x, y, θ) be the same for two identical minutiae is impossible due to slight deformations and inexact quantization of the minutiae.

A bounding box is placed around each template minutia. If the minutia to be matched is within the rectangle box and the direction discrepancy between them is very small, then the two minutia are regarded as a matched minutia pair. Each minutia in the template image either has no matched minutia or has only one corresponding minutia[3].

The final match ratio for two fingerprints is the number of matched pairs divided by the number of minutiae of the template fingerprint. The score is 100 × ratio and ranges from 0 to 100. If the score is larger than a pre-specified threshold, the two fingerprints are from the same finger. However, the elastic match algorithm has a large computational complexity and is vulnerable to false minutiae.
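A minimal sketch of the bounding-box match stage, assuming both minutiae sets are already aligned; the box size and angle tolerance are illustrative values, not the paper's exact parameters:

```python
def match_minutiae(template, query, box=10.0, angle_tol=0.26):
    """Count matched pairs between two aligned minutiae sets: a query minutia
    matches a template minutia if it falls inside a small bounding box around
    it and their directions differ by only a small angle. The box size (pixels)
    and angle tolerance (radians) are illustrative values."""
    matched, used = 0, set()
    for tx, ty, tt in template:
        for j, (qx, qy, qt) in enumerate(query):
            if j in used:
                continue
            if abs(qx - tx) <= box and abs(qy - ty) <= box and abs(qt - tt) <= angle_tol:
                matched += 1
                used.add(j)     # each query minutia can match at most once
                break
    return 100.0 * matched / len(template)   # match score in the range 0-100
```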

II. SIGNATURE VERIFICATION

  1. Types of Signature Verification :

    There are two different approaches of signature verification.

    1. Off-Line or Static Signature Verification Technique:

      The design of any offline signature verification system generally requires the solution of five sub-problems: data acquisition, pre-processing, feature extraction, comparison and performance evaluation. This approach is based on static characteristics of the signature, which are invariant. Signature recognition thus becomes a typical pattern recognition task: since variations in the signature pattern are inevitable, the task of signature authentication can be narrowed down to determining the threshold of the range of genuine variation. In offline systems, the images of signatures written on paper are obtained using a scanner or a camera.

    2. On-line or Dynamic Signature Verification Technique:

      The second type of signature verification technique is the online or dynamic system. This technique is based on the dynamic characteristics of the process of signing. The verification uses signatures captured by pressure-sensitive tablets that extract dynamic properties of a signature in addition to its shape. The number and order of the strokes, the overall speed of the signature and the pen pressure at each point are dynamic features that make the signature more unique and more difficult to forge. Application areas of online signature verification include protection of small personal devices (e.g. PDA, laptop), authorization of computer users for accessing sensitive data or programs, and authentication of individuals for access to physical devices or buildings.

2. PREPROCESSING

      The main aim of performing pre-processing on an image is to enhance the image quality and obtain a transformed image.

      The steps involved in Preprocessing are:-

      1. Conversion

      2. Noise Removal

      3. Rotation

      4. Smoothing

      5. Thinning

      6. Signature Extraction

      7. Normalization

  1. Converting colored images to gray-scale images

    The output of scanning and image-capturing devices is in color format. A colored image consists of a coordinate matrix and three color matrices; the coordinate matrix contains the X, Y coordinate values of the image, and the color matrices are denoted red (R), green (G) and blue (B). Scanned or captured color images are first converted to gray scale using the following equation (1) [6]:

    Gray = 0.299 × Red + 0.587 × Green + 0.114 × Blue ... (1)

    Fig.23 Original image    Fig.24 Gray-scale image
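A one-function sketch of the conversion in equation (1), assuming the color image is a NumPy array of shape H x W x 3 with 8-bit RGB values:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an RGB signature image (H x W x 3, values 0-255) to grayscale
    using the luma weights from equation (1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
```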

  2. Noise removal: Noise removal is the removal of unwanted pixels from the image; it is also called smoothing or noise filtering. Images are often damaged by positive and negative impulses stemming from decoding errors or noisy channels, and undesirable effects due to illumination and other objects in the environment may also degrade an image. The median filter is widely used for smoothing and restoring images corrupted by noise; it is a non-linear process well suited to reducing impulsive (salt-and-pepper) noise. In a median filter, a window slides over the image, and for each position of the window the median intensity of the pixels inside it determines the intensity of the pixel located at the middle of the window [6].

    Fig.25 noise removal
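A short sketch of the median-filtering step using OpenCV (file names and the 3×3 window size are placeholders):

```python
import cv2

# Median filtering to suppress salt-and-pepper noise in the scanned signature:
# each pixel is replaced by the median of the 3x3 window centred on it.
gray = cv2.imread("signature_gray.png", cv2.IMREAD_GRAYSCALE)
denoised = cv2.medianBlur(gray, 3)
cv2.imwrite("signature_denoised.png", denoised)
```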

  3. Rotation: Rotation of the signature is necessary because time-domain approaches are sensitive to angle variations compared to frequency-domain approaches. Rotation makes the axis of inertia of all the signatures coincide with the same horizontal axis; the signature is then rotated clockwise to remove skewness.

  4. Smoothing: Smoothing is performed to remove noise from the signature and to expose its features for further processing. An adaptive filter, which preserves edges and high-frequency components of the signature, is used for smoothing.

  5. Thinning: Thinning is a morphological process necessary for the reduction of data and processing time. It consists of two sub-iterations: one deletes the south-east boundary points and the north-west corner points, while the other deletes the north-west boundary points and the south-east corner points. It reduces the signature to a skeleton of unitary thickness.

  6. Signature extraction: The extra background created by rotation is removed by extracting the smallest box that covers the signature. The smallest box is determined by the height and width of the signature, and the image is then cropped to the measured dimensions. Thresholding is widely used for this purpose: a threshold value H is chosen, pixel values less than or equal to H are assigned 0 and those greater than H are assigned 1, which separates the signature pixels from the background pixels. Since the interest is in dark objects on a light background, the brightness threshold H is chosen approximately and applied to each image pixel f(x, y) as in equation (2) [6]:

    If f(x, y) ≥ H then f(x, y) = Background, else f(x, y) = Object ... (2)

    The signature image obtained by separating it from the complex background detail is converted into a binary image, with the white background taking the pixel value 1.

  7. Normalisation: Normalization is required to standardize the size of signatures, which show interpersonal and intrapersonal differences. Due to irregularities in the image scanning and capturing process the image dimensions may vary; moreover, the height and width of signatures vary from person to person and sometimes even the same person signs at different sizes. These size differences must therefore be removed or minimised so that all signatures have a standard size. All signatures have the same dimensions after the normalisation process, and during normalization the aspect ratio between the width and height of a signature is kept intact [6]. Normalization is done using the following equations (3) and (4):

    xi' = ((xi − xmin) / (xmax − xmin)) × M ... (3)

    yi' = ((yi − ymin) / (ymax − ymin)) × M ... (4)

    where M is the standard (normalized) dimension.

    Fig.26 Normalized image
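One possible way to realize the normalization step while keeping the aspect ratio intact, sketched with OpenCV; the target size M = 256 is an illustrative choice, not a value from the paper:

```python
import cv2

def normalize_signature(cropped, M=256):
    """Scale a cropped signature image so that its longer side equals M pixels,
    keeping the aspect ratio intact."""
    h, w = cropped.shape[:2]
    scale = M / float(max(h, w))
    new_size = (int(round(w * scale)), int(round(h * scale)))  # (width, height)
    return cv2.resize(cropped, new_size, interpolation=cv2.INTER_NEAREST)
```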

h) Gray-scale image: Images without color (achromatic images) are gray-scale images. The levels of a gray scale range from 0 (black) to 1 (white).

  1. FEATURE EXTRACTION:

    To improve the accuracy of the signature verification system we perform the task of feature extraction. The following are the features that are extracted from the pre-processed image[4]:

    1. Height to width ratio (F1):

      The ratio of height to width of the signature is the feature F1. The coordinates of the bounding box of the cropped signature are determined, and from these coordinates the height and the width are computed. The height and width of a person's signature can change from one instance to another, but the ratio of height to width of an individual's signature is approximately constant.

      F1 = (height of the signature / width of the signature)

    2. Occupancy Ratio (F2):

      It is the ratio of number of pixels which belongs to the signature and the total number of pixels of the signature image. The information about the signature density is provided by this feature.

      F2= (number of pixels which belongs to the signature/ total number of pixels in the Signature image)

    3. Density ratio (F3): The signature image is divided into two halves vertically. F3 is the ratio of number of pixels which belong to the left half of the signature image to the number of pixels which belong to the right half of the signature image. The signature density ratio of the two halves of the signature image information is provided by F3.

      F3 = (density of the left half of the signature / density of the right half of the signature)

    4. Critical points (F4): The large variation in the intensity in all the directions is found in the regions of corners. The corner points are treated as the critical points. These critical points are counted in the signature image. The counting is done by using Harris corner method.

    5. Center of gravity (F5): the white pixels in the binary image are treated as ON pixels. F5 is the average coordinate point of all ON pixels of the binary signature image.

    6. Slope of center of gravity (F6)-

      The signature image is divided into two halves. The center of gravity of each part is determined separately. The slope of the line joining the center of gravities is determined.

    7. Center of masses of sub-regions (F7)- Firstly the signature is divided vertically to get two center of masses. Then, each half of signature images is divided horizontally to get four center of masses. Again, four regions of signature image are divided vertically to get eight centers of masses. Finally, eight regions of signature image are divided horizontally to get sixteen centers of masses. The feature F7 is the above thirty centers of masses of the signature image.

      The above features F1 to F7 are extracted and stored in a feature vector. This feature vector is used to train the system as well as for verification of a sample signature.
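A partial sketch of the feature extraction, computing F1, F2, F3 and F5 from a cropped binary signature image in which ON (signature) pixels have value 1; the remaining features are omitted for brevity:

```python
import numpy as np

def extract_features(binary_sig):
    """Compute features F1, F2, F3 and F5 from a cropped binary signature image
    in which signature (ON) pixels have value 1."""
    h, w = binary_sig.shape
    ys, xs = np.nonzero(binary_sig)
    f1 = h / float(w)                                  # F1: height-to-width ratio
    f2 = len(xs) / float(h * w)                        # F2: occupancy ratio
    left = binary_sig[:, : w // 2].sum()
    right = binary_sig[:, w // 2:].sum()
    f3 = left / float(right) if right else 0.0         # F3: density ratio of the halves
    f5 = (xs.mean(), ys.mean())                        # F5: centre of gravity (x, y)
    return f1, f2, f3, f5
```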

  2. VERIFICATION

The features are extracted from the signature images of the different individuals, and the mean features are determined from the features extracted for each person. All the features of the query image are extracted and the Euclidean distance is calculated with respect to the mean signature features of the original signature images. The acceptance range is set according to the maximum and minimum Euclidean distance values of the original images. If the distance of the query image with respect to the mean signature is within the acceptance range, the signature is authenticated; otherwise it is detected as a forgery.

The performance of the system can be measured by using three different percentages [5]:

    1. False rejection rate (FRR): it is the percentage of the original signatures that are incorrectly classified.

    2. False acceptance rate (FAR): it is the percentage of the forgeries that are incorrectly classified.

    3. Accuracy: it is the percentage of signatures that are correctly classified.

The threshold must be chosen such that there is an acceptable trade-off between FAR and FRR. If the selected threshold value is high, FAR will increase; if the threshold value is low, FRR will increase. For this purpose the threshold value is chosen as 2.5.

    1. Algorithm for signature verification using Euclidean Distance[5]:

      1. Input the set of signatures of a person.

      2. Convert the image into gray scale and binary image

      3. Perform noise reduction on both images

      4. Perform the rotation on binary image in order to equalize the inclination based on the baseline slant.

      5. Find the bounding box of image and crop the image

      6. Extract the features F1, F2, F3, F4, F5, F6 and F7 from the signature and store in a feature matrix.

      7. Dataset is created by computing the mean signature feature values.

      8. Calculate the Euclidean distance of the query signature features from mean signature features of the dataset.

      9. If the distance is below a certain threshold then query signature is verified to that of the claimed person otherwise it is detected as a forged one.
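A minimal sketch of the verification decision in steps 8 and 9, comparing the query feature vector against the mean (template) feature vector with the Euclidean distance; the threshold of 2.5 follows the text, and any feature scaling is left out:

```python
import numpy as np

def verify(query_features, mean_features, threshold=2.5):
    """Accept the query signature if the Euclidean distance between its feature
    vector and the mean (template) feature vector is below the threshold."""
    q = np.asarray(query_features, dtype=float)
    m = np.asarray(mean_features, dtype=float)
    distance = np.linalg.norm(q - m)
    return distance <= threshold, distance
```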

  1. EXPERIMENTAL RESULTS:

    1. Results and Discussions:

      1. Experimental Results for fingerprint recognition: We have two different techniques for obtaining a composite minutiae set. The two minutiae sets are

        1] the set extracted from the composite image (MR1), and

        2] the set obtained by integrating the individual minutiae sets (MR2).

        These sets are treated as template minutiae sets against which the query minutiae set can be matched. Four different impressions of the same finger were obtained; the first two impressions were used to construct the composite minutiae set of the finger, and the other two were used as query images. In this way, four impressions of each of 8 subjects were obtained for the mosaicking experiment, and the composite image for each user was generated. The average number of minutiae increases from 36 to 45 after mosaicking. Matching was performed using the minutiae matching algorithm. The matching performance is observed to improve significantly when the composite template is used instead of the individual impressions, which therefore gives the best performance. Consider a query image IU with minutiae set MU and the template minutiae sets MP, MQ, MR1, MR2; the following comparisons are made: 1) MU with MP

    2. MU with MQ

    3. MU with MR1

    4. MU with MR2

      The ROC curves depict the performance of these 4 different matchings. From the graph it is clear that composite images result in improved matching performance. Better matching performance is given by MR1 than by MR2, which may be due to incorrect minutiae orientations in MR2.

      Fig.27 The ROC curves indicating improvement in matching performance after mosaicking templates.

        1. Experimental Results for signature verification:

          Database | Original signatures | Test signatures | Max threshold (FRR / FAR / Acc) | Pre-computed threshold (FRR / FAR / Acc) | Average threshold (FRR / FAR / Acc)

          D1 | 10 | 6 originals, 10 forgeries | 0 / 50 / 68.75 | 50 / 0 / 81.2 | 33.3 / 40 / 62.5

          D2 | 8 | 4 originals, 12 forgeries | 0 / 0 / 100 | 25 / 0 / 93.7 | 25 / 0 / 93.7

          D3 | 8 | 4 originals, 12 forgeries | 25 / 16.7 / 81.2 | 25 / 0 / 93.7 | 25 / 0 / 93.7

          Signature verification results (all values in %)

          The false acceptance rate (FAR), false rejection rate (FRR) and accuracy have been tested using various threshold values. The thresholds are:

          1] Max threshold: for calculating the Euclidean distance, the max threshold value of 2.5 is used; the value should not be greater than 5, and it has been observed that this threshold value is crossed by very few original signatures.

          2] Pre-computed threshold: this value is computed while creating the dataset. The maximum distance from the original signatures to the mean signature determines this type of threshold value. This value usually increases the FRR and decreases the FAR.

          3] Average threshold: the minimum of the max threshold and the pre-computed threshold is calculated; this is called the average threshold and the performance is tested with this minimum value. It results in an acceptable trade-off between FAR and FRR.

  2. SUMMARY AND CONCLUSION:

    For fingerprint recognition we have described a construction technique for the composite template that integrates the information available in two different impressions of the same finger. The corresponding minutiae points are used to establish an initial approximate alignment, and a modified ICP algorithm is used to register the two impressions. The ICP algorithm generates the transformation matrix from which the composite template is constructed. The minutiae points extracted from this composite template are then tested against query images of the same finger.

    For signature verification, the proposed method promises a simple and reliable solution to the verification problem. It includes an image clustering process based on a Euclidean distance approach, which enables the handling of clusters of signatures of different sizes and shapes.

    In this way fingerprint recognition and signature verification are combined, giving satisfactory accuracy.

  3. FUTURE WORK:

    Both of these concepts can be used together in the construction of a stylus providing high security and user authentication. The stylus would be constructed so that it captures fingerprint images as well as matches the user's signature. Such a pen could further be used in many government as well as private organizations to prevent unauthorized access to their systems.

  4. REFERENCES:

      1. A. Jain and A. Ross, "Fingerprint Mosaicking," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 4, pp. 4064-4067, Orlando, FL, May 2002.

      2. A. Jain and A. Ross, "Fingerprint Mosaicking," Michigan State University, East Lansing, MI, USA, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Orlando, Florida, May 13-17, 2002.

      3. Wu Zhili (Vincent AT comp.hkbu.edu.hk), OEW 801, Hong Kong Baptist University.

      4. Ranjan Jana et al., "Offline Signature Verification using Euclidian Distance," International Journal of Computer Science and Information Technologies (IJCSIT), vol. 5, no. 1, pp. 707-710, 2014.

      5. Berthold K. P. Horn, "Closed-form solution of absolute orientation using unit quaternions," Journal of the Optical Society of America A, vol. 4, no. 4, pp. 629-642, April 1987.

      6. "Signature Recognition & Verification System Using Back Propagation Neural Network," International Journal of IT, Engineering and Applied Sciences Research (IJIEASR), vol. 2, no. 1, January 2013.
