Combining Left and Right Palmprint Images for Personal Identification

DOI: 10.17577/IJERTCONV5IS17006


A. Jenifer
M.E. Student, Department of Computer Science and Engineering, Institute of Road and Transport Technology, Erode.

Mrs. A. Kavidha
Associate Professor, Department of Computer Science and Engineering, Institute of Road and Transport Technology, Erode.

Abstract: Multibiometrics can provide higher identification accuracy than single biometrics, so it is more suitable for real-world personal identification applications that need high-standard security. Among various biometrics technologies, palmprint identification has received much attention because of its good performance. Combining the left and right palmprint images to perform multibiometrics is easy to implement and can obtain better results. However, previous studies did not explore this issue in depth. In this paper, we propose a novel framework to perform multibiometrics by comprehensively combining the left and right palmprint images. This framework integrates three kinds of scores generated from the left and right palmprint images to perform matching score level fusion. The first two kinds of scores are, respectively, generated from the left and right palmprint images and can be obtained by any palmprint identification method, whereas the third kind of score is obtained using a specialized algorithm proposed in this paper. As the proposed algorithm carefully takes the nature of the left and right palmprint images into account, it can properly exploit the similarity of the left and right palmprints of the same subject. Moreover, the proposed weighted fusion scheme allows perfect identification performance to be obtained in comparison with previous palmprint identification methods.

Index Terms: Palmprint recognition, biometrics, multibiometrics.

  1. INTRODUCTION

PALMPRINT identification is an important personal identification technology and it has attracted much attention.

The palmprint contains not only principal lines and wrinkles but also rich texture and minutiae points, so palmprint identification is able to achieve high accuracy because of the rich information available in the palmprint [1]-[8]. Various palmprint identification methods, such as coding based methods [5]-[9] and principal line based methods [10], have been proposed in the past decades. In addition to these methods, subspace based methods can also perform well for palmprint identification. For example, Eigenpalm and Fisherpalm [11]-[14] are two well-known subspace based palmprint identification methods. In recent years, 2D appearance based methods such as 2D Principal Component Analysis (2DPCA) [15], 2D Linear Discriminant Analysis (2DLDA) [16], and 2D Locality Preserving Projection (2DLPP) [17] have also been used for palmprint recognition. Further, the Representation Based Classification (RBC) method also shows good performance in palmprint identification [18]. Additionally, the Scale Invariant Feature Transform (SIFT) [19], [20], which transforms image data into scale-invariant coordinates, has been successfully introduced for contactless palmprint identification.

No single biometric technique can meet all requirements in all circumstances [21]. To overcome the limitations of unimodal biometric techniques and to improve the performance of biometric systems, multimodal biometric methods are designed by using multiple biometrics or multiple modalities of the same biometric trait, which can be fused at four levels: the image (sensor) level, the feature level, the matching score level, and the decision level [22]-[25]. For image level fusion, Han et al. [26] proposed a multispectral palmprint recognition method in which the palmprint images were captured under red, green, blue, and infrared illuminations and a wavelet-based image fusion method was used for palmprint recognition. Examples of fusion at the feature level include the combination and integration of multiple biometric traits. For example, Kumar et al. [27] improved the performance of palmprint-based verification by integrating hand geometry features, and in [28] and [29], the face and palmprint were integrated for personal identification. For fusion at the matching score level, various kinds of methods have also been proposed. For instance, Zhang et al. [30] designed a joint palmprint and palmvein fusion system for personal identification. Dai et al. [31] proposed a weighted sum rule to fuse palmprint minutiae, density, orientation, and principal lines for high resolution palmprint verification and identification. In particular, Morales et al. [20] proposed a combination of two kinds of matching scores obtained by multiple matchers, SIFT and Orthogonal Line Ordinal Features (OLOF), for contactless palmprint identification. A typical example of decision level fusion on the palmprint is that of Kumar et al. [32], who fused three major palmprint representations at the decision level.

Conventional multimodal biometrics methods treat different traits independently. However, some special kinds of biometric traits are similar to each other, and such methods cannot exploit the similarity between different kinds of traits. For example, the left and right palmprint traits of the same subject can be viewed as this kind of special biometric traits owing to the similarity between them, which will be demonstrated later. However, there has been almost no attempt to explore the correlation between the left and right palmprints, and there is no special fusion method for this kind of biometric identification. In this paper, we propose a novel framework that combines the left and right palmprints at the matching score level. Fig. 1 shows the procedure of the proposed framework. In the framework, three types of matching scores, which are respectively obtained by the left palmprint matching, the right palmprint matching, and the crossing matching between the left query and the right training palmprints, are fused to make the final decision. The framework not only combines the left and right palmprint images for identification, but also properly exploits the similarity between the left and right palmprints of the same subject. Extensive experiments show that the proposed framework can integrate most conventional palmprint identification methods for performing identification and can achieve higher accuracy than conventional methods.

Fig. 1. Procedures of the proposed framework.

This work has the following notable contributions. First, it for the first time shows that the left and right palmprints of the same subject are somewhat correlated, and it demonstrates the feasibility of exploiting the crossing matching score of the left and right palmprints for improving the accuracy of identity identification. Second, it proposes an elaborated framework that integrates the left palmprint, the right palmprint, and the crossing matching of the left and right palmprints for identity identification. Third, it conducts extensive experiments on both touch-based and contactless palmprint databases to verify the proposed framework.

The remainder of the paper is organized as follows: Section II briefly presents previous palmprint identification methods. Section III describes the proposed framework. Section IV reports the experimental results and Section V offers the conclusion of the paper.

2. PREVIOUS WORK

Generally speaking, the principal lines and the texture are two kinds of salient features of the palmprint. Principal line based methods and coding based methods have been widely used in palmprint identification. In addition, subspace based methods, representation based methods, and SIFT based methods can also be applied to palmprint identification.

A. Line Based Method

Lines are the basic feature of the palmprint, and line based methods play an important role in palmprint verification and identification. Line based methods use lines or edge detectors to extract the palmprint lines and then use them to perform palmprint verification and identification. In general, most palms have three principal lines, the heart line, the head line, and the life line, which are the longest and widest lines in the palmprint image and have stable line shapes and positions. Thus, principal line based methods are able to provide stable performance for palmprint verification.

Palmprint principal lines can be extracted by using the Gabor filter, the Sobel operator, or morphological operations. In this paper, the Modified Finite Radon Transform (MFRAT) method [10] is used to extract the principal lines of the palmprint. The pixel-to-area matching strategy of the Robust Line Orientation Code (RLOC) method [33] is adopted for principal lines matching, which defines a principal lines matching score as follows:

S(A, B) = Σ_{i=1}^{m} Σ_{j=1}^{n} (A(i, j) & B̄(i, j)) / N_A,   (1)

where A and B are two palmprint principal lines images, & represents the logical AND operation, N_A is the number of pixel points of A, and B̄(i, j) represents a neighbor area of B(i, j). For example, B̄(i, j) can be defined as a set of five pixel points: B(i - 1, j), B(i + 1, j), B(i, j), B(i, j - 1), and B(i, j + 1). The value of A(i, j) & B̄(i, j) will be 1 if A(i, j) and at least one point of B̄(i, j) are simultaneously principal lines points; otherwise, the value of A(i, j) & B̄(i, j) is 0. S(A, B) is between 0 and 1, and the larger the matching score is, the more similar A and B are. Thus, the query palmprint can be classified into the class that produces the maximum matching score.
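To make the pixel-to-area matching concrete, the following minimal Python/NumPy sketch (our own illustration, not code from the paper; function and variable names are ours) computes the score of Eq. (1) with the five-pixel cross neighborhood described above:

import numpy as np

def principal_line_score(A, B):
    # A, B: binary (0/1) integer arrays of equal size holding two
    # principal lines images.
    # Build B_bar by dilating B so that B_bar(i, j) = 1 if any of
    # B(i-1, j), B(i+1, j), B(i, j), B(i, j-1), B(i, j+1) is 1.
    B_bar = B.copy()
    B_bar[1:, :] |= B[:-1, :]   # include B(i-1, j)
    B_bar[:-1, :] |= B[1:, :]   # include B(i+1, j)
    B_bar[:, 1:] |= B[:, :-1]   # include B(i, j-1)
    B_bar[:, :-1] |= B[:, 1:]   # include B(i, j+1)
    # S(A, B): fraction of line points of A that fall on the dilated
    # lines of B; the result lies in [0, 1].
    return np.logical_and(A, B_bar).sum() / A.sum()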

B. Subspace Based Methods

Subspace based methods include PCA, LDA, ICA, etc. The key idea behind PCA is to find an orthogonal subspace that preserves the maximum variance of the original data. The PCA method tries to find the best set of projection directions in the sample space that maximize the total scatter across all samples, by using the following objective function:

J_PCA = arg max_W |W^T S_t W|,   (4)

where S_t is the total scatter matrix of the training samples, and W is the projection matrix whose columns are orthonormal vectors. PCA chooses the first few principal components and uses them to transform the samples into a low-dimensional feature space.

LDA tries to find an optimal projection matrix W that transforms the original space into a lower-dimensional feature space. In the low-dimensional space, LDA not only maximizes the Euclidean distance between samples from different classes but also minimizes the distance between samples from the same class. The goal of LDA is therefore to maximize the ratio of the between-class distance to the within-class distance, which is defined as:

J_LDA = arg max_W |W^T S_b W| / |W^T S_w W|,   (5)

where S_b is the between-class scatter matrix and S_w is the within-class scatter matrix. In subspace based palmprint identification methods, the query palmprint image is usually classified into the class that produces the minimum Euclidean distance to the query sample in the low-dimensional feature space.
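For concreteness, here is a brief sketch, under our own assumptions, of how a subspace method such as PCA can be used for palmprint identification: vectorized training palmprints are projected as in Eq. (4), and a query is assigned to the class of the nearest training sample. The helper names are hypothetical, not from the paper.

import numpy as np

def pca_train(X, k):
    # X: n x d matrix with one vectorized palmprint per row.
    mean = X.mean(axis=0)
    # The top-k right singular vectors of the centered data are the
    # leading eigenvectors of the total scatter matrix S_t (Eq. (4)).
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:k].T                     # d x k projection matrix
    return mean, W, (X - mean) @ W   # projected training features

def pca_identify(query, mean, W, feats, labels):
    # Classify the query to the class of the training sample with the
    # minimum Euclidean distance in the low-dimensional subspace.
    q = (query - mean) @ W
    return labels[int(np.argmin(np.linalg.norm(feats - q, axis=1)))]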

C. Representation Based Method

The representation based method uses the training samples to represent the test sample, and selects the candidate class with the maximum contribution to the test sample. The Collaborative Representation based Classification (CRC) method, the Sparse Representation based Classification (SRC) method, and the Two-Phase Test Sample Sparse Representation (TPTSSR) method are three representative representation based methods [35], [36]. Almost all representation based methods can be easily applied to palmprint identification. The CRC method uses all training samples to represent the test sample. Assuming that there are C classes and n training samples x_1, x_2, ..., x_n, CRC expresses the test sample as:

y = a_1 x_1 + a_2 x_2 + ... + a_n x_n,   (6)

where y is the test sample and a_i (i = 1, 2, ..., n) is the weight coefficient. This can be rewritten as y = XA, where A = [a_1 a_2 ... a_n]^T and X = [x_1 x_2 ... x_n]; x_1, x_2, ..., x_n and y are all column vectors. If X is nonsingular, A can be obtained by A = X^{-1} y. If X is singular, A can be obtained by A = (X^T X + λI)^{-1} X^T y, where λ is a small positive constant and I is the identity matrix. The contribution of the ith training sample to representing the test sample is a_i x_i, so the sum of the contributions from the jth class is s_j = a_{j1} x_{j1} + a_{j2} x_{j2} + ... + a_{jn} x_{jn}, where jk (k = 1, 2, ...) is the sequence number of the kth training sample from the jth class. The deviation of s_j from y can be calculated as:

e_j = ||y - (a_{j1} x_{j1} + a_{j2} x_{j2} + ... + a_{jn} x_{jn})||^2,   j = 1, 2, ..., C.   (7)

The TPTSSR method was proposed in 2011 and has performed well in face recognition and palmprint identification [37]. The method first determines the M nearest neighbor training samples of the test sample. It then uses the M selected training samples to represent the test sample, and assigns the query sample to the class with the greatest contribution to representing it.
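A compact sketch of CRC identification along the lines of Eqs. (6) and (7) is given below. It is our own illustration, not the authors' implementation; the regularization weight lam stands in for the small positive constant λ.

import numpy as np

def crc_identify(y, X, labels, lam=0.01):
    # X: d x n matrix whose columns are training samples; y: test
    # vector; labels: np.array giving the class of each column.
    # Solve the regularized system A = (X^T X + lam*I)^{-1} X^T y.
    n = X.shape[1]
    A = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    # Eq. (7): deviation of each class's reconstruction from y.
    classes = np.unique(labels)
    e = [np.linalg.norm(y - X[:, labels == c] @ A[labels == c]) ** 2
         for c in classes]
    return classes[int(np.argmin(e))]  # class with minimum deviation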

D. SIFT Based Method

SIFT was originally proposed in [19] for object classification applications and has been introduced for contactless palmprint identification in recent years [20], [38]. Contactless palmprint images exhibit severe variations in pose, scale, rotation, and translation, which make conventional palmprint feature extraction methods questionable on contactless imaging schemes; as a consequence, the identification accuracy of conventional palmprint recognition methods is usually not satisfactory for contactless palmprint identification. The features extracted by SIFT are invariant to image scaling and rotation, and partially invariant to changes of projection and illumination. The SIFT based method is therefore insensitive to scaling, rotation, projective, and illumination factors, which makes it advisable for contactless palmprint identification.

The SIFT based method first searches over all scales and image locations by using a difference-of-Gaussian function to identify potential interest points. Then an elaborated model is used to determine a finer location and scale at each candidate location, and keypoints are selected based on their stability. Next, one or more orientations are assigned to each keypoint based on local image gradient directions. Finally, the local image gradients are evaluated at the selected scale in the region around each keypoint [19]. In the identification stage, the Euclidean distance can be employed to determine the identity of the query image; a smaller Euclidean distance means a higher similarity between the query image and the training image.

3. THE PROPOSED FRAMEWORK

A. Similarity Between the Left and Right Palmprints

This subsection illustrates the correlation between the left and right palmprints. Fig. 2 shows palmprint images of four subjects: Fig. 2(a)-(d) show four left palmprint images of these subjects, Fig. 2(e)-(h) show the corresponding four right palmprint images, and Fig. 2(i)-(l) are the reverse images of the right palmprints shown in Fig. 2(e)-(h). It can be seen that the left palmprint image and the reverse right palmprint image of the same subject are somewhat similar.

Fig. 2. Palmprint images of four subjects. (a)-(d) are four left palmprint images; (e)-(h) are the four right palmprint images corresponding to (a)-(d); (i)-(l) are the reverse right palmprint images of (e)-(h).

Fig. 3(a)-(d) depict the principal lines images of the left palmprints shown in Fig. 2(a)-(d). Fig. 3(e)-(h) are the reverse right palmprint principal lines images corresponding to Fig. 2(i)-(l). Fig. 3(i)-(l) show the principal lines matching images of Fig. 3(a)-(d) and Fig. 3(e)-(h), respectively. Fig. 3(m)-(p) are matching images between left and reverse right palmprint principal lines images from different subjects; the four matching images of Fig. 3(m)-(p) are, respectively, the matching images of (a) and (f), (b) and (e), (c) and (h), and (d) and (g).

Fig. 3. Principal lines images. (a)-(d) are four left palmprint principal lines images; (e)-(h) are four reverse right palmprint principal lines images; (i)-(l) are principal lines matching images of the same people; (m)-(p) are principal lines matching images from different people.

Fig. 3(i)-(l) clearly show that the principal lines of the left and reverse right palmprints of the same subject have very similar shapes and positions, whereas the principal lines of the left and right palmprints of different individuals have very different shapes and positions, as shown in Fig. 3(m)-(p). This demonstrates that the principal lines of the left palmprint and the reverse right palmprint can also be used for palmprint verification/identification.
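The reverse images used above are simple flips of the right palmprint ROI. A short sketch, under our assumption that the reversal flips the row order as defined in Step 1 of the next subsection, together with a hypothetical cross-matching call:

import numpy as np

def reverse_palmprint(Y):
    # Reverse a right palmprint ROI so that it resembles a left one:
    # Y_bar(l, c) = Y(L_Y - l + 1, c), i.e. the row order is flipped.
    return np.flipud(Y)

# Hypothetical usage with an MFRAT-style line extractor extract_lines:
# score = principal_line_score(extract_lines(left_roi),
#                              extract_lines(reverse_palmprint(right_roi)))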

B. Procedure of the Proposed Framework

      This subsection describes the main steps of the proposed framework. The framework first works for the left palmprint images and uses a palmprint identification method to calculate the scores of the test sample with respect to each class. Then it applies the palmprint identification method to the right palmprint images to calculate the score of the test sample with respect to each class. After the crossing matching score of the left palmprint image for testing with respect to the reverse right palmprint images of each class is obtained, the proposed framework performs matching score level fusion to integrate these three scores to obtain the identification result.

      The method is presented in detail below.

We suppose that there are C subjects, each of which has m available left palmprint images and m available right palmprint images for training. Let X_i^k and Y_i^k denote the ith left palmprint image and the ith right palmprint image of the kth subject, respectively, where i = 1, ..., m and k = 1, ..., C. Let Z_1 and Z_2 stand for a left palmprint image and the corresponding right palmprint image of the subject to be identified. Z_1 and Z_2 are the so-called test samples.

Step 1: Generate the reverse images Ȳ_i^k of the right palmprint images Y_i^k, where Ȳ_i^k is obtained by Ȳ_i^k(l, c) = Y_i^k(L_Y - l + 1, c), (l = 1, ..., L_Y, c = 1, ..., C_Y), and L_Y and C_Y are the row number and column number of Y_i^k, respectively. Both Y_i^k and Ȳ_i^k will be used as training samples.

Step 2: Use Z_1, the X_i^k, and a palmprint identification method, such as one of the methods introduced in Section II, to calculate the score of Z_1 with respect to each class. The score of Z_1 with respect to the ith class is denoted by s_i.

Step 3: Use Z_2, the Y_i^k, and the palmprint identification method used in Step 2 to calculate the score of Z_2 with respect to each class. The score of Z_2 with respect to the ith class is denoted by t_i.

Step 4: Use Z_1, the reverse images Ȳ_i^k, and the same palmprint identification method to calculate the crossing matching score of Z_1 with respect to each class, denoted by g_i, and perform matching score level fusion of s_i, t_i, and g_i to obtain the identification result.

C. Matching Score Level Fusion

In the proposed framework, the final decision making is based on three kinds of information: the left palmprint, the right palmprint, and the crossing matching between the left and right palmprint. As we know, fusion in multimodal biometric systems can be performed at four levels. In image (sensor) level fusion, different sensors are usually required to capture the image of the same biometric. Fusion at the decision level is too rigid, since only abstract identity labels decided by the different matchers are available, which contain very limited information about the data to be fused. Fusion at the feature level involves concatenating several feature vectors to form a large 1D vector; the integration of features at this earlier stage can convey much richer information than at the other levels, so feature level fusion might be expected to provide better identification accuracy than fusion at other levels. However, fusion at the feature level is quite difficult to implement because of the incompatibility between multiple kinds of data, and concatenating different feature vectors also leads to a high computational cost. The advantages of score level fusion have been summarized in [21], [22], and [39], and the weighted-sum score level fusion strategy is effective for combining component classifiers to improve the performance of biometric identification. The strength of the individual matchers can be taken into account by assigning each of them a suitable weight.

Fig. 4 shows the basic fusion procedure of the proposed method at the matching score level. The final matching score is generated from three kinds of matching scores: the first and second matching scores are obtained from the left and right palmprint, respectively, and the third kind of score is calculated based on the crossing matching between the left and right palmprint. w_i (i = 1, 2, 3), which denotes the weight assigned to the ith matcher, can be adjusted and viewed as the importance of the corresponding matcher.

Fig. 4. Fusion at the matching score level of the proposed framework.
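The weighted-sum fusion can be sketched as follows; this is our own illustration, and the weight values shown are placeholders, not the tuned coefficients reported in the experiments.

import numpy as np

def fuse_scores(s, t, g, w=(0.4, 0.4, 0.2)):
    # s, t, g: per-class similarity scores from the left matcher, the
    # right matcher, and the crossing matcher; w = (w1, w2, w3).
    w1, w2, w3 = w
    return w1 * np.asarray(s) + w2 * np.asarray(t) + w3 * np.asarray(g)

# For similarity scores such as Eq. (1), the query is assigned to the
# class with the maximum fused score:
# identity = int(np.argmax(fuse_scores(s, t, g)))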

Fig. 5. (a)-(d) are two pairs of the left and right palmprint images of two subjects from the PolyU database.

Fig. 6. (a)-(d) are two pairs of the left and right hand images of two subjects from the IITD database; (e)-(h) are the corresponding ROI images extracted from (a)-(d).

Differing from the conventional matching score level fusion, the proposed method introduces the crossing matching score into the fusion strategy. When w_3 = 0, the proposed method is equivalent to the conventional score level fusion. Therefore, by suitably tuning the weight coefficients, the performance of the proposed method will be at least as good as, or even better than, that of conventional methods.

  4. EXPERIMENTAL RESULTS

More than 7,000 different images from both contact-based and contactless palmprint databases are employed to evaluate the effectiveness of the proposed method. Typical state-of-the-art palmprint identification methods, such as the RLOC method, the competitive code method, the ordinal code method, the BOCV method, and the SMCC method [7], are adopted to evaluate the performance of the proposed framework. Moreover, several recently developed contactless palmprint methods, such as the SIFT method [19] and the OLOF+SIFT method [20], are also used to test the proposed framework. For the sake of completeness, we compare the performance of our method with that of conventional fusion based methods.

A. Palmprint Databases

The PolyU palmprint database (version 2) [40] contains 7,752 palmprint images captured from a total of 386 palms of 193 individuals. The samples of each individual were collected in two sessions, where the average interval between the first and second sessions was around two months. In each session, each individual was asked to provide about 10 images of each palm. We note that some individuals provided fewer images; for example, only one image of the 150th individual was captured in the second session. To facilitate the evaluation of our framework, we set up a subset of the whole database by choosing 3,740 images of 187 individuals, where each individual provides 10 right palmprint images and 10 left palmprint images, to carry out the following experiments. Fig. 5 shows some palmprint samples from the PolyU database.

The public IITD palmprint database [41] is a contactless palmprint database. Images in the IITD database were captured in an indoor environment, as contactless hand images with severe variations in pose, projection, rotation, and translation. The main problem of contactless databases lies in the significant intra-class variations resulting from the absence of any contact or guiding surface to restrict such variations [20]. The IITD database consists of 3,290 hand images from 235 subjects. Seven hand images were captured from each of the left and right hands of each individual in every session. In addition to the original hand images, the Region Of Interest (ROI) palmprint images are also available in the database. Fig. 6 shows some typical hand images and the corresponding ROI palmprint images from the IITD palmprint database. Compared to the palmprint images in the PolyU database, the images in the IITD database are closer to real applications.

B. Matching Results Between the Left and Right Palmprint

To obtain the correlation between the left and right palmprints in both the PolyU and the IITD databases, each left palmprint is matched with every right palmprint of each subject, and the principal line matching score is calculated for the left palmprint and this subject. A match is counted as a genuine matching if the left palmprint is from the same subject; otherwise, the match is counted as an imposter matching.
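This matching protocol can be sketched as below; it is our own illustration and reuses principal_line_score from the earlier sketch, with the right palmprint lines assumed to be already reversed.

import numpy as np

def genuine_imposter_scores(left_lines, reversed_right_lines, labels):
    # left_lines / reversed_right_lines: principal lines images of the
    # left palmprints and of the reversed right palmprints; labels[i]
    # gives the subject of image i. Same-subject pairs yield genuine
    # scores; different-subject pairs yield imposter scores.
    genuine, imposter = [], []
    for i, L in enumerate(left_lines):
        for j, R in enumerate(reversed_right_lines):
            s = principal_line_score(L, R)
            (genuine if labels[i] == labels[j] else imposter).append(s)
    return np.array(genuine), np.array(imposter)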

It seems that the crossing matching score could also be calculated based on the similarity between the right query and the left training palmprints. We also conducted experiments that fuse both crossing matching scores to perform palmprint identification. However, as the use of the two crossing matching scores does not lead to a further accuracy improvement, we exploit only one of them in the proposed method.

C. Computational Complexity

In the proposed method, the processing of the reverse right training palmprints can be performed before palmprint identification, so the main computational cost of the proposed method largely depends on the individual palmprint identification method. Compared to the conventional fusion strategy, which fuses only two individual matchers, the proposed method consists of three individual matchers. As a result, the proposed method needs to perform one more identification than the conventional strategy, and its identification time may therefore be about 1.5 times that of the conventional fusion strategy.

To evaluate the computational cost of the proposed method, the algorithms adopted in the proposed method are implemented in MATLAB 7.10.0 on a PC with a dual-core Intel(R) i5-3470 (3.2 GHz), 8.00 GB RAM, and the Windows 7 operating system. The time taken to process the reverse right training palmprints for each class is about 4.24 s and 2.91 s on the two databases, respectively. Representative average identification times of the proposed method and the conventional fusion strategy are also reported.

    Fig. 8. The comparative results between the proposed method and the conventional fusion method on the PolyU database.

It is impossible to exhaustively verify all possible weight coefficients to find the optimal ones. Due to the limit of space, only a set of representative weight coefficients that minimize the final identification error rate of our framework and of the conventional fusion methods are reported. Empirically, the score that has the lower identification error rate usually receives a larger weight coefficient. In addition, the optimal weight coefficients vary with the methods, since each method adopted in the proposed framework utilizes a different palmprint feature extraction algorithm.
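One simple way to obtain such representative coefficients is a coarse grid search over the simplex w1 + w2 + w3 = 1 on a validation set. The sketch below is our own illustration, not the authors' tuning procedure.

import itertools
import numpy as np

def search_weights(s, t, g, true_ids, step=0.1):
    # s, t, g: (num_queries x num_classes) score matrices from the
    # three matchers; true_ids: ground-truth class index per query.
    best_w, best_err = None, 1.0
    for w1, w2 in itertools.product(np.arange(0.0, 1.0 + 1e-9, step),
                                    repeat=2):
        w3 = 1.0 - w1 - w2
        if w3 < -1e-9:
            continue  # outside the simplex
        pred = np.argmax(w1 * s + w2 * t + w3 * g, axis=1)
        err = float(np.mean(pred != true_ids))
        if err < best_err:
            best_w, best_err = (w1, w2, w3), err
    return best_w, best_err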

The first m left and m right palmprints of each subject are selected as training samples to calculate the left matching score s_i and the right matching score t_i, respectively, and the remaining left and right palmprints are used as test samples. The m reverse right palmprints are also used as training samples to calculate the crossing matching score g_i according to the rule of the proposed framework. Tables I-VI list the identification error rates of the proposed framework using different palmprint identification methods.

The experimental results on the PolyU database show that the identification error rate of the proposed method is about 0.06% to 0.2% lower than that of the conventional fusion methods. The comparison between the best identification results of the proposed method and the conventional fusion scheme is depicted in Fig. 8, which shows that the framework outperforms the conventional fusion schemes regardless of which underlying method is used.

  5. CONCLUSION

This study shows that the left and right palmprint images of the same subject are somewhat similar, and it explores the use of this kind of similarity for improving the performance of palmprint identification. The proposed method carefully takes the nature of the left and right palmprint images into account and designs an algorithm to evaluate the similarity between them. Moreover, by employing this similarity, the proposed weighted fusion scheme integrates the three kinds of scores generated from the left and right palmprint images. Extensive experiments demonstrate that the proposed framework obtains very high accuracy, and that the use of the similarity score between the left and right palmprints leads to an important improvement in accuracy.

This work also seems helpful in motivating the exploration of potential relations between the traits of other bimodal biometrics.

REFERENCES

[1] A. W. K. Kong, D. Zhang, and M. S. Kamel, "A survey of palmprint recognition," Pattern Recognit., vol. 42, no. 7, pp. 1408-1418, Jul. 2009.

[2] D. Zhang, W. Zuo, and F. Yue, "A comparative study of palmprint recognition algorithms," ACM Comput. Surv., vol. 44, no. 1, pp. 1-37, Jan. 2012.

[3] D. Zhang, F. Song, Y. Xu, and Z. Lang, "Advanced pattern recognition technologies with applications to biometrics," Med. Inf. Sci. Ref., Jan. 2009, pp. 1-384.

[4] R. Chu, S. Liao, Y. Han, Z. Sun, S. Z. Li, and T. Tan, "Fusion of face and palmprint for personal identification based on ordinal features," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2007, pp. 1-2.

[5] D. Zhang, W.-K. Kong, J. You, and M. Wong, "Online palmprint identification," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 9, pp. 1041-1050, Sep. 2003.

[6] A.-W. K. Kong and D. Zhang, "Competitive coding scheme for palmprint verification," in Proc. 17th Int. Conf. Pattern Recognit., vol. 1, Aug. 2004, pp. 520-523.

[7] W. Zuo, Z. Lin, Z. Guo, and D. Zhang, "The multiscale competitive code via sparse representation for palmprint verification," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2010, pp. 2265-2272.

[8] Z. Sun, T. Tan, Y. Wang, and S. Z. Li, "Ordinal palmprint representation for personal identification," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., vol. 1, Jun. 2005, pp. 279-284.

[9] A. Kong, D. Zhang, and M. Kamel, "Palmprint identification using feature-level fusion," Pattern Recognit., vol. 39, no. 3, pp. 478-487, Mar. 2006.

[10] D. S. Huang, W. Jia, and D. Zhang, "Palmprint verification based on principal lines," Pattern Recognit., vol. 41, no. 4, pp. 1316-1328, Apr. 2008.
