- Open Access
- Authors : Ravinder Kumar
- Paper ID : IJERTCONV5IS10041
- Volume & Issue : ICCCS – 2017 (Volume 5 – Issue 10)
- Published (First Online): 24-04-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
New Biometric Approach for Person Authentication
Ravinder Kumar
Department of Computer Science & Engineering, HMRITM, Affiliated with GGSIPU, Delhi, India
Abstract: This paper presents the finger-knuckle-print (FKP), a new member of the biometrics family, for person authentication. FKP is the inherent skin pattern of the outer surface around the phalangeal joint of a finger. It has a high capability to discriminate between individuals and offers several advantages over existing biometrics. The FKP recognition system comprises four major components: FKP image acquisition, ROI extraction, feature extraction, and feature matching. New entropy-based features are used for matching over a database consisting of 7920 images from 660 different fingers. The performance of the system is measured in terms of recognition rate, and promising results are obtained.
Keywords: Finger-knuckle-print; authentication; entropy-based features; biometrics.
I. INTRODUCTION
Biometrics refers to automated methods of recognizing a person based on his/her physiological or behavioral characteristics. Modalities used so far for recognition include face, fingerprint, hand geometry, handwriting, iris, retina, vein, and voice. Biometric technologies are becoming the foundation of an extensive array of highly secure identification and personal verification solutions. As the level of security breaches and transaction fraud increases, the need for highly secure identification and personal verification technologies is becoming apparent [1].
Among the various kinds of biometric identifiers, hand-based biometrics has been attracting considerable attention over recent years. Fingerprint, palm print, hand geometry, hand vein, and inner knuckle print have been proposed and well investigated in the literature. The popularity of hand-based biometrics can be attributed to its high user acceptance. Recently, it has been noticed that the texture of the outer finger surface has the potential to distinguish individuals: the image pattern on the finger knuckle surface is highly unique and can thus serve as a distinctive biometric identifier.
The finger knuckle print (FKP) is a relatively new biometric and is catching on due to its unique texture and ease of acquisition. Woodward et al. [2-3] extracted 3-D features from the finger surface to identify an individual. Gabor filters, which combine orientation and magnitude information, are used for feature extraction. Two subsets of features, based on statistics and on the Gabor transform, are fused at the feature level and at the score level to yield better identification accuracy. Motivated by the success of Local Gabor Binary Patterns (LGBP) in face recognition, the method in [4] uses LGBP to identify FKPs. Since Gabor features are high-dimensional, Orthogonal Linear Discriminant Analysis (OLDA) is applied for dimension reduction in the FKP recognition of [5].
The SURF algorithm in [6] uses different key-point detectors and feature descriptors. The key-point detector is based on an approximation of the Hessian matrix and uses integral images to reduce computation time. A 2D Gabor filter is applied in [7] to enhance the image, followed by Orientation Enhanced SIFT (OE-SIFT) descriptors. The technique in [8] uses a combination of two features derived from the gradient field of the FKP: the direction of the gradient field (orientation) and the coherence, which gives the strength of the averaged gradient in the distribution of local gradient vectors, in order to find the location of the phalangeal joint of the finger knuckle.
The Fourier transform coefficients of the image are taken in [9] as the feature and the band-limited phase-only correlation (BLPOC) technique is employed to calculate the similarity between two sets of Fourier transform coefficients.
Riesz CompCode is implemented in [10] for feature extraction; it consists of six bit-planes, three of which come from the image's responses to the 2nd-order Riesz transforms and the other three from the classical CompCode scheme. Hence, RieszCompCode combines the advantages of the Riesz transform and CompCode in characterizing local image features.
Monogenic Code [11] is applied as it implicitly reflects the range of the local orientation and the range of the local phase of the examined pixel. The monogenic signal is an isotropic 2-D extension of the 1-D analytic signal.
The distances and angles between image data vectors are considered in [12] to measure the data similarities, in the hope of sufficiently capturing the manifold structure. Both angle and distance are fused using the parallel fusion strategy, on which the complex locality preserving projections (CLPP) is applied to extract low-dimensional features capable of preserving the manifold structure of the input data set. In order to remove the redundant information among features, CLPP is extended to the approach of the orthogonal complex locality preserving projections (OCLPP), which produces orthogonal basis functions.
Woodard and Flynn in [13] use the 3D range image information of the hand to extract the curvature surface of the knuckles, and information comprising finger measurements, color, texture, and crease patterns is extracted from intensity images. Ravikanth et al. [15] have used 2D finger-back surface images and applied subspace analysis methods for feature extraction.
The fusion code is extracted in [14] using Gabor wavelets for both the palmprints and the finger knuckles; Hamming distance is then used to generate the scores, and the two modalities are fused at the decision level.
In this paper, we propose an FKP-based approach for person authentication. Entropy-based features are extracted from the finger-knuckle-print images. To reduce the size of the extracted features, feature selection and reduction techniques are used. Experiments are performed on the PolyU Finger-Knuckle-Print database, which consists of 7920 images from 660 different fingers. The performance of the system is measured in terms of recognition accuracy using various descriptors.
The paper is organized as follows. Section II describes the proposed FKP-based recognition system. Experimental results are presented in Section III, and Section IV presents the conclusions.
II. FKP RECOGNITION SYSTEM
The schematic diagram of our FKP-based personal authentication system is shown in Figure 1. The system is composed of a data acquisition module and a data processing module. The data acquisition module is composed of a finger bracket, a ring LED light source, a lens, a CCD camera and a frame grabber. The captured FKP image is inputted to the data processing module, which comprises three basic steps: region of interest (ROI) extraction, feature extraction and coding, and matching. A critical issue in data acquisition is to make the data collection environment as stable and consistent as possible so that variations among images collected from the same finger can be reduced to the minimum. In general, a stable image acquisition process can effectively reduce the complexity of the data processing algorithms and improve the image recognition accuracy. Meanwhile, we want to put as little constraint as possible on the users for high user friendliness of the system. With the above considerations, a semi-closed data collection environment is designed in our system. The LED light source and the CCD camera are enclosed in a box so that the illumination is nearly constant.
Fig. 1. Structure of the proposed FKP-based personal authentication system. The whole system is composed of a data acquisition module and a data processing module.
One difficult problem is how to keep the pose of the finger nearly constant so that the captured FKP images from the same finger are consistent. In our system, the finger bracket is designed for this purpose. Referring to Figure 2, a basal block and a triangular block are used to fix the position of the finger joint. During data acquisition, the user can easily put his/her finger on the basal block with the middle phalanx and the proximal phalanx touching the two slopes of the triangular block. Such a design aims at reducing the spatial position variations of the finger across different capturing sessions.
Fig. 2. (a) The outlook of the developed FKP image acquisition device; (b) the device being used to collect FKP samples.
The triangular block is also used to constrain the angle between the proximal phalanx and the middle phalanx to a certain magnitude so that line features of the finger knuckle surface can be clearly imaged. After the image is captured, it is sent to the data processing module for pre-processing, feature extraction and matching. The size of the acquired FKP images is 768×576 under a resolution of 400 dpi. Figure
3 shows four sample images acquired by the developed device. Two images in the first row are from one finger and images in the second row are from another finger. Examples of images for the same finger were captured at two different collection sessions with an interval of 56 days. We can see that by using the developed system, images from the same finger but collected at different times are very similar to each other. Meanwhile, images from different fingers are very different, which implies that FKP has the potential for personal identification.
Fig. 3. Sample FKP images acquired by the developed system. (a) and (b) are from one finger while (c) and (d) are from another finger. Images from the same finger are taken at two different sessions with an interval of 56 days.
A. Region of interest (ROI) extraction
The FKP images collected from different fingers are very different. On the other hand, for the same finger, images collected at different collection sessions will also vary because of variations in the spatial location of the finger. Therefore, it is necessary and critical to align FKP images by adaptively constructing a local coordinate system for each image. With such a coordinate system, an ROI can be cropped from the original image for reliable feature extraction and matching. This section proposes an algorithm for local coordinate system determination and ROI sub-image extraction. Because the finger is always put flat on the basal block when the FKP image is captured, the bottom boundary of the finger is stable in every image and can be taken as the X-axis of the ROI coordinate system. However, the Y-axis is much more difficult to determine. Intuitively, we want to locate the Y-axis at the center of the phalangeal joint so that most of the useful features in the FKP image are preserved within the ROI. It can be observed that line features on the two sides of the phalangeal joint have different convex directions. Taking this fact as a hint, we propose to code line pixels based on their convex directions and then make use of the convex direction codes to determine the Y-axis. Figure 4 illustrates the main steps of the coordinate system determination and the ROI extraction. In the following, we describe these steps (a sketch of one possible implementation is given after the step list):
Step 1: image down-sampling (to obtain I_D)
Step 2: determine the X-axis of the coordinate system
Step 3: crop a sub-image I_S from I_D
Step 4: Canny edge detection (to obtain I_E)
Step 5: convex direction coding for I_E (to obtain I_CD)
Step 6: determine the Y-axis of the coordinate system
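A minimal sketch of one possible implementation of these six steps is given below, assuming NumPy, SciPy and scikit-image are available. The bottom-boundary fit, the convex-direction surrogate and the crop sizes are illustrative stand-ins rather than the exact procedure used by the acquisition system.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom
from skimage.feature import canny

def extract_roi(img, crop_height=110, roi_half_width=55):
    """Sketch of the ROI extraction steps; parameters are illustrative."""
    # Step 1: Gaussian smoothing followed by down-sampling -> I_D
    i_d = zoom(gaussian_filter(img.astype(float), sigma=2.0), 0.5)

    # Step 2: X-axis = line fitted to the bottom boundary of the finger.
    # Here the boundary is approximated by the lowest bright row per column.
    cols = np.arange(i_d.shape[1])
    bottom = np.array([np.max(np.nonzero(i_d[:, c] > i_d.mean())[0], initial=0)
                       for c in cols])
    y0 = int(round(np.polyfit(cols, bottom, 1)[1]))   # intercept of the fitted line

    # Step 3: crop a sub-image I_S above the X-axis
    i_s = i_d[max(y0 - crop_height, 0):y0, :]

    # Step 4: Canny edge detection -> I_E
    i_e = canny(i_s, sigma=1.5)

    # Step 5: convex direction coding -> I_CD (sign of the local curvature is
    # used as a rough surrogate for the paper's convex-direction code)
    gy, gx = np.gradient(i_s)
    i_cd = np.where(i_e, np.sign(np.gradient(gx, axis=1) + np.gradient(gy, axis=0)), 0)

    # Step 6: Y-axis = x position that balances the accumulated convexity codes
    profile = np.cumsum(i_cd.sum(axis=0))
    x0 = int(np.argmin(np.abs(profile - profile[-1] / 2.0)))

    return i_s[:, max(x0 - roi_half_width, 0):x0 + roi_half_width]

# Example with a synthetic 768 x 576 frame standing in for a captured FKP image
frame = (np.random.rand(576, 768) * 255).astype(np.uint8)
print(extract_roi(frame).shape)
```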
Fig. 4. Illustration of the ROI extraction process. (a) I_D image, obtained by a down-sampling operation after Gaussian smoothing; (b) X-axis of the coordinate system, which is the line Y = Y_0 fitted from the bottom boundary of the finger; (c) I_S image extracted from I_D; (d) I_E image obtained by applying a Canny edge detector to I_S; (e) I_CD image obtained by applying the convex direction coding scheme to I_E; (f) plot of the convex-direction magnitude along x for a typical FKP image; (g) line X = x_0'; and (h) ROI coordinate system, where the rectangle indicates the area of the ROI sub-image that will be extracted.
B. Feature Extraction and Feature Selection
Biometric feature extraction is the process by which key features of the sample are selected or enhanced. Typically, feature extraction relies on a set of algorithms, and the method varies with the type of biometric identification used. In general, feature extraction reduces dimensionality by a (linear or non-linear) projection of a D-dimensional vector onto a d-dimensional vector (d < D), as illustrated in the sketch below.
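As a concrete illustration of such a projection, the following sketch reduces D-dimensional feature vectors to d dimensions with PCA; the dimensions and data are arbitrary example values.

```python
import numpy as np

def pca_project(X, d):
    """Project the rows of X (N x D) onto the top-d principal components (N x d)."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)           # D x D covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :d]              # top-d projection directions (D x d)
    return Xc @ W

# Example: 100 feature vectors of dimension D = 256 reduced to d = 32
features = np.random.rand(100, 256)
print(pca_project(features, d=32).shape)     # (100, 32)
```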
Here are some examples of biometric feature extraction:
- A fingerprint feature extraction program will locate, measure, and encode ridge endings and bifurcations in the print.
- A voice recording may filter out particular frequencies and patterns.
- A digital picture may pull out particular measurements, such as the relative positions of the ears, forehead, cheekbones, and nose.
- Iris prints will encode the mapping of furrows and striations in the iris.
The process of retrieving desired images from a large collection on the basis of features that can be automatically extracted from the images themselves is widely used in computer vision applications. These systems, called CBIR (Content-Based Image Retrieval) systems, have received intensive attention in the image retrieval literature since the area emerged, and consequently a broad range of techniques has been proposed. The algorithms used in these systems are commonly divided into three tasks:
- extraction,
- selection, and
- classification.
The extraction task transforms the rich content of images into various content features. Feature extraction is the process of generating features to be used in the selection and classification tasks. Feature selection reduces the number of features provided to the classification task; the features that are likely to assist in discrimination are selected and used for classification, while features that are not selected are discarded [9]. Of these three tasks, feature extraction is the most critical, because the particular features made available for discrimination directly influence the efficacy of the classification task. The end result of the extraction task is a set of features, commonly called a feature vector, which constitutes a representation of the image. In the last few years, a number of such systems using image-content feature extraction technologies have proved reliable enough for professional applications in industry.
Feature extraction is applied to the finger knuckle print database, which contains images of 165 users with 12 samples each of the left index, right index, left middle, and right middle fingers. Some of the functions applied for feature extraction are detailed below.
The Hanman-Anirban entropy function [11] is:

H = Σ_{k=1}^{n·n} g(k) · h(k) · exp(−(1/3) g(k)³)    (1)

where n is the window size (e.g., 3 × 3, 5 × 5 or 7 × 7), g(k) is the intensity value of the k-th pixel in that window, and h(k) is the number of times that particular intensity occurs in the window.

Here we have taken the window size as 5 × 5; the results, in the form of ROC curves, are given in Figure 5 and Table I.
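A minimal sketch of how such windowed entropy features might be computed is given below. It follows the reading of Eq. (1) shown above (each pixel intensity g(k) weighted by its in-window frequency h(k) and an exponential gain) and normalizes intensities to [0, 1]; both choices are assumptions made for illustration rather than the exact procedure.

```python
import numpy as np

def entropy_features(img, win=5):
    """One entropy value per non-overlapping win x win window, following Eq. (1)
    as read above (the normalization is illustrative)."""
    rows, cols = img.shape
    feats = []
    for r in range(0, rows - win + 1, win):
        for c in range(0, cols - win + 1, win):
            block = img[r:r + win, c:c + win].astype(float) / 255.0
            vals, counts = np.unique(block, return_counts=True)    # intensities and h(k)
            g = block.ravel()                                      # g(k) for every pixel
            h = counts[np.searchsorted(vals, g)]                   # h(k) for every pixel
            feats.append(np.sum(g * h * np.exp(-(g ** 3) / 3.0)))
    return np.asarray(feats)

# Example: feature vector for a synthetic 110 x 220 grayscale FKP ROI
roi = (np.random.rand(110, 220) * 255).astype(np.uint8)
print(entropy_features(roi, win=5).shape)
```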
Fig. 5. ROC curves of (a) left index knuckle, (b) right middle knuckle, (c) right index knuckle and (d) left middle knuckle (GAR plotted against FAR).

TABLE I. PERFORMANCE OF KNUCKLE MODALITIES (HANMAN-ANIRBAN ENTROPY, EQ. (1))
Modality        Recognition Rate (%)
Left index      91
Right middle    92
Right index     91
Left middle     86
The Hanman filter [12] is used next for feature extraction. Its definition, Eqs. (2) and (3), combines the windowed image values I(x, y) with a cosine modulation term cos(2π·) (Eq. (2)) and an exponential, Gaussian-like envelope (Eq. (3)); the filter is evaluated over the parameter grids u = 1, 2, 3; s = 0.4, 0.6, 0.8, 1.0; and v = 1, 2, 3, 4. The results so obtained using the Hanman filter [12] are presented in Table II and Figure 6.

Fig. 6. ROC curves of (a) left index knuckle, (b) left middle knuckle, (c) right middle knuckle and (d) right index knuckle.

TABLE II. PERFORMANCE OF KNUCKLE MODALITIES (HANMAN FILTER, EQ. (2))
Modality        Recognition Rate (%)
Left index      87
Right index     85
Left middle     90.5
Right middle    87

Another variant of the Hanman filter [13] is also used for feature extraction. In this variant, Eq. (4) combines the windowed image values I(x, y) with the cosine modulation cos(2π·) and an additional weighting function, while Eqs. (5)-(7) define the supporting exponential envelope and normalization terms; the variant is evaluated for u = 1, 2, 3 and s = 0.4, 0.6, 0.8, 1.0. The results so obtained using this variant of the Hanman filter are given in Table III and Figure 7.

TABLE III. PERFORMANCE OF KNUCKLE MODALITIES (HANMAN FILTER VARIANT, EQ. (4))
Modality        Recognition Rate (%)
Left index      86
Right index     86
Left middle     83.5
Right middle    87.5
Fig. 7. ROC curves of (a) right index knuckle, (b) right middle knuckle, (c) left middle knuckle and (d) left index knuckle.
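Equations (2)-(7) above describe cosine-modulated filters evaluated over the stated parameter grids. Purely as a rough illustration of how such a filter bank can be applied to an FKP ROI and pooled into a feature vector, the sketch below uses a standard real Gabor kernel, treating u as a frequency index, s as an envelope scale and v as an orientation index; this generic stand-in is not the authors' Hanman filter.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(scale, freq, theta, size=15):
    """Cosine-modulated Gaussian (real Gabor) kernel used as a generic stand-in."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * scale ** 2)) * np.cos(2 * np.pi * freq * xr)

def filter_bank_features(roi):
    feats = []
    for u in (1, 2, 3):                       # frequency index
        for s in (0.4, 0.6, 0.8, 1.0):        # envelope scale
            for v in (1, 2, 3, 4):            # orientation index
                k = gabor_kernel(scale=4 * s, freq=0.1 * u, theta=v * np.pi / 4)
                resp = fftconvolve(roi, k, mode='same')
                feats.append(np.abs(resp).mean())   # pool each response to one value
    return np.asarray(feats)

roi = np.random.rand(110, 220)
print(filter_bank_features(roi).shape)        # (48,) = 3 x 4 x 4 responses
```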
C. Feature Extraction based on the SURF algorithm
Speeded-Up Robust Features (SURF) [12] is an improvement on the Scale-Invariant Feature Transform (SIFT) [12]. The SIFT algorithm is a widely adopted object recognition technique, which is also very effective for face and fingerprint recognition. Compared to SIFT, SURF uses a different key-point detector and feature descriptor. The key-point detector is based on an approximation of the Hessian matrix and uses integral images [13] to reduce computation time, so it can be called the Fast-Hessian detector. The descriptor, on the other hand, describes a distribution of Haar-wavelet responses within the interest-point neighborhood; again, integral images are used for speed. SURF uses a new indexing step based on the sign of the Laplacian, which increases not only the matching speed but also the robustness of the descriptor. The entry of an integral image I(x) at a location x = (x, y) represents the sum of all pixels of the input image I within the rectangular region formed by the point x and the origin. With I(x) calculated, it takes only four additions to compute the sum of the intensities over any upright rectangular area, independent of its size. The result based on this algorithm is a recognition rate of around 20% on the left middle knuckle modality.
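The integral-image trick described above (the sum over any upright rectangle obtained from four lookups) can be sketched as follows; the helper names are illustrative.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img over the rectangle spanned by the origin and (x, y)."""
    return img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of the original image over rows top..bottom and columns left..right
    (inclusive), using only four lookups into the integral image."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
ii = integral_image(img)
assert rect_sum(ii, 10, 10, 20, 30) == img[10:21, 10:31].sum()
```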
D. Feature Selection
Feature selection methods choose features from the original set based on some criterion; information gain, correlation, and mutual information are criteria commonly used to filter out unimportant or redundant features. Embedded or wrapper methods, as they are called, can use specialized classifiers to achieve feature selection and classify the dataset at the same time. Feature selection reduces dimensionality by selecting a subset of the original variables, for example through forward or backward feature selection. In machine learning and statistics, feature selection, also known as variable selection, feature reduction, attribute selection or variable subset selection, is the technique of selecting a subset of relevant features for building robust learning models. When applied in the biology domain, the technique is also called discriminative gene selection, which detects influential genes based on DNA microarray experiments. By removing most of the irrelevant and redundant features from the data, feature selection helps improve the performance of learning models by:
- alleviating the effect of the curse of dimensionality,
- enhancing generalization capability,
- speeding up the learning process, and
- improving model interpretability.
Feature selection also helps people acquire a better understanding of their data by indicating which features are important and how they are related to each other.
Feature evaluation is critical when designing a biometric-based recognition system under the framework of supervised learning. Existing research in biometrics has made little attempt to evaluate the usefulness of the features that have been proposed in the literature. Feature subset selection helps to identify and remove much of the irrelevant and redundant features. The small dimension of the feature set reduces the hypothesis space, which is critical for the success of online implementation in personal recognition. Furthermore, researchers have shown that irrelevant and redundant training features adversely affect classifier performance. These observations motivate us to perform experiments to evaluate the advantages of feature subset selection and the combination of some common biometric modalities.
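As a concrete illustration of filter-style selection using one of the criteria mentioned above, the sketch below ranks features by their mutual information with the class labels and keeps the top k. It relies on scikit-learn and synthetic data purely for illustration and is not tied to the feature sets used in this paper.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic stand-in for a biometric feature matrix: 200 samples, 40 features
X = np.random.rand(200, 40)
y = np.random.randint(0, 10, 200)            # class labels (e.g. finger identities)

selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)      # keep the 10 most informative features

print(X_reduced.shape)                        # (200, 10)
print(np.flatnonzero(selector.get_support())) # indices of the selected features
```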
E. Feature Evaluation and Selection
Feature selection is used to identify the useful features and remove redundant information. Using a small feature vector reduces computational complexity, which is critical for online personal recognition, and selecting effective features may also increase accuracy. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) have traditionally been used to reduce the large dimension of feature vectors; however, PCA and LDA transform the feature vectors to a reduced dimension rather than actually selecting a subset. Several feature subset selection algorithms have been proposed in the literature. A feature subset evaluation and selection algorithm consists of three basic modules, as shown in Figure 1: feature subset generation, subset evaluation, and the stopping criterion. Let N be the total number of potential biometric features in the biometric training dataset. An exhaustive search through the 2^N candidate feature subsets is infeasible even for moderate N, so various search strategies (e.g., starting point, search direction) have been studied in the literature. The goodness of each candidate feature subset is evaluated with a feature evaluation criterion; the goodness index of the current feature subset is compared against that of the previous best subset, and the best subset is replaced if the current index is better. There are two commonly used feature evaluation criteria: wrapper-based and filter-based. The wrapper, one of the most commonly used approaches, evaluates and selects feature subsets by repeated use of a particular classification algorithm. However, it is highly time consuming and prohibitive when the dimension of the feature vectors is large (such as those from palm prints evaluated in this work). We therefore employed a filter-based algorithm for feature evaluation. The feature selection process usually stops with a suitable stopping criterion, e.g., a predefined number of features or iterations, a goodness-index threshold, or when the addition or deletion of features no longer increases the goodness index. In this work we used the Correlation-based Feature Selection (CFS) algorithm, which has been shown [10] to be quite effective in feature subset selection.
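A sketch of a filter-based, greedy forward-selection loop in the spirit of CFS is given below. The merit function follows the commonly cited CFS form (average feature-class correlation divided by a term involving the average feature-feature correlation); the exact evaluation used in [10] may differ, and the data here are synthetic.

```python
import numpy as np

def cfs_merit(corr_fc, corr_ff, subset):
    """CFS merit: k*rcf / sqrt(k + k*(k-1)*rff) for the candidate subset."""
    k = len(subset)
    rcf = np.mean([corr_fc[i] for i in subset])
    rff = 0.0
    if k > 1:
        rff = np.mean([corr_ff[i, j] for i in subset for j in subset if i < j])
    return k * rcf / np.sqrt(k + k * (k - 1) * rff)

def forward_select(X, y, max_feats=10):
    """Greedy forward selection using the CFS merit as the goodness index."""
    n_feats = X.shape[1]
    corr_fc = np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(n_feats)])
    corr_ff = np.abs(np.corrcoef(X, rowvar=False))
    selected, best = [], -np.inf
    while len(selected) < max_feats:
        candidates = [f for f in range(n_feats) if f not in selected]
        merits = [cfs_merit(corr_fc, corr_ff, selected + [f]) for f in candidates]
        if max(merits) <= best:                 # stopping criterion: no improvement
            break
        best = max(merits)
        selected.append(candidates[int(np.argmax(merits))])
    return selected

X, y = np.random.rand(200, 40), np.random.randint(0, 2, 200)
print(forward_select(X, y))
```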
III. EXPERIMENTAL RESULTS AND CONCLUSION
In the experiments, all classes of FKPs were involved. Each image in the probe set was matched against all the images in the gallery set. Therefore, there were 660 (165 × 4) classes and 3960 (660 × 6) images in the gallery set and in the probe set each. The numbers of genuine matchings and imposter matchings were 23,760 and 7,828,920, respectively. By adjusting the matching threshold, a receiver operating characteristic (ROC) curve, which is a plot of the genuine accept rate (GAR) against the false accept rate (FAR) for all possible thresholds, can be created. The ROC curve reflects the overall performance of a biometric system. Table IV lists the feature extraction function used, the modality, and the corresponding performance; this comparison assists feature selection, reduces the work of the classifier, and thereby increases its efficiency and performance.
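The ROC curve described here can be computed directly from the genuine and impostor matching scores; the sketch below assumes that higher scores indicate better matches and uses synthetic, down-sampled scores purely for illustration.

```python
import numpy as np

def roc_points(genuine, impostor, n_thresholds=200):
    """Return (FAR, GAR) pairs over a grid of thresholds on the pooled scores."""
    thresholds = np.quantile(np.concatenate([genuine, impostor]),
                             np.linspace(0.0, 1.0, n_thresholds))
    far = np.array([(impostor >= t).mean() for t in thresholds])   # false accept rate
    gar = np.array([(genuine >= t).mean() for t in thresholds])    # genuine accept rate
    return far, gar

# Synthetic scores for illustration (far fewer impostor scores than the 7,828,920 reported)
rng = np.random.default_rng(0)
genuine_scores = rng.normal(0.8, 0.1, 23_760)
impostor_scores = rng.normal(0.5, 0.1, 100_000)
far, gar = roc_points(genuine_scores, impostor_scores)
idx = int(np.argmin(np.abs(far - 1e-3)))        # operating point near FAR = 0.1%
print(f"GAR = {gar[idx]:.3f} at FAR = {far[idx]:.5f}")
```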
TABLE IV. RECOGNITION RATE OF VARIOUS FUNCTIONS APPLIED FOR FEATURE EXTRACTION
Function for feature extraction                Modality        Recognition rate
Hanman-Anirban entropy, Eq. (1)                Left index      91%
                                               Right index     92%
                                               Left middle     91%
                                               Right middle    86%
Hanman filter, Eq. (2)                         Left index      87%
                                               Right index     85%
                                               Left middle     90.5%
                                               Right middle    87%
Hanman filter variant, Eq. (4)                 Left index      86%
                                               Right index     86%
                                               Left middle     83.5%
                                               Right middle    87.5%
Entropy-based function (variant of Eq. (1))    Left index      56%
                                               Right index     64%
                                               Left middle     58%
                                               Right middle    66%
IV. CONCLUSION
FKP recognition provides a new way of authenticating identity. It has the advantages of easy capture, short response time, small feature size, low hardware cost, and no emotional coupling with criminal records. Experiments were carried out to measure the performance of the proposed method. They show that the proposed FKP recognition method achieves good performance in terms of accuracy and efficiency on the testing database.
REFERENCES
- Ajay Kumar and Ch. Ravikanth, "Personal Authentication using Finger Knuckle Surface," IEEE Transactions on Information Forensics and Security, vol. 4, no. 1, pp. 98-110, March 2009.
- Anil K. Jain, Arun Ross, and Salil Prabhakar, "An Introduction to Biometric Recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, January 2004.
- Ajay Kumar and David Zhang, "Biometric Recognition using Feature Selection and Combination," Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, and Department of Computing, The Hong Kong Polytechnic University, Hung Hom.
- Ch. Ravikanth and Ajay Kumar, "Biometric Authentication Using Finger-Back Surface," Biometrics Research Laboratory, Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, India.
- Xiang Sean Zhou, Ira Cohen, Qi Tian, and Thomas S. Huang, "Feature Extraction and Selection for Image Retrieval," Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801.
- Ajay Kumar and Yingbo Zhou, "Human Identification Using KnuckleCodes," Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong.
- Michał Choraś and Rafał Kozik, "Knuckle Biometrics Based on Texture Features," Image Processing Group, Institute of Telecommunications, UT&LS Bydgoszcz.
- Lin Zhang, Lei Zhang, David Zhang, and Hailong Zhu, "Online finger-knuckle-print verification for personal authentication," Biometrics Research Center, Department of Computing, The Hong Kong Polytechnic University, Hong Kong.
- Miguel A. Ferrer, Carlos M. Travieso, and Jesus B. Alonso, "Using Hand Knuckle Texture for Biometric Identification," Universidad de Las Palmas de Gran Canaria.
- Ajay Kumar and Yingbo Zhou, "Personal Identification using Finger Knuckle Orientation Features," Electronics Letters, vol. 45, no. 20, September 2009.
- M. Hanmandlu and Anirban Das, "Content-based Image Retrieval by Information Theoretic Measure," Defence Science Journal, vol. 61, no. 5, pp. 415-430, 2011.
- M. Hanmandlu and F. Sayeed, "Information sets and Information processing with an application to face recognition," communicated to IEEE Transactions on Systems, Man, and Cybernetics, Part B.
- Zhu Le-qing, "Finger knuckle print recognition based on SURF algorithm," 2011 Eighth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), 2011.
- Vinayak Ashok Bharadi, "Texture Feature Extraction for Biometric Authentication using Partitioned Complex Planes in Transform Domain," IJACSA Special Issue on Selected Papers from the International Conference & Workshop on Emerging Trends in Technology, 2012.
- Lin Zhang, Lei Zhang, David Zhang, and Hailong Zhu, "Online finger-knuckle-print verification for personal authentication," Pattern Recognition, vol. 43, no. 7, pp. 2560-2571, 2010.
- Lin Zhang, Lei Zhang, David Zhang, and Hailong Zhu, "Ensemble of local and global information for finger-knuckle-print recognition," Pattern Recognition, vol. 44, no. 9, pp. 1990-1998, 2011.