Multilevel Security Using Score-Level Fusion of Face and Ear-Based Biometric Modalities

DOI : 10.17577/IJERTV3IS051076


Mrs. Pranali A. Patil

Department of E&TC

Rajarshi Shahu College of Engineering, Tathwade University of Pune, India

Prof. Dr. D. S. Bormane

Department of E&TC

Rajarshi Shahu College of Engineering, Tathwade University of Pune, India

Abstract: Multimodal biometrics has emerged as one of the most effective approaches to personal verification and identification in recent years, and is widely regarded as a strong safeguard against unauthorized access. Multimodal systems are preferred because unimodal biometric systems suffer from drawbacks such as unacceptable error rates, noisy data and intra-class variations; a reliable recognition system therefore requires a robust multimodal design. In this paper, a new multimodal biometric system is proposed for recognition, using the ear and the face as the two biometric traits. The features of the ear image are extracted by a simple geometric method based on the max line of the ear image. For face feature extraction, the commonly used LBP technique has drawbacks such as high dimensionality, sensitivity to noise and a high error rate. To overcome these drawbacks, a method called ULBP (Uniform Local Binary Pattern) is proposed; it offers dimensionality reduction, a lower error rate and less noisy data, which increases the accuracy of the system. Simple product fusion is then used to combine the matching scores of the ear and the face image of an individual, and a K-NN classifier takes the final decision on the fused score, determining whether the user is genuine or an imposter. The application of ULBP improves the recognition rate and the accuracy of the system.

Keywords: Multimodal biometrics, recognition, ULBP, K-NN classifier, fusion

  1. INTRODUCTION

Biometric identification is an effective method for human identification and has received much attention in recent years [2]. It should provide reliable personal recognition schemes to either confirm or determine the identity of an individual [7]. Moreover, biometric systems are advantageous because they do not require a person to carry cards or remember information, unlike conventional authentication systems based on smart cards or passwords [8]. Biometrics also avoids the various drawbacks of personal identification procedures such as entering a Personal Identification Number (PIN), typing logins and passwords, and displaying identification cards; biometric identification is a natural method that can easily deal with these problems [3]. In real-world applications, various unimodal biometric systems are used that depend on a single biometric marker for personal recognition. However, unimodal biometric recognition systems are not very efficient, accurate or robust, owing to difficulties caused by external factors [4]. They often confront a variety of problems such as noisy data, intra-class variations, restricted degrees of freedom, non-universality, spoof attacks and unacceptable error rates [5].

The problems caused by unimodal biometrics can be reduced by a multimodal biometric authentication system, which is more reliable and accurate owing to the presence of multiple pieces of biometric evidence [4]. The central idea of a multimodal biometric system is that it integrates multiple sources of information obtained from different biometric cues [9], [10]. Many data sources are possible for human identification systems, but physiological biometrics has many advantages over methods based on human behaviour. The face and the ear, captured through a camera, are usually the most interesting human anatomical parts for passive, physiological biometric systems [6].

The use of the ear in human identification has several advantages: it is stable, and its features are fixed and unchangeable [7]. The face is likewise a passive identification modality, but it suffers from drawbacks such as illumination and pose changes in unimodal recognition systems [13]. However, incorporating multiple biometric markers adds complexity to the design of a biometric system; for instance, a technique known as data fusion must be employed to integrate the multiple pieces of evidence and infer identity [11]. Many studies and algorithms have been proposed for multimodal biometric fusion, which can be performed at three different levels, namely the feature level, the matching score level and the decision level [8]. The LBP method used for feature extraction from the face has various shortcomings, which motivates a better feature extraction method. Hence, we present another technique, ULBP (Uniform Local Binary Pattern), in this paper, applied to face image feature extraction in a multimodal biometric system. The application of ULBP can improve the overall accuracy of the system for better recognition, and the dimension of the extracted features can be reduced. The histogram of the extracted ULBP pattern is plotted to serve as a useful tool for thresholding. The matching score of the extracted ULBP features is fused with the matching score of the ear image features, and the user is finally identified based on the fused matching score of both the ear and the face image.

The rest of the paper is organized as follows: a brief review of research related to the proposed technique is presented in section 2. The proposed multilevel security using score-level fusion of the face- and ear-based biometric modalities is presented in section 3. Detailed experimental results and discussion are given in section 4, and the conclusion is summed up in section 5.

  2. LITERATURE REVIEW

This section reviews several techniques for multimodal biometrics using the face and the ear images.

Jitendra B. Jawale and Anjali S. Bhalchandra [2] have designed an ear-based multiple geometrical feature extraction method, whose main purpose is to identify a person using ear biometrics. The human ear is a rich source of data for applications such as person identification, and ear biometrics is a good solution to the increasing need for security in public places, because ears are visible and can be captured easily, even without the knowledge of the examined persons. These advantages make the ear well suited to biometric identification.

To satisfy the need for a better feature extraction technique, Dakshina Ranjan Kisku et al. [12] have proposed a Gaussian Mixture Model (GMM) for face and ear biometrics. The model uses Gabor wavelet filters to extract facial and ear features from spatially enhanced face and ear images. The Gaussian mixture model is applied to the Gabor face and ear responses to create measurement vectors of discrete random variables, and the density parameters of the GMM are estimated with the expectation-maximization algorithm. The reduced feature sets of the face and the ear are fused through Dempster-Shafer decision theory. The proposed scheme was validated on two multimodal databases, the IIT Kanpur database and virtual databases of face and ear images, and was shown to deliver better accuracy and significant improvements over existing techniques.

Fusion is an important process in multimodal biometrics. Ning Wang et al. [14] have proposed a complex fusion method at both the pixel level and the feature level, designed to overcome various problems that occur in multimodal biometric fusion. The method fuses visible and thermal face imagery based on complex vectors at the pixel level, and the theoretical derivation of pixel-level complex fusion is extended to 2D classification methods such as (2D)^2 PCA, (2D)^2 LDA and (2D)^2 FPCA. These methods evaluate an accurate covariance matrix that also reflects the differences inherited from the separate sensors. The methods were evaluated on the multimodal database NVIE and shown to be more efficient in identification and verification.

To identify a reliable multimodal biometric system, Shekhar Karanwal [15] has compared results from two-level and three-level biometrics, based on a one-level 2D discrete wavelet transform for decomposition and image fusion, and SIFT for feature extraction. The resolution, image size and distance ratio were set to fixed limits. Over a total of 30 iterations, the two- and three-trait systems produced accuracies of 96.6% and 93.3% respectively. The experimental results also demonstrate that the system can reconstruct the original image from the fused images, which makes it more reliable.

Because facial expression is difficult for a computer to observe, Priya Metri et al. [16] have designed a method for recognizing emotions from the face, the hands and body posture, analysing how a computer can be made more aware of the user's emotional expressions. Their multimodal emotion recognition system consists of two models, one for facial expression recognition and the other for hand gesture recognition, whose results are combined by a third classifier. The experimental results show that the multimodal system provides better recognition results.

For effective ear biometric recognition, Samuel Adebayo Daramola and Oladejo Daniel Oluwaninyo [17] have designed an automatic ear recognition system based on an energy-edge density feature and a Back-Propagation Neural Network (BPNN). First, the input ear image is decomposed into four sub-bands using the Haar wavelet transform. Next, a fused feature is extracted from image blocks of each detailed sub-band, and this fused feature is used as input to the neural network for ear image classification. The method was evaluated on ear images collected from 350 people, and the experimental results show that it is very effective for recognition compared with previous methods.

  3. PROPOSED APPROACH

The aim of multi-biometrics is to improve recognition quality over an individual method by combining the results of multiple features, sensors or algorithms. In multimodal biometrics, choosing the right modalities for recognizing a person is a challenging task. Here, two biometric modalities, namely the ear and the face, are chosen to design and develop a multimodal biometric recognition technique using score-level fusion. The proposed multimodal biometric recognition proceeds in four important steps, as shown in Fig. 1: preprocessing, feature extraction from the ear image, matching with the first biometric, and matching with the second biometric using score-level fusion. First, preprocessing is performed using median filtering, and the geometric features of the ear are extracted. Then, distance matching based on the Euclidean distance is carried out to find the feature matching score and to decide whether the input is a genuine user or not. If the input is not accepted as genuine, the uniform local binary pattern (ULBP) features of the face are extracted to find a second matching score. Once the matching scores are obtained, the scores are fused using the simple product rule, and recognition is achieved by a K-NN classifier on the fused score. The implementation is done in MATLAB, and the performance of the algorithm is evaluated.
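As an illustration of this flow, a minimal Python/NumPy sketch is given below. The paper's implementation is in MATLAB; the stand-in feature extractors, gallery structures and thresholds here are illustrative assumptions, not the authors' code, and fuller sketches of each step follow in the corresponding subsections.

```python
import numpy as np

# Illustrative stand-ins for the extractors of Sections 3.1-3.3.
def ear_features(img):
    return img.ravel()[:10].astype(float)               # placeholder geometric features

def face_histogram(img):
    return np.histogram(img, bins=10)[0].astype(float)  # placeholder ULBP histogram

def recognize(ear_img, face_img, ear_gallery, face_gallery, t_ear, t_fused):
    """Two-level decision following Fig. 1: ear matching first,
    face matching and product-rule fusion only if the ear fails."""
    e = ear_features(ear_img)
    ear_score = min(np.linalg.norm(e - g) for g in ear_gallery)
    if ear_score < t_ear:                                # first-level decision on the ear alone
        return "genuine"
    f = face_histogram(face_img)
    face_score = min(np.linalg.norm(f - g) for g in face_gallery)
    fused = face_score * ear_score                       # simple product rule, eq. (11)
    return "genuine" if fused < t_fused else "imposter"  # the paper uses K-NN here
```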

    1. Preprocessing

In the preprocessing stage of our multimodal biometric system, both the acquired ear and face images are preprocessed. Both the face and the ear images are obtained under the same lighting conditions, without illumination variation. First, the ear image is preprocessed. The side view of the acquired face may be tilted, and the ear may be occluded by hair; any hair covering the ear is therefore removed before acquisition. The image is always taken from the right side of the face, and the ear region at the side of the acquired face image is cropped manually before preprocessing. The cropped colour image is converted to grayscale for better feature extraction, after which edge detection, performed here with the Canny edge detector, forms an important step towards feature extraction. Finally, the preprocessed ear image is given as input for feature extraction.
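A possible realization of this preprocessing chain with OpenCV is sketched below; the median kernel size, Canny thresholds and dilation kernel are illustrative choices, not values specified in the paper.

```python
import cv2
import numpy as np

def preprocess_ear(path):
    """Section 3.1: assuming the right-ear region has already been
    cropped manually, convert to grayscale, median-filter, and run
    Canny edge detection (edges are dilated later, Section 3.2)."""
    bgr = cv2.imread(path)                                # cropped colour ear image
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)          # grayscale conversion
    gray = cv2.medianBlur(gray, 5)                        # median filtering
    edges = cv2.Canny(gray, 50, 150)                      # Canny edge map
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # join broken edges
    return gray, edges
```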

    2. Feature Extraction from Ear Image

Feature extraction from the ear image is possible only after the acquired image has been preprocessed. The ear image has unique aspects, such as its shape and its edges, which help in extracting good features. Ear shapes may be round, oval, triangular or rectangular; the other factors are the edges found inside the ear. The shape and edge pattern of the ear differ for every human, so the first thing considered in feature extraction is the shape of the ear, followed by edge detection, through which the important and relevant information is extracted from the ear image. The technique used here for edge detection is the Canny edge detector, which gives good results under varying illumination.

      Fig.1: Proposed Multimodal System

After the edge detection process, broken edges are joined using morphological dilation.

      Feature extraction from outer edges using Max line detection

After edge detection, the features of the outer edges and of the remaining edges have to be found. Two feature vectors, V1 and V2, are extracted: V1 is extracted from the outer shape, or outer edges, of the ear image, and V2 is derived from the other (inner) edges. The shape of the ear is captured by the first feature vector V1. The outer-edge feature angles are o_1, o_2, \ldots, o_n, as shown in Fig. 2, where n is the number of angles measured from the centre of the max line to points on the outer edges. Max-line detection is an important step in feature extraction because the n outer-edge feature angles o_n are measured about the centre C of the max line.

To detect the max line M, the distance from each pixel in the boundary vector to every other boundary pixel of the ear is calculated. Among these distances, the two largest are taken, one towards the bottom and one towards the top of the ear. The coordinates of the two most distant points are (x_1, y_1) and (x_2, y_2), and connecting them forms the max line with centre C. The features of the ear image are then found by selecting n points P_n over the outer shape, or edges, of the ear; the angle of each point from the max line is recorded as the first feature vector V1.

Feature vector V1 over the outer boundary edges is represented as:

V_1 = [o_1, o_2, \ldots, o_n]    (1)

      Fig. 2 Image showing the angle
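A NumPy sketch of the max-line construction and the outer-angle feature V1 of equation (1) follows; it assumes the ear boundary is already available as an ordered array of (x, y) pixel coordinates, and the brute-force pairwise search and the choice n = 10 are illustrative.

```python
import numpy as np

def max_line(boundary):
    """Longest chord (max line) between ear boundary pixels, Section 3.2."""
    d = np.linalg.norm(boundary[:, None, :] - boundary[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmax(d), d.shape)   # two most distant points
    p1, p2 = boundary[i].astype(float), boundary[j].astype(float)
    return p1, p2, (p1 + p2) / 2.0                   # endpoints and centre C

def outer_angle_features(boundary, n=10):
    """Feature vector V1, eq. (1): angles of n outer-edge points
    measured at the centre C relative to the max line."""
    p1, p2, c = max_line(boundary)
    axis = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])  # orientation of the max line
    idx = np.linspace(0, len(boundary) - 1, n).astype(int)
    pts = boundary[idx].astype(float)                # n sample points on the outer edge
    ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]) - axis
    return np.degrees(ang) % 360                     # V1 = [o1, ..., on]
```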

      Feature extraction from inner edges V2

The second feature vector is detected by drawing normal lines perpendicular to the max line that intersect the inner edges of the ear. The drawn normals divide the max line into (n+1) equal parts, where n is positive, and are denoted x_n, where n is the number of lines drawn perpendicular to the max line. The angle at each point where a normal intersects an inner edge gives the second feature vector V2. The inner-edge intersection points are denoted q_1, q_2, \ldots, q_n, where n is the number of intersection points over the inner edges, and the feature angles extracted from the inner edges are i_1, i_2, i_3, \ldots, i_n, where n is the number of inner-edge feature angles. The inner-edge feature vector V2 is represented as:

V_2 = [i_1, i_2, \ldots, i_n]    (2)

      The normal line intersection points and the feature angles are shown in Fig. 3 (a) and Fig. 3(b) respectively.
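The inner-edge feature of equation (2) can be sketched in the same style; the nearest-pixel approximation of the normal-line intersection and the defaults n = 5 and tol = 1.5 are assumptions for illustration.

```python
import numpy as np

def inner_angle_features(edge_pts, p1, p2, n=5, tol=1.5):
    """Feature vector V2, eq. (2): angles where normals to the max
    line (p1-p2) cross the inner edges, using the edge pixels that
    lie (within tol) on each normal line."""
    L = np.linalg.norm(p2 - p1)
    u = (p2 - p1) / L                                 # unit vector along the max line
    normal = np.array([-u[1], u[0]])                  # unit normal
    axis = np.arctan2(u[1], u[0])
    angles = []
    for k in range(1, n + 1):
        d = p1 + u * (k * L / (n + 1))                # k-th division point on the max line
        off = np.abs((edge_pts - d) @ u)              # distance from the normal through d
        cand = edge_pts[off < tol]                    # edge pixels lying on that normal
        if len(cand):
            q = cand[np.argmax(np.abs((cand - d) @ normal))]  # intersection point q_k
            angles.append(np.degrees(np.arctan2(q[1] - d[1], q[0] - d[0]) - axis) % 360)
    return np.array(angles)                           # V2 = [i1, ..., in]
```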

Fig. 3 (a) Normal line intersection points (b) Normal line intersection point angle features

3. Feature Extraction from Face Image

  Among the various feature extraction techniques available, LBP is the most commonly used technique in face biometrics. However, the output feature extracted from the face image using LBP consists of both uniform and non-uniform patterns, which increases the feature length. In particular, the non-uniform patterns in LBP have undesirable characteristics such as high dimensionality, partial correlation and unwanted noise, which produce an irregular distribution in texture classification. The increased feature length reduces the accuracy of the output result. To overcome these drawbacks of LBP, we propose another method, ULBP (Uniform Local Binary Pattern), for feature extraction from the face image.

  In ULBP, the input grayscale image is of size n × m. For efficient feature extraction using ULBP, the input grayscale image is segmented into pixel blocks. In basic LBP, a centre pixel C_p is selected and compared with the 8 neighbouring pixel values n_p in a 3×3 window. If the value of the centre pixel C_p is greater than a neighbouring pixel value, that neighbour is assigned binary 0; otherwise it is assigned binary 1. After one circular comparison, an 8-bit binary output such as 00000100 is obtained. Thus LBP_{P,R} produces 2^P binary patterns, i.e., for P = 8 it produces a total of 256 patterns. The 256 binary patterns are obtained through rotation over a circle of radius R; however, the binary pattern of a pixel rotated along the perimeter of the circle will differ.

  In order to obtain a uniform local binary pattern, the effect of the change in the binary pattern due to rotation has to be removed. The rotation-invariant (RI) pattern is obtained by

  LBP_{P,R}^{ri} = \min\{ ROR(LBP_{P,R}, i) \mid i = 0, 1, \ldots, P-1 \}    (3)

  The above equation computes the RI binary pattern from the image, where ROR(x, i) circularly right-shifts the P-bit number x by i bits. It yields the most important binary patterns, called uniform patterns, which offer better detail about the pixels. Uniform patterns are binary patterns with at most 2 bitwise transitions, either from 0 to 1 or from 1 to 0; if the number of transitions in a binary pattern is greater than 2, it is called a non-uniform pattern. The ULBP is quantified by

  LBP_{P,R}^{riu2} = \begin{cases} \sum_{p=0}^{P-1} s(n_p - C_p), & \text{if } U(LBP_{P,R}) \le 2 \\ P + 1, & \text{otherwise} \end{cases}    (4)

  Here, C_p is the central gray-level pixel of the image and n_p a neighbouring pixel, where

  s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}    (5)

  In equation (4) there are only P + 1 uniform pattern labels and a single label shared by all non-uniform patterns. The extracted uniform local binary pattern is represented by a histogram.

      Histogram in ULBP

The histogram is computed over the patterns obtained through the above process. The histogram y[h] of the uniform local binary pattern is given by

y[h] = \sum_{i=1}^{N} \sum_{j=1}^{M} f(t(i, j), h), \quad h = 0, 1, \ldots, P(P-1) + P    (6)

f(x, h) = \begin{cases} 1, & x = h \\ 0, & \text{otherwise} \end{cases}    (7)

Here, t(i, j) is the decimal value of the ULBP operator at each pixel position I(i, j) of a face image with N × M pixels, and P is the number of samples.
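The riu2 operator of equations (3)-(5) and the histogram of equations (6)-(7) are available in scikit-image; a minimal sketch is given below, where method='uniform' implements the rotation-invariant uniform mapping and P = 8, R = 1 are illustrative parameters.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def ulbp_histogram(gray, P=8, R=1):
    """Uniform LBP codes and their histogram, Section 3.3.
    method='uniform' yields P + 1 uniform labels (0..P) plus one
    label (P + 1) shared by all non-uniform patterns."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))  # eq. (6)
    return hist / hist.sum()                                     # normalised feature

# Example:
# gray = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
# feat = ulbp_histogram(gray)   # 10-bin histogram for P = 8
```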

    4. Multilevel Matching in Multimodal Biometrics

In multimodal biometrics, there are two levels of verification, using the ear and the face image. They are as follows:

      1. Ear image verification

      2. Face image verification

1. Ear image verification

  In ear image verification, there are two verification methods, namely single-level verification and multilevel verification. In single-level verification, the outer-edge features and the inner-edge features have to be compared at the same time. Because more features are compared at once, the computation time required for single-level verification is high and the accuracy is also found to be low. To overcome these drawbacks, we use the multilevel verification method to compare the feature vectors of the ear image. It has two phases of verification. In the first phase, the outer-edge features of the input ear image are compared with the stored outer-edge features alone; the images matched on the outer edges, F_M, are then compared on the inner-edge features in the second phase.

  The matched features are obtained by calculating the difference between two feature vectors. Let the feature vectors of the two images be V_1 = [o_1, o_2, \ldots, o_n] and W_1 = [\omega_1, \omega_2, \ldots, \omega_n]. The distance S between these two feature vectors is computed componentwise as

  S_i = |o_i - \omega_i|, \quad i = 1, 2, \ldots, n    (8)

  The outer-edge features of two images are said to match only if their corresponding angles are equal within a preset threshold. A matched image through the outer-edge features is thus obtained in the first stage of classification, and its match count F_1 is given by

  F_1 = \sum_{i=1}^{n} y_i    (9)

  where y_i = 1 if |o_i - \omega_i| is less than the threshold limit, and y_i = 0 otherwise. In the next level, the inner-edge features of the image matched in the first stage are compared with the stored inner-edge features alone. Among these features, the features that are equal in angle are taken as the matched features of the ear image, denoted by the vector E_n. Finally, the matching score E_n of the ear image features is compared with the threshold level: if the threshold level is greater than the feature value E_n, the user is considered genuine; otherwise, the face image is requested.

2. Face Image Verification

  In face image verification, let the histogram features obtained from the input face image be F_Q. To find the matching score between the input face query image and a training image T_n in the database, the distance between them is computed. For this purpose the Euclidean distance is used; the Euclidean distance between the input histogram feature vector and a trained feature vector is

  F_n = \sqrt{ \sum_{n=1}^{N} (FQ_n - FT_n)^2 }    (10)

  The final feature-vector matching score of an image histogram is defined as F_n = \{F_1, F_2, \ldots, F_N\}, where N represents the total number of image blocks and n, varying from 1 to N, indexes the n-th block of the image histogram. In ULBP, moreover, the dimensionality of the feature vector is reduced from 2^P to P(P-1) + 2 at the end.

5. Score Level Fusion and Recognition

In score-level fusion, a single fused score has to be found from the face and ear matching scores. To find this single fused matching score, a simple product rule is utilized. The simple product rule for fusion is represented as

F_s = \sum_{n=1}^{m} F_n \cdot E_n    (11)

where F_n and E_n are the two normalized matching scores of the face and the ear image, and F_s indicates the fusion score. The diagram for multimodal biometric fusion is shown in Fig. 4.

    Fig.4 Multimodal Biometric fusion
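A compact NumPy sketch of the two-phase ear matching of equations (8)-(9) and the product-rule fusion of equation (11) is given below; the angle tolerance stands in for the paper's unspecified threshold, and equal-length feature vectors are assumed.

```python
import numpy as np

def ear_match(v1, w1, v2, w2, angle_tol=5.0):
    """Two-phase ear matching: outer-edge angles first (eqs. 8-9),
    then the inner-edge angles of the surviving candidate."""
    y = np.abs(np.asarray(v1) - np.asarray(w1)) < angle_tol       # y_i of eq. (9)
    f1 = int(y.sum())                                             # outer-edge match count F1
    en = float(np.abs(np.asarray(v2) - np.asarray(w2)).sum())     # inner-edge score En
    return f1, en

def fuse(face_score, ear_score):
    """Simple product-rule fusion of normalised scores, eq. (11)."""
    return face_score * ear_score
```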

The final stage of the multimodal biometric system is classification. The fused score of the face and the ear image is given as input to the classifier. The K-NN classifier is used for recognition; it is one of the best classifiers, with low computation time.

    K-NN classifier

In this stage, the computed fusion score F_S of both the face and the ear of a single person is given as input. This input fusion score F_S of the test image is compared with all the fusion scores F_T in the training data. For this comparison, the Euclidean distance between the input data and the training data is computed:

Y = \sqrt{ \sum_{i=1}^{n} (FS_i - FT_i)^2 }    (12)

Based on the computed Euclidean distance values, the fusion score with the minimum distance is considered the nearest neighbour of the input data.
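A minimal nearest-neighbour decision over fused scores, in the spirit of equation (12), might look as follows; the labelled training lists and the default k = 1 are assumptions.

```python
import numpy as np

def knn_classify(fs, train_scores, train_labels, k=1):
    """Assign the test fusion score FS the majority label of its
    k nearest training fusion scores FT, per eq. (12)."""
    d = np.abs(np.asarray(train_scores, dtype=float) - fs)  # Euclidean distance in 1-D
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(np.asarray(train_labels)[nearest], return_counts=True)
    return labels[np.argmax(counts)]                        # e.g. "genuine" or "imposter"
```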

  4. RESULTS AND DISCUSSION

    In this section, we analyze and discuss the proposed technique. The experimental setup and the evaluation metrics are discussed in section 4.1. The dataset description is given in section 4.2. The experimental result is given in section 4.3. The performance evaluation is given in section 4.4.

1. Experimental Setup and Evaluation Metrics

  We have implemented the proposed method in MATLAB on a system with 4 GB RAM and a 1.67 GHz Intel Core processor. The evaluation metric used here is accuracy, which in a multimodal biometric system is computed from the FAR (False Acceptance Rate) and the FRR (False Rejection Rate). FAR is the rate at which the system accepts a non-authorized person, which occurs due to wrong matching of the template with the input; FRR is the rate at which an authorized person is incorrectly rejected by the system. FAR is represented as

  FAR(t) = \frac{IMS}{NIRA}

  where IMS is the Imposter Matching Score and NIRA is the Number of Impostor Recognition Attempts. FRR is calculated as

  FRR(t) = \frac{GMS}{NGRA}

  where GMS is the Genuine Matching Score and NGRA is the Number of Genuine Recognition Attempts.

2. Dataset Description

  The ear image database contains 50 ear images of different persons captured with a Canon EOS 50D camera. The right ear of every person is captured a minimum of three times, and the database contains ear images of both males and females [19]. The face image database contains human face images captured by the same camera. Every image in the database has a bright homogeneous background and the subjects are in an upright frontal position. Along with different poses, four expressions, namely neutral, smile, laughter and sad, are included for every individual. The images are stored in JPEG format, each of size 640x480 pixels with 256 grey levels per pixel. The images are organized into two main directories, male and female. Each of these directories contains sub-directories named with numbers, where each index corresponds to an individual, and each sub-directory holds the different images of that subject, with names of the form n.jpg, where n is the image number for that subject [18].

3. Experimental Result

  The results obtained at the various stages of our method are described below.

  The first stage of our proposed multimodal biometric system is ear feature extraction. Fig. 5.1 shows the acquired ear image of a person, and Fig. 5.2 the preprocessed grayscale version of the acquired ear image. The preprocessed ear image is then given as input for feature extraction, for which we have used the simple geometric method. Fig. 5.3 shows how the edges of the ear image are detected in the feature extraction stage, and Fig. 5.4 shows the max line drawn based on the detected edges. Based on the max line, the inner- and outer-edge features of the ear image are extracted, yielding a matching score that is used at the recognition stage.

  The second stage of the multimodal biometric system is face image feature extraction. For this we have used the ULBP feature extraction technique, which improves the performance of the system. The image in Fig. 6.1 is the input preprocessed grayscale face image, which is given as input to the ULBP. The output of the ULBP feature extraction is shown in Fig. 6.2. Finally, the histogram of the features obtained through the ULBP technique is plotted, as shown in Fig. 6.3.

  Fig. 6.3 Histogram Features

    4.4. Performance Evaluation

    The performance analysis is made based on the evaluation metrics such as accuracy, FAR and FRR.
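The FAR and FRR of Section 4.1 can be computed from lists of genuine and impostor matching scores as sketched below, assuming distance scores that are accepted when below the threshold t; the paper does not state its score convention, so the comparison directions here are assumptions.

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, t):
    """FAR(t): impostor attempts wrongly accepted / all impostor attempts.
    FRR(t): genuine attempts wrongly rejected / all genuine attempts."""
    g = np.asarray(genuine_scores, dtype=float)
    i = np.asarray(impostor_scores, dtype=float)
    far = float(np.mean(i < t))    # IMS / NIRA
    frr = float(np.mean(g >= t))   # GMS / NGRA
    return far, frr

# Example sweep over the thresholds of Tables 1-3:
# for t in (80, 90, 100, 110):
#     print(t, far_frr(genuine, impostor, t))
```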

The accuracy curve for the ear and face multimodal biometric system using ULBP is shown in Fig. 7.1; it plots the accuracy obtained against the threshold value.

At the first level, the ear image features are tested. If the ear features do not match, the system requests the face image. In order to increase the accuracy of the biometric system as a whole, the individual results are combined at the matching score level. At the second level of experimentation, the matching scores from both the ear and the face traits are combined, and the final accuracy graph is plotted as shown in Fig. 7.1. The overall performance of the system has increased, showing an accuracy of 92% at a threshold level of 110, with a FAR of 3% and an FRR of 2% respectively. The FAR graph is shown in Fig. 7.2 and the FRR graph in Fig. 7.3. The accuracy obtained at different threshold values is shown in Table 1, the FAR in Table 2 and the FRR in Table 3.

Table 1. Accuracy obtained at different thresholds

Threshold     80   90   100   110
% Accuracy    72   78    83    92

Table 2. FAR obtained at different thresholds

Threshold     80   90   100   110
% FAR         10    8     6     3

Table 3. FRR obtained at different thresholds

Threshold     80   90   100   110
% FRR          7    5     3     2

  5. CONCLUSION

In this paper, a method called Uniform Local Binary Pattern (ULBP) has been proposed for feature extraction from the face image. The proposed feature extraction method provides better results than existing methods. It extracts only the uniform patterns of the image, those with at most two bitwise transitions; uniform patterns are the important patterns that represent salient image features such as edges and corners. The evaluation is made using accuracy, FRR and FAR. The designed method proves capable of improving the performance of the multimodal biometric system, and one of its chief advantages is the reduced dimensionality of the features extracted from the face image. The evaluation metrics, accuracy, FAR and FRR, reveal that the designed method provides better performance and an improved accuracy rate.

ACKNOWLEDGMENT

We take this opportunity to express our deepest gratitude and appreciation to all those who have helped us directly or indirectly towards the successful completion of this paper.

REFERENCES

  1. Gandhimathi Amirthalingam and G. Radhamani, "A Multimodal Approach for Face and Ear Biometric System", International Journal of Computer Science Issues, Vol. 10, No. 2, 2013.

  2. Jitendra B. Jawale and Anjali S. Bhalchandra, "The Human Identification System Using Multiple Geometrical Feature Extraction of Ear - An Innovative Approach", International Journal of Emerging Technology and Advanced Engineering, Vol. 2, No. 3, 2012.

  3. Michał Choraś, "Ear Biometrics Based on Geometrical Feature Extraction", Electronic Letters on Computer Vision and Image Analysis, Vol. 5, No. 3, pp. 84-95, 2005.

  4. Steven Cadavid, Mohammad H. Mahoor and Mohamed Abdel-Mottaleb, "Multi-modal Biometric Modeling and Recognition of the Human Face and Ear", IEEE International Workshop on Safety, Security and Rescue Robotics, pp. 1-6, 2009.

  5. Xiaona Xu and Zhichun Mu, "Feature Fusion Method Based on KCCA for Ear and Profile Face Based Multimodal Recognition", IEEE International Conference on Automation and Logistics, 2007.

  6. Michał Choraś, "Image Feature Extraction Methods for Ear Biometrics - A Survey", International Conference on Computer Information Systems and Industrial Management Applications, pp. 261-265, 2007.

  7. P. Ramesh Kumar and K. Nageswara Rao, "Pattern Extraction Methods for Ear Biometrics - A Survey", World Congress on Nature and Biologically Inspired Computing, pp. 1657-1660, 2009.

  8. Yeong Gon Kim, Kwang Yong Shin, Eui Chul Lee and Kang Ryoung Park, "Multimodal Biometric System Based on the Recognition of Face and Both Irises", International Journal of Advanced Robotic Systems, Vol. 9, No. 65, 2012.

  9. A. K. Jain and A. Ross, "Multibiometric Systems", Communications of the ACM, Vol. 47, No. 1, pp. 34-40, 2004.

  10. A. Rattani, D. R. Kisku, M. Bicego and M. Tistarelli, "Robust Feature-Level Multibiometric Classification", Proceedings of the Biometric Consortium Conference: A Special Issue in Biometrics, pp. 1-6, 2006.

  11. Steven Cadavid, Mohammad H. Mahoor and Mohamed Abdel-Mottaleb, "Multi-modal Ear and Face Modeling and Recognition", IEEE International Conference on Image Processing, pp. 4137-4140, 2009.

  12. Dakshina Ranjan Kisku, Phalguni Gupta, Hunny Mehrotra and Jamuna Kanta Sing, "Multimodal Belief Fusion for Face and Ear Biometrics", Intelligent Information Management, Vol. 1, pp. 166-171, 2009.

  13. Wei Jin, Bin Li and Ming Yu, "Feature Extraction Based on Equalized ULBP for Face Recognition", International Conference on Computer Science and Electronics Engineering, Vol. 2, pp. 532-536, 2012.

  14. Ning Wang, Qiong Li, Ahmed A. Abd El-Latif, Jialiang Peng and Xiamu Niu, "Multibiometric Complex Fusion for Visible and Thermal Face Images", International Journal of Signal Processing, Image Processing and Pattern Recognition, Vol. 6, No. 3, 2013.

  15. Shekhar Karanwal, "Secure and Reliable Multimodal Biometric Systems Using Two and Three Biometric Traits", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 3, No. 7, 2013.

  16. Priya Metri, Jayshree Ghorpade and Ayesha Butalia, "Facial Emotion Recognition Using Context Based Multimodal Approach", International Journal of Interactive Multimedia and Artificial Intelligence, Vol. 1, No. 4, pp. 12-15, 2012.

  17. Samuel Adebayo Daramola and Oladejo Daniel Oluwaninyo, "Automatic Ear Recognition System Using Back Propagation Neural Network", International Journal of Video & Image Processing and Network Security IJVIPNS-IJENS, Vol. 11, No. 01, 2011.
