Face Detection and Recognition Using Various Frames of A Video

DOI : 10.17577/IJERTV2IS110651


Rajeswari. K, Sakthivel Punniakodhi, Shafiya Begum. R

PG Scholar, Department of Information Technology, Sri Manakula Vinayagar Engineering College, Puducherry, INDIA

PG Scholar, Department of Computer Science Engineering, Sri Manakula Vinayagar Engineering College, Puducherry, INDIA

PG Scholar, Department of Information Technology, Sri Manakula Vinayagar Engineering College, Puducherry, INDIA

Abstract

Face detection and recognition is a biometric technology based on identifying and verifying a person by their face. Various algorithms can be used to compute and process the face for specific purposes. We propose a face detection and recognition system that identifies a person from the frames of an input video. Using the neural network concept, we develop an algorithm called local mapping analysis (neural nets using statistical cluster information). This algorithm is designed to avoid problems that occur when recognizing faces under illumination variation, pose variation, etc. We use unique features such as the distance between the eyes, the shape of the cheekbones, and the length of the jaw line.

Keywords: Neural network concept, local mapping analysis, illumination and pose variation

  1. Introduction

    Face detection and recognition is among the most relevant applications of image analysis. The challenge is to build an automated system that matches the human ability to detect and recognize a person. Face detection is the process of finding the faces in a given image, and face recognition is the process of verifying a detected face and reporting whether or not it matches a known one.

    Face detection and recognition is a computer vision based application used for security, access management, biometrics, personal security, entertainment, leisure, etc. Basically, face recognition checks the input face against the faces present in the database and reports whether it matches. Before the recognition process, we must detect the face in the given image; this face detection step produces either a cropped face or a message that no face was detected.

    Previous face recognition methods do not perform perfectly because of various problems such as the person's age, illumination, variation in pose, occlusion, expressions, etc. A general face recognition system includes two components: an image processing (face detection) module and a face recognition module. We propose a different method that uses an improved skin colour model algorithm for detecting faces in the image.

    Various methods are available for skin colour modelling, such as colour image analysis using PCA, basic colour extraction, and skin colour detection under changing light conditions. We use a skin colour model based only on the RGB model, together with CMYK colour, for detection of the face, so our method produces more accurate results than the older methods.
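As a rough illustration of explicit RGB skin-colour thresholding (a widely used rule of this family, not necessarily the exact model used here), a pixel can be classified by simple bounds on its channels:

```python
import numpy as np

def is_skin_rgb(r, g, b):
    """Classic explicit RGB skin rule; one common approximation,
    not necessarily this paper's exact model."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_mask(image):
    """Return a boolean mask of skin-coloured pixels for an RGB
    image of shape (h, w, 3)."""
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            r, g, b = (int(v) for v in image[y, x])
            mask[y, x] = is_skin_rgb(r, g, b)
    return mask
```

The resulting mask can then be used to segment candidate face regions before any further processing.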

    The next process is a novel algorithmic analysis that extracts local features of the face, such as the length of the eyes and the width of the nose. This is a neural network based algorithm called neural nets using statistical cluster information.

    In this project, we first convert the video into frames and perform the face detection process; the detected face is then converted to a black and white (grayscale) image for processing. Grayscale images are the ones most commonly used for recognition in any type of system. From that image, our algorithm performs feature extraction and stores the features for the recognition process.
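The grayscale conversion step can be sketched as below (frame decoding itself would typically use a video library such as OpenCV's VideoCapture, omitted here; the luma weights are the standard ITU-R BT.601 values):

```python
import numpy as np

def to_grayscale(frame):
    """Convert an RGB frame of shape (h, w, 3) to a grayscale
    image of shape (h, w) using ITU-R BT.601 luma weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return (frame.astype(np.float64) @ weights).astype(np.uint8)
```

Each colour frame from the video is passed through this function before feature extraction.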

    The figure below shows the various nodal points present in the face.

    Fig. 1. Various nodal points.

  2. Proposed System

    We design and implement an automated face detection and recognition system, whose modules are listed below:

    1. Face Detection

    2. Local mapping analysis algorithm: creating the database and saving the extracted features

    3. Video to Frames

    4. Performing Face Detection and Feature Extraction for each frame (1, 2… n)

    5. Finding the mean of Feature extraction of various frames of the same face

    6. Recognition

      Fig. 2. Overall System.

      1. Face Detection

        Face detection is performed in 3 steps, as given below:

        Step 1: Skin colour processing is performed to segment the face in the image

        Step 2: Neural network computing is performed to acquire the rotation angle

        Step 3: An upright face detector decides whether the region is a face or not

        If it is a face, the detected face is provided as output for the next process to start. [3, 6]
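The three steps above can be sketched as a pipeline; the helper functions here are hypothetical stubs standing in for the real skin model, rotation network, and upright detector:

```python
import numpy as np

def segment_skin(image):
    """Step 1 stub: return the skin-coloured region (here simply the
    whole image; a real system would apply the skin colour model)."""
    return image

def estimate_rotation(face_region):
    """Step 2 stub: a real system would use a neural network to
    estimate the in-plane rotation angle; assume upright here."""
    return 0.0

def is_upright_face(face_region):
    """Step 3 stub: a real system would run an upright face
    detector; accept any non-empty region here."""
    return face_region.size > 0

def detect_face(image):
    """Chain the three steps and return the face region, or None."""
    region = segment_skin(image)
    angle = estimate_rotation(region)  # would be used to de-rotate
    return region if is_upright_face(region) else None
```

The structure mirrors the pipeline: segment, de-rotate, verify, and only then hand the face on to feature extraction.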

      2. Local Mapping Analysis

        The neural network is trained on images using unsupervised learning, and it produces a low-dimensional, discretized representation of the training samples called a map. [5]

        The output is used to report the person as authenticated or unauthenticated, depending on the result of the face recognition.

        Fig. 4. Feature Extraction.
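The unsupervised, discretized map described above resembles a self-organizing map; a minimal 1-D sketch is given below, with hypothetical parameters (unit count, learning rate, epochs) rather than this paper's exact network:

```python
import numpy as np

def train_som(data, n_units=8, epochs=50, lr=0.5, seed=0):
    """Train a tiny 1-D self-organizing map: each input vector pulls
    its best-matching unit (and its neighbours) toward it, yielding a
    low-dimensional, discretized map of the training set."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((n_units, dim))
    for epoch in range(epochs):
        # Neighbourhood radius shrinks as training proceeds.
        radius = max(1.0, n_units / 2 * (1 - epoch / epochs))
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist = np.abs(np.arange(n_units) - bmu)
            influence = np.exp(-(dist ** 2) / (2 * radius ** 2))
            weights += lr * influence[:, None] * (x - weights)
    return weights

def map_to_unit(weights, x):
    """Index of the best-matching unit for a new vector."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```

After training, inputs from different clusters land on different map units, which is the grouping behaviour the feature mapping relies on.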

        The feature mapping groups the data with or without knowing the class of the input data, and the extracted features can be used to detect the face in the image. We provide the training data as face images, taken as vectors x of length n.

        Fig. 3. Local Mapping Analysis.

      3. Video to Frames

        Converting the video into frames is the process of reading each frame of the video and saving it as an image. If the input video is A, the image sequence can be written as

        A = { f_i }, i = 1, 2, …, n

        where f_i is the i-th frame and n is the total number of frames.

      4. Face Detection and Extraction

        Face detection and extraction is performed in 4 steps, as given below:

        Step 1: Skin colour processing is performed to segment the face in the image

        Step 2: Neural network computing is performed to acquire the rotation angle [5]

        Step 3: An upright face detector decides whether the region is a face or not

        Step 4: Perform Local Mapping Analysis for feature extraction.

        We repeat these 4 steps for all the frames gathered from the video.

      5. Mean of the Feature Extraction

        To compute the mean over all the features extracted from the frames of the video, we add all the extracted feature vectors and divide by the number of frames. We then use the mean feature vector for comparison with the features in the database to provide the result.

        Fig. 5. Feature Extraction of Various Frames
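The averaging step above can be sketched as follows, assuming each frame yields a feature vector of the same fixed length:

```python
import numpy as np

def mean_feature(feature_vectors):
    """Average the feature vectors extracted from the n frames:
    element-wise sum divided by the number of frames."""
    stacked = np.vstack(feature_vectors)
    return stacked.sum(axis=0) / stacked.shape[0]
```

Averaging over frames smooths out per-frame variation (pose, illumination) before the database comparison.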

      6. Recognition

        We then compare the feature vectors saved in the database with the mean feature vector, retrieve the matching data from the database, and provide the authentication result. [2]
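One way to realize this comparison is a nearest-neighbour match with a distance threshold; the threshold value and dictionary-based database below are hypothetical illustrations, not the paper's exact matcher:

```python
import numpy as np

def recognise(mean_vec, database, threshold=10.0):
    """Compare the mean feature vector against every stored entry
    and return the closest identity, or None if no entry is within
    the (hypothetical) distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        dist = float(np.linalg.norm(mean_vec - stored))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

Returning None for distant queries models the unauthenticated case described above.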

  3. Conclusion

This paper mainly focuses on solving problems of the existing systems, such as illumination variation and pose variation, by using local mapping analysis and applying it to video input to perform face detection and recognition. Future work is to apply super-resolution concepts to low-resolution input before performing the face detection and recognition process.

References

  1. A.U. Batur, M.H. Hayes, Linear Subspaces for Illumination Robust Face Recognition, Dec. 2001.

  2. P.N. Belhumeur, J.P. Hespanha, D.J. Kriegman, Recognition Using Class Specific Linear Projection, July 1997.

  3. M. Belkin, P. Niyogi, Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering, 2001.

  4. M. Belkin, P. Niyogi, Using Manifold Structure for Partially Labeled Classification, 2002.

  5. M. Brand, Charting a Manifold, Neural Information Processing Systems, 2002.

  6. A Simple and Efficient Face Detection Algorithm for Video Database Applications, in Proceedings of the International Conference on Image Processing, 2000.
