Deep Learning Based Recognition Of Persons Using Eye And Ear

DOI: 10.17577/IJERTCONV11IS03035


K. Prema1, S. Aburijas2, S. Mohammedunish3, A. Syed Apsar4

Department of Information Technology, M.A.M. College of Engineering and Technology, Tiruchirappalli, India.

prema.it@mamcet.com, aburijas.it19@mamcet.com, mohammedunish.it19@mamcet.com, syedapsar.it19@mamcet.com

Abstract- Data augmentation is carried out to artificially increase the number of samples in the eye and ear database. The augmented dataset is subjected to deep-learning-based feature extraction and classification for ear recognition. Direction information serves as one of the most important cues for recognition; in this project we propose a general framework for a direction-based method, named Complete Direction Representation (CDR), which reveals the direction representation in a comprehensive and complete way. Unlike traditional methods, CDR emphasizes the use of direction information through multi-scale, multi-direction-level and multi-region strategies, as well as feature selection. The InceptionV3 deep learning architecture is used as the feature extraction mechanism, with the input to its average pooling layer taken as the feature vector; these deep features are then classified by a classical model to obtain a hybrid deep learning-classical model. The potential efficiency of the deep network is tested on the IITD dataset and the AMI ear dataset. Ear biometrics finds applications in crime investigation, attendance monitoring, security and related areas. Ear biometrics using index segmentation for grayscale images is also proposed in this paper and has been implemented using edge detection and matching techniques.

Keywords: Biometrics, Ear recognition, Machine learning, Deep networks, Ensemble classifiers

Introduction

Ear recognition, a field within biometrics, concerns itself with the use of images of the ears to identify individuals. Much like fingerprints, ears are unique to an individual; even identical twins can have distinguishable ears. Additionally, ear-based recognition systems can efficiently extend the capabilities of face recognition systems: at a high roll angle, toward the side view, face recognition performance is very low, whereas ear recognition at that angle generally yields high performance. Biometrics refers to the physiological or behavioural traits of a person which can be used to identify him or her uniquely. Biometric authentication (or realistic authentication) is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Today, in many situations the identity of a person needs to be proven, for example to use an ATM or to gain access to a restricted area. This can be done with the conventional system of tokens, passwords, PINs, etc., but these can be lost, stolen, shared or destroyed. Hence, we apply a biometric system for authentic identification of an individual. All the conventional security systems have some drawbacks. The prime reason behind the inclination towards ear biometrics is that the ear possesses all the desirable properties, i.e., universality, uniqueness, permanence and collectability. Also, the appearance of the ear does not vary with changes in pose and facial expression.

Human ear recognition is an emerging biometric recognition technology, and in recent years more and more scholars have focused on it [1-3]. Compared with other biological characteristics, the ear is not affected by aging or facial expression [4], and the collection of human ear data can be done without interruption. Due to the spread of the new coronaviruses around the world, people need protective measures such as goggles, masks and disposable gloves in daily life. With currently widespread identification technologies, such as face recognition, fingerprint recognition and iris recognition, people need to cooperate during the image collection stage, and some may be concerned about personal hygiene because masks have to be removed during face recognition or because people have to touch the same image acquisition equipment. Human ears, in contrast, are usually exposed, so ear images can be collected without contact, making ear biometrics another hot topic in the area of personal authentication. An ear recognition system can be divided into several stages: ear detection, ear alignment and ear recognition. Ear detection is the first stage of an ear recognition system, and it should be robust under pose and lighting variations and partial occlusion.

Literature Survey

Biometric recognition, such as face recognition, fingerprint recognition and iris recognition, has played an important role in personal authentication in modern society. However, the spread of new coronaviruses around the world may cause trouble for these popular biometrics, because people usually wear masks or sunglasses. The human ear is a biological feature which has the characteristics of universality, stability and easy collection, and ear recognition can be applied under unconstrained conditions: people do not have to take off masks or glasses if the ears are visible. For an ear recognition system, ear detection is the first important part, which makes research on ear detection a hot topic. In this paper, we apply a single-stage target detection method, the CenterNet deep learning network, for real-time ear detection. Human ear recognition is an emerging biometric recognition technology, and in recent years more and more scholars have focused on it. Compared with other biological characteristics, the ear is not affected by aging or facial expression, and the collection of human ear data can be done without interruption. With currently widespread identification technologies, such as face recognition, fingerprint recognition and iris recognition, people need to cooperate during the image collection stage, and some may be concerned about personal hygiene because masks have to be removed during face recognition or because people have to touch the same image acquisition equipment. Human ears, in contrast, are usually exposed, so ear images can be collected without contact, making ear biometrics another hot topic in the area of personal authentication. An ear recognition system can be divided into several stages: ear detection, ear alignment, and ear recognition.

Ear detection based on the CenterNet network adopts an anchor-free scheme. The anchors of CenterNet appear only at the current target position instead of being spread over the entire image.
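To make the anchor-free idea concrete, the following is a minimal sketch (not the authors' code) of how a CenterNet-style detector turns a predicted center heatmap into ear bounding boxes. The inputs `heatmap`, `wh` and `offset` are assumed to be the output heads of some backbone network, and the downsampling `stride` of 4 is the usual CenterNet default, taken here as an assumption.

```python
import torch
import torch.nn.functional as F

def decode_centers(heatmap, wh, offset, k=10, score_thresh=0.3, stride=4):
    """heatmap: (1, H, W) ear-center scores; wh, offset: (2, H, W) size/offset heads."""
    heat = torch.sigmoid(heatmap)                       # scores in [0, 1]
    # 3x3 max pooling keeps only local maxima (a cheap form of NMS)
    peaks = F.max_pool2d(heat[None], 3, stride=1, padding=1)[0]
    heat = heat * (heat == peaks).float()

    scores, idx = heat.view(-1).topk(k)                 # top-k candidate centers
    H, W = heat.shape[-2:]
    ys = torch.div(idx, W, rounding_mode="floor")
    xs = idx % W

    boxes = []
    for s, y, x in zip(scores, ys, xs):
        if s < score_thresh:
            continue
        cx = (x + offset[0, y, x]) * stride             # center in input-image pixels
        cy = (y + offset[1, y, x]) * stride
        w, h = wh[0, y, x] * stride, wh[1, y, x] * stride
        boxes.append((float(cx - w / 2), float(cy - h / 2),
                      float(cx + w / 2), float(cy + h / 2), float(s)))
    return boxes
```

Because boxes are read directly at heatmap peaks, no anchor boxes need to be tiled over the image, which is what makes this scheme anchor-free.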

  • Machine learning detection algorithm

    Human ear recognition methods based on edge extraction or template matching are susceptible to lighting or posture variation and complex backgrounds. Therefore, some researchers have proposed a series of human ear detection methods based on learning algorithms.

  • Deep learning detection algorithm

    The deep learning-based ear detection methods are mainly divided into two directions: two-stage detectors and one-stage detectors. For two-stage detectors, Zhang applied Faster R-CNN for ear detection, and experimental results on the public USTB ear datasets showed satisfying performance (a minimal usage sketch of such a two-stage detector is given after this list).
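The sketch below shows how a pre-trained two-stage detector of the kind cited above can be called through torchvision's Faster R-CNN API. It is only an illustration of the detector interface; for ear detection the model would have to be fine-tuned on an ear dataset such as USTB, which is not shown here, and the score threshold is an assumption.

```python
import torch
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                           FasterRCNN_ResNet50_FPN_Weights)
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Two-stage detector: a region proposal network followed by a box classifier.
detector = fasterrcnn_resnet50_fpn(
    weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT).eval()

def detect(path, score_thresh=0.5):
    img = to_tensor(Image.open(path).convert("RGB"))    # (3, H, W), values in [0, 1]
    with torch.no_grad():
        pred = detector([img])[0]                       # dict with boxes, labels, scores
    keep = pred["scores"] > score_thresh
    return pred["boxes"][keep], pred["scores"][keep]
```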

EXISTING SYSTEM

The 2D-WMBPCA algorithm is among the first of its kind to bring hyperspectral-based techniques to the field of single-image ear recognition. Consequently, this section is divided into two parts: a brief literature review on ear recognition techniques, including PCA-based techniques and the current state-of-the-art learning-based algorithms, and a description of the proposed 2D Wavelet-based Multi-Band Principal Component Analysis (2D-WMBPCA). 2D-WMBPCA is inspired by state-of-the-art PCA-based techniques for hyperspectral images and by wavelet-based ear recognition algorithms. The method first performs a 2D non-decimated discrete dyadic wavelet transform on the input ear image, splitting the image into its four subbands (LL, LH, HL, HH). Each resulting subband is then pre-processed to improve its contrast and fed to a multiple-frame generation algorithm, which generates a number of frames based on the magnitude of the subband coefficients. Finally, Principal Component Analysis (PCA) is applied to each resulting set of multiple frames, extracting its eigenvectors. The eigenvectors from all four wavelet subbands are then concatenated and used for matching.
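The following is a rough sketch of our reading of the 2D-WMBPCA pipeline, not the original implementation: a stationary (non-decimated) wavelet decomposition with PyWavelets, frame generation by binning coefficient magnitudes, and PCA per subband. The number of frames, the number of PCA components and the `db1` wavelet are all assumptions for illustration.

```python
import numpy as np
import pywt                                  # PyWavelets
from sklearn.decomposition import PCA

def wmbpca_features(ear_img, n_frames=4, n_components=2, wavelet="db1"):
    ear_img = ear_img.astype(np.float32)
    h, w = (d - d % 2 for d in ear_img.shape)            # swt2 needs even dimensions
    ear_img = ear_img[:h, :w]
    # level-1 non-decimated 2-D wavelet transform -> LL, LH, HL, HH subbands
    LL, (LH, HL, HH) = pywt.swt2(ear_img, wavelet, level=1)[0]

    features = []
    for band in (LL, LH, HL, HH):
        band = (band - band.min()) / (band.max() - band.min() + 1e-8)  # contrast stretch
        # build multiple "frames" by binning coefficients according to their magnitude
        edges = np.linspace(0.0, 1.0, n_frames + 1)
        frames = [np.where((band >= lo) & (band < hi), band, 0.0).ravel()
                  for lo, hi in zip(edges[:-1], edges[1:])]
        pca = PCA(n_components=n_components)
        pca.fit(np.stack(frames))                        # rows are the generated frames
        features.append(pca.components_.ravel())         # eigenvectors of this subband
    return np.concatenate(features)                      # matched with e.g. cosine distance
```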

PROPOSED SYSTEM

The proposed system is developed to recognize the ear and eye using deep learning models. The external shape of the human ear has distinguishing features that differ significantly from person to person; research shows that even the ears of identical twins are different. Importantly, the ear shape of a person remains stable between the ages of 8 and 70. The proposed system uses various augmentation techniques, such as random rotation and the addition of noise, to increase the number of images for each class and ensure that the deep learning model has sufficient images for training (a small sketch of this step is given below). In this project, we expand on these methodologies and create an effective deep learning hybrid approach for ear and eye recognition by employing different techniques for network training and classification, with the aim of identifying non-cooperative individuals in unconstrained environments.
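A small sketch of the kind of augmentation described above (random rotation plus additive noise). The rotation range, noise level and number of copies per image are our assumptions, not values stated in the paper.

```python
import numpy as np
from PIL import Image

def augment(img: Image.Image, n_copies=5, max_angle=15, noise_sigma=10, seed=None):
    """Return n_copies randomly rotated, noisy versions of an ear/eye image."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_copies):
        angle = rng.uniform(-max_angle, max_angle)        # small random rotation
        rotated = img.rotate(angle, resample=Image.BILINEAR)
        arr = np.asarray(rotated, dtype=np.float32)
        arr += rng.normal(0.0, noise_sigma, size=arr.shape)   # additive Gaussian noise
        out.append(Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8)))
    return out
```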

Ear detection is a major step within the ear recognition algorithmic process. While conventional approaches for ear detection have been used in the past, Faster Region-based Convolutional Neural Network (Faster R-CNN) based detection methods have recently achieved superior detection performance in various benchmark studies, including those on face detection. In addition, the system's ear detection performance remains high even when the test images come from uncontrolled settings with a wide variety of image quality, illumination and ear occlusion. With the recent significant advances in technology, communication and digital applications, there is a need for automated, advanced and secure human authentication approaches. Biometrics provides such a solution for multiple security, commercial and digital applications.

SYSTEM DESIGN

System architecture is the conceptual model that defines the structure, behaviour and other views of a system, realised as an allocated arrangement of physical elements which provides the design solution for a consumer system. An architecture description is a formal description and representation of a system, organized in a way that supports reasoning about the structures and behaviours of the system. A system architecture can comprise system components, the externally visible properties of those components, and the relationships (e.g. the behaviour) between them. It can provide a plan from which products can be procured, and systems developed, that will work together to implement the overall system. There have been efforts to formalize languages to describe system architecture; collectively these are called architecture description languages (ADLs).

[Figure: overall system architecture — eye and ear images form the training set, which is preprocessed and passed to the classification model.]

MODULE DESCRIPTION

  • Input Image

  • Preprocessing

  • Feature Extraction

  • Classification

  • Eye and Ear Detection

[Figure: module pipeline — preprocessing, feature extraction, deep learning CNN model, recognition output.]

INPUT IMAGES

This input stage performs the process of collecting multiple datasets from various sources for the training and testing sets. Previous works usually rely on a wide spectrum of frame-based analysis tools; the referenced approach, however, differs from conventional methods by using thermal images as inputs.


The field of digital image processing refers to the processing of digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value; these elements are referred to as picture elements, image elements, pels or pixels.

The academic community has achieved fruitful breakthroughs in the field of background subtraction in the past few decades. The simplest methods use only a statistical measure, such as the median or mean over multiple frames, to model the static background. In recent years, online subspace learning approaches have made significant progress on background subtraction from live video streams in a real-time, online fashion. These online models can greatly speed up processing by updating the low-rank structure with only one frame at a time.
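A minimal illustration of the "statistical background" idea mentioned above: the static background is modelled as the per-pixel median over a stack of frames, and pixels that deviate strongly from it are flagged as foreground. The threshold value is an assumption.

```python
import numpy as np

def median_background_mask(frames, threshold=30):
    """frames: array of shape (T, H, W) grayscale video frames."""
    frames = np.asarray(frames, dtype=np.float32)
    background = np.median(frames, axis=0)               # static-scene estimate
    # foreground mask per frame: large deviation from the background model
    return np.abs(frames - background) > threshold
```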

[Figure: input module — acquired data is fetched, the ear and eye images are processed, and a preprocessed image is produced.]

The image input module enables you to explore, configure and acquire data from the installed and supported input sources, so that you can change settings and see the changes dynamically applied to your image data.

PREPROCESSING

All the datasets are pre-processed in this stage. The pre-processing stage performs removal of blurriness and grayscale conversion. The main purpose of pre-processing is to separate out the meaningful features.
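A sketch of the pre-processing step described above, using OpenCV: grayscale conversion followed by a simple unsharp mask to reduce blurriness. The kernel size, sharpening weights and output size are illustrative assumptions.

```python
import cv2

def preprocess(path, size=(299, 299)):
    img = cv2.imread(path)                                    # BGR image from disk
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)              # grayscale conversion
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    sharpened = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)  # unsharp masking
    return cv2.resize(sharpened, size)                        # e.g. InceptionV3 input size
```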

FEATURE EXTRACTION

Feature extraction involves reducing the number of resources required to describe a large set of data. It is a general term for methods of constructing combinations of variables that get around these problems while still describing the data with sufficient accuracy, and it plays an important role in image processing. Feature extraction is part of the dimensionality reduction process, in which an initial set of raw data is divided and reduced into more manageable groups. These features are easy to process, yet still able to describe the actual dataset with accuracy and originality. In this work the pre-trained InceptionV3 model is used: its average pooling layers are removed and auxiliary convolution layers are added to detect objects of large sizes. The convolutional neural network (CNN) classification technique is then applied; classification is proposed for detecting defects, and the useful object is detected by the foreground gating method.

[Figure: feature extraction — the preprocessed image is passed to the InceptionV3 model to extract ear and eye features.]
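The sketch below shows one common way to use a pre-trained InceptionV3 as a fixed feature extractor, as the abstract describes (the pooled vector feeding the original fully connected layer is taken as the feature). It uses the torchvision API; the exact training configuration in the paper is not specified, so this is an illustration, not the authors' setup.

```python
import torch
import torch.nn as nn
from torchvision.models import inception_v3, Inception_V3_Weights
from torchvision import transforms
from PIL import Image

model = inception_v3(weights=Inception_V3_Weights.DEFAULT)
model.fc = nn.Identity()                      # keep the 2048-d average-pooled features
model.eval()

prep = transforms.Compose([
    transforms.Resize((299, 299)),            # InceptionV3 input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def extract_features(path):
    x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(x).squeeze(0)            # 2048-dimensional feature vector
```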

EYE AND EAR DETECTION

This module performs the process of ear and eye detection and extracts the exact locations of the ear and eye. The method is then used to refine the background and gate the foreground object accurately. Methods without such a step are completed here by introducing a background refining stage as the second stage of the detection framework. In this stage, the misalignment problem is handled by a pairwise non-local operation between the original frames: instead of a pixel-wise multiplication, which is sensitive to misalignments, the response at a position in the feature maps of the original frames is computed as a weighted average of the features at all positions in the feature maps.
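A compact sketch of the pairwise non-local operation just described: the response at each position is a softmax-weighted average over the features at all positions of the other frame, which is more tolerant to misalignment than a pixel-wise product. The scaling by the channel dimension is our assumption for numerical stability.

```python
import torch
import torch.nn.functional as F

def non_local(feat_a, feat_b):
    """feat_a, feat_b: feature maps of shape (C, H, W) from two frames."""
    C, H, W = feat_a.shape
    q = feat_a.reshape(C, H * W)                      # queries from frame A
    k = feat_b.reshape(C, H * W)                      # keys/values from frame B
    attn = F.softmax(q.t() @ k / C ** 0.5, dim=-1)    # (HW, HW) pairwise weights
    out = (attn @ k.t()).t().reshape(C, H, W)         # weighted average over all positions
    return out
```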

CLASSIFICATION

Image classification is perhaps the most important part of digital image analysis. Early computer vision models relied on raw pixel data as the input to the model. The classification method is used to classify the object easily so that the exact object is detected accurately; it is also used to classify and refine the detected objects.

[Figure: classification — the feature-extracted data is passed to the CNN classifier and the results are stored.]
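To illustrate the hybrid "deep features plus classical model" idea from the abstract, the following sketch classifies the InceptionV3 feature vectors with a standard SVM. The train/test split, kernel and the assumption of integer subject labels are illustrative choices, not details taken from the paper.

```python
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_identifier(features, labels):
    """features: (N, 2048) deep feature vectors; labels: (N,) subject IDs."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(X_tr, y_tr)                                  # fit the classical classifier
    print("recognition accuracy:", clf.score(X_te, y_te))
    return clf
```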

CONCLUSION

A hybrid method for an ear and eye recognition system was presented, based on deep features and using a deep learning algorithm for classification. The accuracy results are obtained using the proposed CNN method. A thorough analysis of the existing methods used for ear and eye identification was discussed. Furthermore, the project discussed and investigated the success of using the ear as a primary biometric for identification and verification. It was found that other works struggled to identify the ear when the pose and angle of the image were changed. For training we used a collection of ear and eye images from various databases to avoid over-fitting and to make the system robust in the presence of noise, pose variation and partial ear occlusion. Our proposed ear and eye detection system yields a maximum of correct detections when tested on various databases. In addition, a study was performed on ear identification benchmarks and their performance on other CNN models, measured by standard evaluation metrics.

FUTURE ENHANCEMENT

Future work on unsupervised clustering of proposals could improve precision by detecting irrelevant proposals, beyond providing some solutions for the unsupervised classification task. Real-time video recordings are used to detect marine animals using efficient object detection methods.

REFERENCES

[1] Yu Hwan Kim and Kang Ryoung Park, "PSS-Net: Parallel Semantic Segmentation Network for Detecting Marine Animals in Underwater Scene," 2022.

[2] Xi Xu, Yi Qin, Dejun Xi, Ruotong Ming and Jie Xia, "MulTNet: A Multi-Scale Transformer Network for Marine Image Segmentation toward Fishing," 2022.

[3] Yujie Li, Chunyan Ma and Tingting Zhang, "Underwater Image High Definition Display Using the Multilayer Perceptron and Color Feature-Based SRCNN," 2019.

[4] Suchita Nanaware and Rajveer Shastri, "Passive Acoustic Detection and Classification of Marine Mammal Vocalizations," 2014.

[5] Jasmine Lopez and Jon Schoolmaker, "Automated Detection of Marine Animals Using Multispectral Imaging," 2014.