Sign Language Recognition using Machine Intelligence for Hearing Impairment Person

DOI: 10.17577/IJERTCONV10IS09026


Gowtham1, S. Karthick2, T. Karthikeyan3
1,2,3Department of Electronics and Communication Engineering, Kongunadu College of Engineering and Technology, Trichy.

Dr. P. Elayaraja4
4Associate Professor, Department of Electronics and Communication Engineering, Kongunadu College of Engineering and Technology, Trichy.

Abstract:- People with impaired speech and hearing use sign language as a form of communication. They use sign-language gestures as a tool of non-verbal communication to express their emotions and thoughts to other people. Conversing with people who have a hearing disability is a major challenge: deaf and mute people communicate through hand-gesture sign language, and hearing people often cannot recognize the signs being made. There is therefore a need for systems that recognize the different signs and convey the information to hearing people. Because most people find these expressions difficult to understand, trained sign-language interpreters are needed during medical and legal appointments and during educational and training sessions, and demand for these services has grown over the past few years. Other services, such as video remote human interpreting over a high-speed Internet connection, provide an easy-to-use sign-language interpreting service, but they still have major limitations. To address this problem, we apply artificial intelligence technology to analyze the user's hand with finger detection. In the proposed system we design a vision-based system for real-time environments, and then use a deep learning algorithm, the convolutional neural network (CNN), to classify the sign and provide a label for the recognized sign with a voice alert.

Keywords: Machine learning, disability application, sign language recognition, image processing, convolutional neural networks, artificial intelligence

    1. INTRODUCTION

Communication is a vital tool in human existence. It is a basic and effective means of sharing thoughts, feelings, and opinions. However, a considerable fraction of the world's population lacks this ability: many people are affected by hearing loss, speech impairment, or both. A partial or complete inability to hear in one or both ears is known as hearing loss. Muteness, on the other hand, is a disability that impairs speaking and leaves the affected person unable to speak. If deaf-muteness occurs during childhood, language acquisition is hindered, resulting in language impairment, also referred to as hearing impairment. These ailments are among the most common disabilities worldwide. Statistical reports on physically challenged children over the past decade reveal an increase in the number of neonates born with hearing defects, which creates a communication barrier between them and the rest of the world.

According to the World Health Organization (WHO), the number of people affected by hearing disability in 2005 was close to 278 million worldwide. Ten years later, this number had jumped to 360 million, roughly a 29% increase, and it has kept growing since. The latest WHO report disclosed that 466 million people were affected by hearing loss in 2019, about 5% of the world population, with 432 million (or 83%) of them being adults and 34 million (17%) being children. WHO also estimated that the number would double (i.e., 900 million people) by 2050. For this growing deaf-mute population, there is a need to break the communication barrier that adversely affects their lives and social relationships. Sign languages are used as a primary means of communication by deaf and hard-of-hearing people worldwide. They are the most potent and effective way to bridge the communication gap and support social interaction between them and hearing people. Sign-language interpreters help close this gap by translating sign language into spoken words and vice versa. However, the challenges of relying on interpreters are the varied structures of sign languages combined with the meager number of skilled sign-language interpreters across the world. According to the World Federation of the Deaf, over three hundred sign languages are used by over seventy million people worldwide. Hence the need for a technology-based system that can complement conventional sign-language interpreters.

2. THEORETICAL BACKGROUND

    1. Machine Learning

Machine learning algorithms [1-5] are often categorized as supervised or unsupervised. Supervised algorithms require a data scientist or data analyst with machine learning skills to provide both input and desired output, in addition to furnishing feedback about the accuracy of predictions during algorithm training. Data scientists determine which variables, or features, the model should analyze and use to develop predictions. Once training is complete, the algorithm applies what it has learned to new data. Unsupervised algorithms do not need to be trained with desired outcome data; instead, they take an iterative approach to reviewing data and arriving at conclusions. Deep learning methods built on neural networks are used for more complex processing tasks than classical supervised-learning systems, including image recognition, speech-to-text, and natural language generation. These neural networks work by combing through millions of examples of training data and automatically identifying often subtle correlations between many variables. Once trained, a network can use its bank of associations to interpret new data. These algorithms have only become feasible in the age of big data, as they require massive amounts of training data.
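To make the supervised/unsupervised distinction concrete, the following is a minimal sketch contrasting the two paradigms on the same synthetic data. scikit-learn is assumed here purely for illustration; the paper itself names OpenCV, TensorFlow, and Keras as its toolchain.

```python
# Supervised vs. unsupervised learning on the same data (illustrative).
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic 2-D points grouped around three centers.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: the algorithm receives both inputs X and desired outputs y.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: only inputs X are given; the algorithm groups them itself.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("first ten cluster assignments:", clusters[:10])
```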

    2. Image Processing

In imaging science, image processing is the processing of images using mathematical operations, applying any form of signal processing for which the input is an image, a series of images, or a video, such as a photograph or video frame; the output may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. Images can also be processed as three-dimensional signals, with the third dimension being time or the z-axis. Image processing usually refers to digital image processing, but optical and analog image processing are also possible; the techniques discussed here apply to all of them. The acquisition of images (producing the input image in the first place) is referred to as imaging.

Closely related to image processing are computer graphics and computer vision. In computer graphics, images are manually made from physical models of objects, environments, and lighting, instead of being acquired (via imaging devices such as cameras) from natural scenes, as in most animated movies. Computer vision, on the other hand, is often considered high-level image processing, in which a machine, computer, or piece of software attempts to decipher the physical contents of an image or a sequence of images (e.g., videos or 3D full-body magnetic resonance scans) [6-11].

In modern science and technology, images also take on a much broader scope due to the ever-growing importance of scientific visualization (of often large-scale, complex scientific or experimental data); examples include microarray data in genetic research and real-time multi-asset portfolio trading in finance. Image analysis is the extraction of meaningful information from images, mainly from digital images by means of digital image-processing techniques. Image analysis tasks can be as simple as reading bar-coded tags or as sophisticated as identifying a person from their face.
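Where the text above describes treating an image as a two-dimensional signal, with either an image or a set of parameters as output, the following minimal OpenCV sketch shows both kinds of output; the file name hand.jpg is a placeholder, not from the paper.

```python
# 2-D signal view of image processing with OpenCV (illustrative).
import cv2

img = cv2.imread("hand.jpg")                   # input: an image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # collapse 3 channels to 1
blur = cv2.GaussianBlur(gray, (5, 5), 0)       # standard 2-D filtering
# Output as an image: a binary mask from Otsu thresholding ...
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# ... or as parameters describing the image: number of detected regions.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print("regions found:", len(contours))
```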

    3. Deep Learning

In comparison to traditional algorithms, neural networks can solve moderately complex problems at a significantly lower degree of algorithmic complexity. A neural network is designed to emulate the neural functioning of the human brain, but with mathematical functions. The multi-layer network, as depicted in Figure 1, is one such neural network. It consists of three kinds of layers: the input layer, several hidden layers, and the output layer. Data passes through the input layer without being altered, the hidden layers process the data, and the output layer translates the hidden-layer activations into a classification output. Collecting datasets for training takes time, and as the number of configurations grows, so does the number of training samples required; most data in the world is not evenly distributed. Figure 2 shows an image recognition model based on neural networks [10-18].
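As a concrete illustration of the input/hidden/output structure described above, the following is a minimal multi-layer network sketch in Keras; the 64x64 grayscale input shape, the layer sizes, and the 26-class output are illustrative assumptions, not values from the paper.

```python
# Multi-layer network: input layer, hidden layers, classification output.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),          # input layer: data enters unchanged
    layers.Flatten(),
    layers.Dense(128, activation="relu"),    # hidden layers: process the data
    layers.Dense(64, activation="relu"),
    layers.Dense(26, activation="softmax"),  # output layer: one class per sign
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```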

3. EXISTING WORK

Gesture recognition follows two approaches: (i) vision based and (ii) glove based. In the vision-based approach, static gestures use hand poses, and the images are captured using cameras; the captured images are then analyzed using segmentation. The glove-based approach uses sensors or gloves, such as flex sensors and accelerometers, to identify the hand gesture. A segmentation sketch for the vision-based approach follows.
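For the vision-based approach, isolating the hand region is the usual first step before segmentation-based analysis. Below is a minimal sketch using an HSV skin-color range in OpenCV; the threshold values and file names are common illustrative defaults, not values from the paper.

```python
# Vision-based hand segmentation by skin color in HSV space (illustrative).
import cv2
import numpy as np

frame = cv2.imread("gesture.jpg")                # placeholder file name
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower_skin = np.array([0, 30, 60], dtype=np.uint8)
upper_skin = np.array([20, 150, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower_skin, upper_skin)  # 255 where pixel looks like skin
hand = cv2.bitwise_and(frame, frame, mask=mask)  # segmented hand region
cv2.imwrite("hand_segmented.jpg", hand)
```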

4. PROPOSED WORK

The hand gesture images are captured from a vision-based camera. A background-subtraction technique separates the hand from the background, and segmentation and classification techniques classify the finger postures. Classification is done using a deep learning algorithm, the convolutional neural network, which provides the label for the sign in real-time video streaming with an improved accuracy rate. A sketch of the background-subtraction step follows.
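The minimal sketch below uses OpenCV's MOG2 background model on a live camera stream; camera index 0 and the model parameters are assumptions.

```python
# Background subtraction: separate the moving hand from a static background.
import cv2

cap = cv2.VideoCapture(0)  # assumed default camera
subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=40)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)      # white where pixels changed
    fg_mask = cv2.medianBlur(fg_mask, 5)   # suppress speckle noise
    cv2.imshow("foreground (hand)", fg_mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```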

5. METHODOLOGY

In this system, the framework is split into two phases: a training phase and a testing phase. In the training phase, the finger values are trained; in the testing phase, the hand and fingers are captured. The CNN algorithm is then applied to classify the finger postures and label the signs. Fig. 5.1 shows the overall block diagram of the system. Arithmetic circuits are utilized in this methodology [13-15].

[Figure 5.1: Block diagram of the proposed system. Training phase: train the finger gestures and gesture details, and train the alphabets with sign postures. Testing phase: camera capture, foreground subtraction, finger-posture tracking, classification of finger gestures using the convolutional neural network algorithm, matching with the training sets, and sign recognition results.]

1. Training Phase

The landmark-based dataset is trained using Python machine-learning packages such as OpenCV, TensorFlow, and Keras, together with additional supporting packages. After the datasets are trained, the model file is generated in .p (pickle) format.
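A minimal sketch of this training step is shown below. The landmark arrays, file names, and the random-forest classifier are assumptions standing in for the paper's actual pipeline; only the .p (pickle) output format is taken from the text.

```python
# Training phase sketch: fit a classifier on hand-landmark features
# and save the model file in .p (pickle) format.
import pickle
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed inputs: one row of landmark coordinates per image, plus labels.
X = np.load("landmarks.npy")   # shape: (n_samples, n_landmark_values)
y = np.load("labels.npy")      # e.g. sign labels as integers

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier().fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

with open("model.p", "wb") as f:   # the .p model file named in the text
    pickle.dump(model, f)
```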

    2. Gesture Recognition

The input is captured through the camera and pre-processed to extract the foreground object, which is then processed to track the hand gesture and classify the finger postures. After classification, the finger postures are matched against the trained model file to predict the hand sign, as sketched below.
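The recognition step can be sketched as follows: load the pickled model from the training phase, extract landmark features from one captured frame, and match them against the trained model. MediaPipe is assumed here for landmark extraction; the paper does not name its tracking tool.

```python
# Recognition sketch: landmarks from one frame -> trained .p model -> sign.
import pickle
import cv2
import mediapipe as mp

with open("model.p", "rb") as f:
    model = pickle.load(f)

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)

cap = cv2.VideoCapture(0)   # assumed default camera
ok, frame = cap.read()
cap.release()

if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        features = [c for p in lm for c in (p.x, p.y)]  # 21 landmarks -> 42 values
        sign = model.predict([features])[0]             # match trained model file
        print("recognized sign:", sign)
```

The paper's voice alert could then be produced from the predicted label with a text-to-speech package such as pyttsx3 (also an assumption, not named in the text).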

6. RESULTS

CONCLUSION

Intelligent systems for sign language recognition continue to attract the interest of academic researchers and industrial practitioners, thanks to recent advances in machine learning and computational intelligence methodologies. This study gives a systematic examination of intelligent systems used in sign-language-recognition investigations between 2001 and 2021. Based on 649 full-length research publications obtained from the Scopus database, an overview of intelligent-based sign language recognition research trends is offered. The publishing trends of the articles collected from the Scopus database reveal that machine learning and intelligent technologies in sign language recognition have been expanding for the last 12 years. This study identifies and presents nations and academic institutions with a large number of publications and strong international collaborations, and is designed to provide researchers in nations with fewer collaborations an opportunity to broaden their research partnerships.

FUTURE ENHANCEMENT

The face region, comprising head movement, eye blinking, eyebrow movement, and mouth shape, should be studied further as a source of non-manual signs.

Words, alphabets, and numerals have all been the subject of various studies; however, additional research into sentence-level recognition in sign language is required in the future.

REFERENCES

[1] Camgoz, Necati Cihan, et al. (2020) Sign language transformers: Joint end-to-end sign language recognition and translation, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

[2] Cheok, Ming Jin, Zaid Omar. (2019) A review of hand gesture and sign language recognition techniques, International Journal of Machine Learning and Cybernetics, Vol. 10 (1), pp. 131-153.

[3] D.G. Enikeev, S.A. Mustafina, (2021) Sign language recognition through Leap Motion controller and input prediction algorithm, Journal of Physics: Conference Series, Vol. 1715 (1).

[4] F. Garcia-Lamont, J. Cervantes, A. López, L. Rodriguez, (2018) Segmentation of images by color features: A survey, Neurocomputing.

[5] Huang, Jie, et al. (2018) Video-based sign language recognition without temporal segmentation, Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 32.

[6] Li, Dongxu, et al. (2020) Transferring cross-domain knowledge for video sign language recognition, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

[7] Li, Dongxu, et al. (2020) Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison, Proceedings of the IEEE/CVF winter conference on applications of computer vision.

[8] M.A. Adegboye, A.M. Aibinu, J.G. Kolo, I. Aliyu, T.A. Folorunso, S.H. Lee (2020) Incorporating intelligence in fish feeding system for dispensing feed based on fish feeding intensity, IEEE Access, pp. 91948-91960.

[9] Ma, Yongsen, et al. (2018) SignFi: Sign language recognition using WiFi, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol. 2 (1), pp. 1-21.

[10] N. Aloysius, M. Geetha, (2020) A scale space model of weighted average CNN ensemble for ASL fingerspelling recognition, International Journal of Computational Science and Engineering, Vol. 22 (1), pp. 154-161.

[11] Pu, Junfu, Wengang Zhou, and Houqiang Li. (2019) Iterative alignment network for continuous sign language recognition, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition.

[12] R.H. Abiyev, M. Arslan, J.B. Idoko, (2020) Sign language translation using deep convolutional neural networks, KSII Transactions on Internet and Information Systems, Vol. 14 (2), pp. 631-653.

[13] P. Anguraj and T. Krishnan, (2021) Design and implementation of modified BCD digit multiplier for digit-by-digit decimal multiplier, Analog Integrated Circuits and Signal Processing, pp. 1-12.

[14] T. Krishnan, S. Saravanan, A. S. Pillai, and P. Anguraj, (2020) Design of high-speed RCA based 2-D bypassing multiplier for FIR filter, Materials Today: Proceedings, doi: 10.1016/j.matpr.2020.05.803.

[15] T. Krishnan, S. Saravanan, P. Anguraj, and A. S. Pillai, (2020) Design and implementation of area efficient EAIC modulo adder, Materials Today: Proceedings, Vol. 33, pp. 3751-3756.

[16] S.C. Agrawal, A.S. Jalal, R.K. Tripathi, (2016) A survey on manual and non-manual sign language recognition for isolated and continuous sign, International Journal of Applied Pattern Recognition, Vol. 3 (2), pp. 99.

[17] Sincan, Ozge Mercanoglu, et al. (2021) ChaLearn LAP large scale signer independent isolated sign language recognition challenge: Design, results and future research, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

[18] Wadhawan, Ankita, and Parteek Kumar. (2021) Sign language recognition systems: A decade systematic literature review, Archives of Computational Methods in Engineering, Vol. 28 (3), pp. 785-813.