Communication through the Recognition of the Sign Language

DOI: 10.17577/IJERTCONV11IS08001



B S Vinay Krishna1, Kruthika K P2, Prerana Chaithra3, Mulumudi Sunitha4

1Student, Department of ISE, Sapthagiri College of Engineering, VTU, Bengaluru, India

2Student, Department of ISE, Sapthagiri College of Engineering, VTU, Bengaluru, India

3 Associate Professor, Department of ISE, Sapthagiri College of Engineering, VTU, Bengaluru, India

4 Faculty, CSE Department, Vignana Bharathi Institute of Technology, Hyderabad, India

Abstract — Communication is an essential part of living in society. Hearing people communicate by speaking to each other, but for people who are deaf and mute, communicating with hearing people is extremely difficult. One way they can communicate is through sign language, which is used by people with hearing and speech impairments and is their only mode of contact with the wider community. Different countries use different sign languages. Many technologies now exist to help these people communicate; one of them is deep learning.

Keywords — communication, sign language, deep learning

  1. INTRODUCTION

    Almost 5% of the world's population is affected by hearing and speech impairment. Deaf and mute people can use different sign gestures to communicate, while for people who are visually impaired, sound-based cues can be used instead. As we all know, communication is one of the most important aspects of our lives: we convey our emotions, feelings and happiness to one another through language. But there are specially-abled people who are deaf and mute, and they can communicate only through sign languages. One way to create equal opportunity for specially-abled people is to build on their strengths.

    Many countries use their own sign languages, for example American Sign Language (ASL) and Indian Sign Language (ISL), which differ from each other: in the former both hands are used to communicate, while in the latter only one hand is used. Specially-abled people can be educated in these languages, but a communication gap remains, and in this paper we have tried to bridge that gap so that sign language users and hearing people can easily understand each other. Indian Sign Language has received less attention than American Sign Language. Many technologies have evolved for this task; some of them use gloves and Kinects, which help in recognizing gestures from captured photographs. Another method is the convolutional neural network (CNN), which is used to classify the captured images of signs made by specially-abled people and is one of the faster approaches to image classification.
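To make the convolution operation at the heart of a CNN concrete, the following minimal NumPy sketch (an illustration, not the actual network used in this paper) slides a 3x3 edge-detecting kernel over a toy grayscale image to produce a feature map:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation of a grayscale image with a kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is the weighted sum of one image window
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 "image" with a vertical edge, and a Sobel-like vertical-edge kernel
img = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=float)

feature_map = conv2d(img, kernel)
print(feature_map.shape)  # (3, 3)
```

A real CNN learns the kernel weights during training and stacks many such feature maps with nonlinearities and pooling; the mechanics of each layer, however, are exactly this sliding window.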

    Sign language recognition, as shown in figure 1, covers communication through hand, facial and body gestures. According to the survey, almost 96% accuracy can be achieved. The key goal of the system is to recognize these sign languages with maximum accuracy regardless of light and dark conditions.

    Figure 1: Indian Sign Language using hand gestures

  2. LITERATURE SURVEY

    This section reviews the frameworks used and the attempts that have been made to tackle sign language recognition through videos, images and various algorithms. The researchers in [1] suggest using filters in the algorithm, since the current system has faced problems with skin-tone identification. With this approach, sign language conversion accuracy of almost 96% can be achieved.

    The research in [2] used a dataset of forty common words and ten thousand sign language images. To locate the signs in these images easily and quickly, an R-CNN with an embedded RPN module is used. This improves both performance and accuracy, raising it from 89% to 91.7%.

    The research in [3] used a YCbCr skin model to identify skin colour in hand gestures. They collected a dataset of 23 static Indian Sign Language alphabets and also used 25 videos. The experiment achieved almost 94.4% accuracy for static gestures and almost 86.4% for dynamic ones.
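A YCbCr skin model of the kind used in [3] can be sketched as follows. The conversion coefficients are the standard ITU-R BT.601 ones; the threshold ranges are common heuristic values from the skin-detection literature, not necessarily those chosen by the authors:

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Boolean mask of likely skin pixels from an (H, W, 3) uint8 RGB image.
    Skin tends to cluster in a narrow Cb/Cr band regardless of brightness (Y)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # ITU-R BT.601 chroma components
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Heuristic skin cluster bounds (illustrative values)
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# A skin-toned pixel next to a green-background pixel
patch = np.array([[[200, 140, 120], [0, 200, 0]]], dtype=np.uint8)
mask = skin_mask_ycbcr(patch)
print(mask)  # [[ True False]]
```

Thresholding in YCbCr rather than RGB makes the mask far less sensitive to lighting, since brightness is isolated in the Y channel that the test ignores.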

    In the low-cost approach, all the images were captured against a green-screen background [5], so that during image processing the green colour can be subtracted easily using the RGB colour space, converting the images to black and white. Using a centroid methodology, this approach correctly recognized 92% of the images.
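A minimal sketch of the green-screen subtraction and centroid idea from [5]; the dominance margin and the toy frame below are illustrative assumptions, not the authors' actual values:

```python
import numpy as np

def binarize_green_screen(rgb, margin=30):
    """Pixels where green clearly dominates red and blue are background;
    everything else is kept as foreground (the hand) in a black-and-white mask."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    background = (g > r + margin) & (g > b + margin)
    return ~background  # True = foreground

def centroid(mask):
    """Centre of mass (row, col) of the foreground pixels."""
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

# 4x4 frame: green background with a 2x2 "hand" of skin-ish pixels
frame = np.full((4, 4, 3), (0, 220, 0), dtype=np.uint8)
frame[1:3, 1:3] = (190, 130, 110)
fg = binarize_green_screen(frame)
print(centroid(fg))  # (1.5, 1.5)
```

The centroid gives a single stable point to track, which is why it is attractive for low-cost gesture recognition: the hand's position can be followed from frame to frame without any learned model.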

    Another research effort used transfer learning [6]. They drew on the ImageNet and Kinetics datasets, and for training they used two further datasets, UCF-101 and HMDB-51. The result of this research was that UCF-101 gave almost 98% accuracy and HMDB-51 almost 80.9%.

  3. PROPOSED SYSTEM

    In the proposed system, as shown in figure 2, the first step is to collect the data. Many researchers have used cameras or sensors to capture hand movements; web cameras can also be used to capture the images. These images then undergo certain operations to detect the background and eliminate it using an extraction algorithm.

    Figure 2: Proposed Methodology

    i. Image Acquisition:

    Images are a very good medium for specially-abled people to convey and receive information. The captured image can be recognized using algorithms such as a Convolutional Neural Network (CNN), implemented with software such as Python, NumPy, TensorFlow and others. The background can be eliminated by selecting colour limits around a person's hand; by removing the background colour around the hand area, recognition problems can be corrected.

    ii. Segmentation:

    Segmentation is the process of separating the signs from the captured image. Skin texture and background subtraction are both used in this process. The location of the moving hand must be detected to identify the gestures.
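The hand-movement detection mentioned above can be approximated with simple frame differencing; this is a generic sketch, and the change threshold is an arbitrary illustrative value:

```python
import numpy as np

def motion_mask(prev_gray, curr_gray, thresh=25):
    """Mark pixels whose intensity changed by more than `thresh` between
    two consecutive grayscale frames; moving regions light up in the mask."""
    diff = np.abs(curr_gray.astype(int) - prev_gray.astype(int))
    return diff > thresh

prev = np.zeros((4, 4), dtype=np.uint8)   # empty first frame
curr = prev.copy()
curr[1:3, 1:3] = 200                      # the "hand" appears in frame two
moved = motion_mask(prev, curr)
print(moved.sum())  # 4
```

Combining this motion mask with the skin mask from the image-acquisition step narrows the search down to skin-coloured regions that are actually moving, which is where the signing hand is most likely to be.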

    iii. Feature Extraction:

      Features are the elements that are crucial for sign language recognition; choosing them well reduces the time required without harming accuracy. Many features have been used, such as hand shape, hand texture, distance of the hand and orientation, and geometric features such as finger detections and finger tips can also be recognized. Principal Component Analysis (PCA) reduces the dimensionality of these features; it works better for static gestures than for dynamic ones.
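The PCA step can be sketched in a few lines of NumPy; the 20-dimensional hand-feature vectors below are hypothetical stand-ins for whatever descriptors the extraction stage produces:

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 hypothetical feature vectors of length 20 (e.g. hand-shape descriptors)
X = rng.normal(size=(100, 20))

# Centre the data, then find the principal directions via SVD
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the top-k components: a 20-D vector becomes a 5-D one
k = 5
X_reduced = Xc @ Vt[:k].T
print(X_reduced.shape)  # (100, 5)
```

The rows of `Vt` are ordered by explained variance, so keeping only the first `k` discards the directions along which the features vary least, which is what cuts classification time without disturbing accuracy much.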

    iv. Classifier:

      Vision-based classifiers are user-friendly because they do not require the signer to wear gloves. They depend on the position of the camera and on the distance of the signer from the camera. For real-time performance they have to balance accuracy against complexity.

    v. Preprocessor:

    Preprocessing means processing the pictures or frames with operations such as erosion and dilation in order to reject unwanted aspects of the image. The colour image against the green background can be used to reduce the amount of data. All of the above steps are depicted in figure 3.

    Figure 3: (a) original image, (b) image after detecting the skin colour, (c) morphological operations and binarization, (d) image after extraction from the background, (e) after morphological operations, (f) image obtained by combining images (c) and (e).
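The erosion and dilation operations used in preprocessing can be sketched in pure NumPy as below; a 3x3 square structuring element is assumed, and in practice a library routine such as OpenCV's cv2.erode/cv2.dilate would be used instead:

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element:
    a pixel becomes True if any of its 8 neighbours (or itself) is True."""
    padded = np.pad(mask, 1)  # pad with False so edges have neighbours
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def erode(mask):
    """Binary erosion via duality: erode(m) == NOT dilate(NOT m)."""
    return ~dilate(~mask)

m = np.zeros((5, 5), dtype=bool)
m[1:4, 1:4] = True                       # a 3x3 foreground block
print(erode(m).sum(), dilate(m).sum())   # 1 25
```

Erosion strips isolated noise pixels off the binary hand mask, while a following dilation restores the hand's bulk; applied in that order (an "opening") they clean the mask without shifting the hand's outline much.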

  4. RESULTS

The technology we have used improves accessibility and provides a solution for specially-abled people.

Figure 4: Hand gesture prediction

In figure 4 the hand gestures for best of luck have been predicted, and the output text is best of luck.

Figure 5: Predicted text

In figure 5 the text predicted from the hand gestures is 0, 1, 2, 3 and the output is 1, 2, 3.


  5. CONCLUSION

In this paper our aim was to use technology to help specially-abled people communicate more easily using sign languages under different conditions. We used a CNN for the classification of images. This approach can readily be used by deaf and mute people, for whom speech is recognized through hand gestures.

  6. REFERENCES

[1] Areesha Gul, Batool Zehra, Sadia Shah, Nazish Javed, Muhammad Imran Saleem, "Two-way Smart Communication System for Deaf and Dumb and Normal People", International Conference on Information Science and Communication Technology, 2020.

[2] Prof. Radha S. Shirbhate, Mr. Vedant D. Shinde, Ms. Sanam A. Metkari, Ms. Pooja U. Borkar, Ms. Mayuri A. Khandge, "Sign Language Recognition Using Machine Learning Algorithm", IRJET, Vol. 7, Issue 3, March 2020.

[3] Nabeel Siddiqui, Rosa H. M., "Hand Gesture Recognition Using Multiple Acoustic Measurements at Wrist", IEEE Transactions on Human-Machine Systems, Vol. 51, No. 1, February.

[4] Reddygari Sandhya Rani, R. Rumana, R. Prema, "A Review Paper on Sign Language Recognition for The Deaf and Dumb", IJERT, Volume 10, Issue 10, October 2021.

[5] Razieh Rastgoo, Sergio Escalera, "Sign Language Recognition: A Deep Survey", Expert Systems with Applications, 2021.

[6] Jatinder Kaur, Sandeep Raj, "Segmentation and Classification of Hand Symbol Images Using Classifiers", Trends in Deep Learning Methodologies, 2021.

[7] A. Sunitha Nandhini, Shiva Roopan D., Shiyaam S., Yogesh S., "Sign Language Recognition Using Convolutional Neural Network", Journal of Physics: Conference Series, 2021.

[8] Mehreen Hurroo, Mohammad Elham, "Sign Language Recognition System using Convolutional Neural Network and Computer Vision", Volume 09, Issue 12, December 2020.
