A Review Paper on Sign Language Recognition System For Deaf And Dumb People using Image Processing

DOI : 10.17577/IJERTV5IS031036


  • Open Access
  • Total Downloads : 1556
  • Authors : Manisha U. Kakde, Mahender G. Nakrani, Amit M. Rawate
  • Paper ID : IJERTV5IS031036
  • Volume & Issue : Volume 05, Issue 03 (March 2016)
  • DOI : http://dx.doi.org/10.17577/IJERTV5IS031036
  • Published (First Online): 28-03-2016
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License: Creative Commons License This work is licensed under a Creative Commons Attribution 4.0 International License


A Review Paper on Sign Language Recognition System For Deaf And Dumb People using Image Processing

Manisha U. Kakde1

Electronics & Telecommunication Department, Chh. Shahu Engineering College,

Aurangabad (MS), India

Mahender G. Nakrani2

Assistant Professor,

Electronics & Telecommunication Department, Chh. Shahu Engineering College,

Aurangabad (MS), India

Amit M. Rawate3

Head of Department

Electronics & Telecommunication Department, Chh. Shahu Engineering College,

Aurangabad (MS), India

Abstract: Communication between deaf-mute and normal persons has always been a challenging task. This paper reviews the different methods adopted to reduce this communication barrier by developing assistive devices for deaf-mute persons. Although a number of assistive tools already exist, advances in embedded systems open up room to design and develop a sign language translator system to assist dumb people. The main objective is a real-time embedded device that helps physically challenged people communicate effectively.

Keywords: Sign language identification, Hidden Markov Model, Artificial Neural Network, Data glove, Leap Motion controller, Kinect sensor.

  1. INTRODUCTION

    Sign language is a system of communication used by deaf and dumb people. Those who know sign language can communicate with them properly, but untrained people cannot, because communicating with a dumb person requires training in sign language. A sign-language-to-text system would therefore help impaired people communicate with normal people more fluently.

    Sign language is a physical action using the hands and eyes through which we can communicate with dumb and deaf people. They can express their feelings with different hand shapes and movements. The task is to convert those shapes, i.e. the sign language, into text or speech.

    Due to advancements in the field of image processing, automatic sign language converter systems have been developed, and a few researchers have built tools that help convert sign language into text or speech. Research in this field is broadly categorized in two ways: data glove and image processing. In a data glove system, the user needs to wear a glove fitted with flex sensors, an accelerometer, and a motion tracker. The sensor output signals are sent to a computer, which processes and analyzes the gesture and converts it into text or speech.

    In image processing, the image is captured through a web camera. The rest of this paper is organized as follows: section 2 presents a literature survey of sign acquiring methods, section 3 describes the methods for sign identification, and section 4 concludes the paper.

  2. LITERATURE SURVEY

    2.1 Sign Acquiring Methods:

    1. Leap Motion

      The Leap Motion controller (figure 1) is a sensor which detects hand movement and converts it into computer commands. It consists of two IR cameras and three infrared LEDs: the LEDs emit IR light, and the cameras capture 300 frames per second of reflected data. These frames are sent to the computer through a USB cable for further processing.

      Figure 1: Leap motion controller with USB

      P. Karthick et al. [1] used a model that transforms Indian sign language into text using the Leap Motion. The Leap device detects data such as point, wave, reach, and grab gestures generated by the controller. A combination of the DTW and IS algorithms was used to convert the hand gestures into text, and a neural network was used for training the data.
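As an illustration of the DTW step, the sketch below computes a dynamic time warping distance between two 1-D gesture trajectories. The trajectories and the absolute-difference cost are made-up examples for illustration, not the authors' implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D gesture signals."""
    n, m = len(a), len(b)
    # cost[i][j] = best cumulative cost of aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch a
                                 cost[i, j - 1],      # stretch b
                                 cost[i - 1, j - 1])  # step both
    return cost[n, m]

# Two trajectories tracing the same motion at different speeds
slow = [0.0, 0.0, 1.0, 2.0, 2.0, 3.0]
fast = [0.0, 1.0, 2.0, 3.0]
print(dtw_distance(slow, fast))                  # 0.0: identical up to timing
print(dtw_distance(slow, [3.0, 2.0, 1.0, 0.0]))  # larger: reversed motion
```

Because DTW aligns sequences nonlinearly in time, the same gesture performed slowly or quickly maps to a small distance, which is why it suits gesture-to-text matching.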

      Leigh Ellen Potter et al. [2] used the Leap Motion controller for recognition of Australian sign language. The controller senses the hand movement and converts it into computer commands, and an artificial neural network is used for training the symbols. The disadvantages of that system were low accuracy and fidelity.

    2. Kinect Sensor

    Kinect is a Microsoft motion sensor for the Xbox 360 gaming console, shown in figure 2. It consists of an RGB camera, a depth sensor, and a multi-array microphone, and it can recognize facial movement and speech.

    Figure 2: Kinect for Xbox 360

    Cao Dong et al. [3] used the Microsoft Kinect to recognize American Sign Language: the Kinect depth camera was used to detect the ASL alphabet. A distance-adaptive scheme was used for feature extraction, and support vector machine and random forest classifier algorithms were used for classification. Training of the data was done using an ANN. The accuracy of the system was 90%. Yuan Yao et al. [4] used the Kinect sensor for recognition of hand gestures. It first detects the hand movement and matches it against a contour model; the second task is to locate a multi-colour glove and detect the different colour regions. A Gaussian colour model was used for training the data and a per-pixel classifier for classification. This system has one drawback, namely its limited accuracy.

    3. Data Glove

    This method uses different sensors to detect the hand gesture signal. The hand gesture signal is analog, so an ADC is used to convert it into digital form. The glove consists of flex sensors and an accelerometer; the flex sensors detect the bend signal [5]. Figure 3 shows the data glove.

    Figure 3: Data glove with flex sensors

    Anbarasi Rajamohan et al. [6] used a data glove based method for recognition of American Sign Language. The system consists of flex sensors, an accelerometer, and a tactile sensor, which detect the hand gesture so that it can be converted into a code. The accuracy of that system was 90%.

    4. Vision Based

    In this method a web camera is used to capture images. After that, image segmentation is done and features such as the palm and fingers are extracted from the input image. Different hand postures, i.e. half closed, fully closed, and semi closed, are detected. The data is saved in a vector, and that vector is used for recognition of the alphabets [7].

    Paulo Trigueiros et al. [8] used a vision based technique for recognition of Portuguese sign language. In their implementation, hand gestures were captured in real time and an SVM algorithm was used for classification. In this system vowels were recognized with 99.4% accuracy and consonants with 99.6% accuracy.

    Figure 4: Sample of vision based technique

    Generally, while capturing images for experiments, head movement gets mixed in with the hand images. To solve this overlap between hand and head movement, the camera is mounted above the signer [9], but face and body gestures are then lost. Sanchez-Nielsen et al. [10] used fewer hand gestures for a faster recognition process.

  3. METHODS FOR SIGN IDENTIFICATION SYSTEM

    1. Artificial Neural Network
      An artificial neuron is a computational model inspired by natural neurons. The advantages of an ANN are its accuracy and generality: it has the ability to learn relationships from the modeled data and at the same time to recognize their constraints [11]. In [12], static hand gestures of Arabic sign language are recognized using two recurrent neural networks, i.e. a partially recurrent network and a fully recurrent network.

      In this work, the input image was captured through a digital camera, with colored gloves worn on the hands. The HSI color model was used for segmentation. After that, training and testing of the images was done. The fully recurrent network performed better than the partially recurrent network.
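For reference, HSI-based segmentation starts from the standard RGB-to-HSI conversion. A minimal per-pixel sketch (input values normalized to 0-1; this is a generic formula, not the code from [12]):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (components in 0-1) to HSI (hue in degrees)."""
    i = (r + g + b) / 3.0                       # intensity: mean of the channels
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i   # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    # clamp guards against tiny floating-point excursions outside [-1, 1]
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                                   # hue lies in the lower half-circle
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red: hue 0, full saturation
```

Segmenting a colored glove then reduces to thresholding each pixel's hue against the glove's known color range.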

      A real time 2D tracking system [13] was used for recognition of the Myanmar alphabet language. Tin Hninn Maung implemented this system to recognize hand gestures for MAL. The input images are digitized photographs, processed in Adobe Photoshop to recognize the edges of the images. A histogram is used for feature extraction, and a neural network is used for further processing.
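A histogram feature vector of the kind mentioned above can be sketched as follows; the bin count and toy image are assumptions for illustration:

```python
import numpy as np

def histogram_features(gray, bins=16):
    """Normalized intensity histogram of a grayscale image as a feature vector."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    # normalize so images of different sizes yield comparable vectors
    return hist / hist.sum()

# A toy 4x4 "image": half dark pixels, half bright pixels
img = np.array([[0, 0, 255, 255]] * 4, dtype=np.uint8)
feat = histogram_features(img)
print(feat[0], feat[-1])  # 0.5 0.5: mass split between darkest and brightest bins
```

The resulting fixed-length vector is what gets fed to the neural network, regardless of the original image resolution.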

      To recognize hand gestures for Japanese sign language, an MLP neural network was used. Input was taken from a data glove interface and fed to the MLP network, and the data was then trained and tested. The major drawback of this system is that the data glove was unable to measure gesture direction. Shiga used this system for JSL [14].
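As a sketch of how glove readings could feed an MLP (not the system in [14]; the sensor values and gesture classes below are invented for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical flex-sensor readings, one value per finger
# (0 = straight, 1 = fully bent), for two made-up gestures.
rng = np.random.default_rng(0)
open_hand = rng.normal(0.1, 0.05, size=(50, 5))   # class 0: open hand
fist      = rng.normal(0.9, 0.05, size=(50, 5))   # class 1: fist
X = np.vstack([open_hand, fist])
y = [0] * 50 + [1] * 50

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[0.05, 0.10, 0.08, 0.12, 0.07],    # nearly straight fingers
                   [0.95, 0.90, 0.88, 0.92, 0.91]]))  # all fingers bent
# -> [0 1]
```

Note that, exactly as the text observes, nothing in these five bend values encodes the direction of the gesture; that limitation is inherent to the input representation, not to the MLP.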

      Gonzalo et al. [15] implemented a real time hand gesture recognition system using continuous time recurrent neural networks. A wireless mouse and a tri-axial accelerometer were used for capturing the hand gestures, and a genetic algorithm was used.

    2. Hidden Markov Model

    Liang et al. [16] implemented two HMM models for a continuous recognition system for Taiwanese sign language using a data glove. The system includes grammar and semantics for matching sentences. The main aim of the model is to estimate the probability of a sequence of movements, which increases the recognition rate.
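Estimating the probability of a movement sequence under an HMM is done with the forward algorithm; a minimal sketch with a made-up two-state model (all probabilities are illustrative, not from [16]):

```python
import numpy as np

def sequence_probability(pi, A, B, obs):
    """Forward algorithm: probability that the HMM emits the observation sequence."""
    alpha = pi * B[:, obs[0]]           # state probabilities after the first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate through transitions, then emit
    return alpha.sum()

# Toy model: hidden states "hand raised"/"hand lowered",
# observations 0 = "high position", 1 = "low position"
pi = np.array([0.5, 0.5])               # initial state distribution
A = np.array([[0.9, 0.1],               # transition probabilities
              [0.1, 0.9]])
B = np.array([[0.8, 0.2],               # emission probabilities
              [0.2, 0.8]])
print(sequence_probability(pi, A, B, [0, 0, 1]))
```

A recognizer scores each candidate sign's model this way and picks the model that assigns the observed movement sequence the highest probability.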

    For British sign language recognition using Markov chains in combination with independent component analysis [17], data was captured through an imaging technique, and feature extraction was used to extract the motion and shape of the hands.

    Tanibata et al. [18] proposed an HMM for an isolated-word JSL recognition system. The Baum-Welch algorithm was used to model the parallel left and right hand data, and the Viterbi algorithm was used for verification.
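The Viterbi step recovers the most likely hidden-state sequence behind the observations; a compact sketch with an invented two-state model (illustrative only, not the model from [18]):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely hidden-state sequence for an observation sequence."""
    delta = pi * B[:, obs[0]]              # best score ending in each state
    back = []                              # backpointers, one row per step
    for o in obs[1:]:
        scores = delta[:, None] * A        # scores[i, j]: come from i, go to j
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) * B[:, o]
    path = [int(delta.argmax())]           # trace the best path backwards
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
B = np.array([[0.8, 0.2],
              [0.2, 0.8]])
print(viterbi(pi, A, B, [0, 0, 1, 1]))  # -> [0, 0, 1, 1]
```

Whereas the forward algorithm sums over all state paths to score a whole sequence, Viterbi keeps only the single best path, which is what makes it usable for verification.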

    A multilayer architecture for signer-independent CSL recognition combines DTW and HMM. To resolve the confusion sets in the vocabulary space, DTW/ISODATA algorithms are used [19]. The recognition accuracy was greater than that of the HMM based recognition system.

    Vogler [20] proposed a system for recognition of American sign language using parallel hidden Markov models. In this system only phonemes were used for continuous recognition. Two channels are used, one for the left hand and the other for the right hand, and each word is divided into fundamental phonemes, just as words are in speech recognition. The accuracy of that model was high.

  4. CONCLUSION

In this review paper, different techniques of sign language recognition are reviewed on the basis of sign acquiring methods and sign identification methods. Among sign acquiring methods, vision based approaches, and among sign identification methods, artificial neural networks, prove the strongest candidates.

REFERENCES

  1. P. Karthick, N. Pratibha, V. B. Rekha, S. Thanalaxmi, Transforming Indian Sign Language into Text Using Leap Motion, International Journal of Innovative Research in Science, Engineering and Technology, pp. 10906-10908, 2014.

  2. Leigh Ellen Potter, Jake Araullo, Lewis Carter, The Leap Motion Controller: A View on Sign Language, Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration, pp. 1-4, 2013.

  3. Cao Dong, Ming C. Leu, Zhaozheng Yin, American Sign Language Alphabet Recognition Using Microsoft Kinect, IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015.

  4. Yuan Yao, Yun Fu, Contour Model based Hand-Gesture Recognition Using Kinect Sensor, IEEE Transactions on Circuits and Systems for Video Technology, pp. 1-6, 2013.

  5. Prakash B. Gaikwad, V. K. Bairagi, Hand Gesture Recognition for Dumb People using Indian Sign Language, International Journal of Advanced Research in Computer Science and Software Engineering, pp. 193-194, 2014.

  6. Anbarasi Rajamohan, Hemavathy R., Dhanalakshmi M., Deaf-Mute Communication Interpreter, International Journal of Scientific Engineering and Technology, pp. 336-337, 2013.

  7. Shangeetha R. K., Valliammai V., Padmavathi S., Computer Vision Based Approach for Indian Sign Language Character Recognition, IEEE Journal on Information Technology, p. 181, 2012.

  8. Paulo Trigueiros, Fernando Ribeiro, Luis Paulo Reis, Vision Based Portuguese Sign Language Recognition System, New Perspectives in Information Systems and Technologies, Volume 1, Advances in Intelligent Systems and Computing, Springer International Publishing Switzerland, pp. 605-608, 2014.

  9. H. Brashear, T. Starner, P. Lukowicz, H. Junker, Using Multiple Sensors for Mobile Sign Language Recognition, Proceedings of the Seventh IEEE International Symposium on Wearable Computers, pp. 45-47, 2003.

  10. Elena Sanchez-Nielsen, Luis Anton-Canalis, Mario Hernandez-Tejera, Hand Gesture Recognition for Human Machine Interaction, Journal of WSCG, Vol. 12, pp. 1-4, 2004.

  11. S. K. Yewale, Artificial Neural Network Approach for Hand Gesture Recognition, International Journal of Engineering Science and Technology, pp. 2603-2605, 2011.

  12. M. Maraqa, R. Abu-Zaiter, Recognition of Arabic Sign Language using Recurrent Neural Networks, Applications of Digital Information and Web Technologies, ICADIWT 2008, pp. 478-481, 2008.

  13. Tin Hninn Hninn Maung, Real Time Hand Tracking and Gesture Recognition System Using Neural Networks, Proceedings of World Academy of Science, Engineering and Technology, pp. 466-470, 2009.

  14. Machacon H. T., Shiga S., Recognition of Japanese Finger Spelling Gestures Using Neural Networks, Journal of Medical Engineering and Technology, pp. 254-260, 2010.

  15. Gonzalo Bailador, Daniel Roggen, Gerhard Troster, Real Time Gesture Recognition Using Continuous Time Recurrent Neural Networks, Proceedings of the ICST 2nd International Conference on Body Area Networks, Article No. 15, 2007.

  16. Rung-Huei Liang, Ming Ouhyoung, A Real Time Continuous Gesture Recognition System for Sign Language, Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, pp. 558-561, 1998.

  17. Richard Bowden, David Windridge, Timor Kadir, Andrew Zisserman, Michael Brady, A Linguistic Feature Vector for the Visual Interpretation of Sign Language, 8th European Conference on Computer Vision, Prague, Czech Republic, Proceedings, Part 1, pp. 390-396, 2004.

  18. Nobuhiko Tanibata, Nobutaka Shimada, Yoshiaki Shirai, Extraction of Hand Features for Recognition of Sign Language Words, Computer-Controlled Mechanical Systems, Graduate School of Engineering, Osaka University.

  19. Xiaoyu Wang, Feng Jiang, Hongxun Yao, DTW/ISODATA Algorithm and Multilayer Architecture in Sign Language Recognition with Large Vocabulary, Proceedings of the 2008 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP 08), pp. 1399-1400, 2008.

  20. Christian Vogler, Dimitris Metaxas, Handshapes and Movements: Multiple-Channel ASL Recognition, Springer-Verlag Berlin Heidelberg, pp. 247-251, 2004.
