Communication Interface for Mute and Hearing Impaired People

DOI : 10.17577/IJERTV3IS050601


*Garima Rao, *Laksh Narang, *Abhishek Solanki, *Kapil Singh, Mrs. *Karamjit Kaur, Mr. *Neeraj Gupta

*Amity University Haryana

Abstract – Sign language is an important tool used by mute and hearing impaired people to communicate. To ensure seamless interaction between hearing impaired/mute people and society without a translator, understanding of sign language by one and all is a must. To overcome this limitation, a human machine interface that recognizes sign language would have a significant impact on the social life of deaf and mute people. In the proposed work, a state-of-the-art interface is demonstrated which recognizes sign language through flex sensors and converts it into text and voice. The system employs a PIC16F887 microcontroller, an APR9600 voice module and flex sensors. The simulation has been carried out in Proteus and the hardware developed accordingly. The system has been tested for various signs and optimum results have been obtained.

Keywords: embedded system, flex sensors, Proteus, voice module

  1. INTRODUCTION

    To ensure seamless interaction between hearing impaired/mute people and society without a translator, understanding of sign language by one and all is a must. In order to lower this barrier, the design of human machine interfaces has drawn great research attention. Enormous work has been reported so far, introducing various aids to convert sign language into text and voice messages. Some prominent solutions include B-Spline approximation, Real Time Continuous gesture recognition (RTC), Motion Tracking Network (MTN), etc. B-Spline [1] is a vision-based recognition system for Indian sign language alphabets and numerals; the algorithm approximates the boundary extracted from the region of interest to a B-Spline curve, taking the maximum curvature points (MCPs) as the control points. A very large vocabulary sign language interpreter has been presented with real-time continuous gesture recognition using a data glove; end-point detection in a stream of gesture input is first solved, and statistical analysis is then done according to four parameters in a gesture: posture, position, orientation and motion. Its recognition rate is low, below 90% [2]. A technology for hand gesture recognition based on thinning of the segmented image works suitably for static letters of the American Sign Language; the drawback of this system is that it does not give good results under poor background light conditions [3]. SLARTI uses a robotic glove and a system based on magnetic fields for data acquisition, but its implementation cost is very high. The University of Central Florida gesture recognition system uses a webcam and computer vision techniques to collect the data and a neural network to classify shapes [4]; for recognition, this system requires wearing a specially colored glove to facilitate the imaging process. ASLR [5] uses a webcam and computer vision techniques to collect field data and classify signs. Despite these developments, several aspects remain to be explored to make such systems simpler, more user friendly and more economical. Keeping in view the practical implementation and the challenges in designing a machine interface to decode sign language, an effort has been put into the present design. The paper is organized as follows:

    Section 1 describes the simulation of the proposed work. The result obtained through simulation was satisfactory enough to initiate the hardware design.

    Section 2 gives a detailed description of the hardware design, followed by the results obtained and possible improvements in the proposed work.

  2. DESIGN DEVELOPMENT AND FABRICATION

    Section 1

    Prior to the hardware design, the network was simulated in Proteus ISIS 7.6.8741. This simulation software provides professional tools for simulating embedded systems.

    Figure 1 shows the simulation: the flex sensor is modelled as a variable resistor and given as input to the PIC microcontroller, and the corresponding output is shown on the LCD.

    Figure 1: Simulation of the desired hardware design in Proteus (flex-sensor input, PIC microcontroller, LCD, and APR9600 speaker output)

    For the experimental setup, the design flow and block diagram are shown in Figure 2 and Figure 3 respectively.

    Figure 2: Design flow (glove sensor: angle variation with folding fingers → resistance varies and is given to the microcontroller → signal acquisition → analog-to-digital converter → PIC microcontroller → signal conditioning; +5 V power supply)

    Figure 3: Block diagram (4 flex sensors → PIC16F887 microcontroller for processing → signal processing → LCD and voice output)

    Section 2

    The major components used in the hardware design are the flex sensors, the PIC16F887 microcontroller, the APR9600 voice module and a 16×2 LCD. Figure 4 shows the complete hardware setup with "no symbol" detected on the LCD display.

    Figure 4: Hardware Setup


    The input is provided by the flex sensor, whose electrical resistance changes with bending: the more the bend, the higher the resistance. A voltage divider is implemented using a 10 kΩ resistor in series with the flex sensor. Figure 5 shows the voltage divider rule being applied to the flex sensors; the output voltage is fed to the ADC channels of the PIC microcontroller. With R1 the fixed 10 kΩ resistor and R2 the flex sensor, the output voltage is given by

    Vout = Vin × R1/(R1 + R2)
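The divider above can be sketched in a few lines of C. This is a minimal model, not code from the paper; it assumes Vin = 5 V and that the output is taken across the fixed 10 kΩ resistor R1, which is consistent with the falling voltages of Table 1 as the flex resistance R2 grows with bending.

```c
/* Voltage-divider front end for one flex sensor (illustrative model).
   Assumptions: Vin = 5 V; output taken across the fixed 10 kOhm
   resistor R1, so Vout falls as the flex resistance R2 increases. */

#define VIN_VOLTS      5.0
#define R1_FIXED_OHMS  10000.0

/* Vout = Vin * R1 / (R1 + R2), with R2 the flex-sensor resistance. */
double divider_vout(double r_flex_ohms) {
    return VIN_VOLTS * R1_FIXED_OHMS / (R1_FIXED_OHMS + r_flex_ohms);
}
```

For the flat-sensor resistance of about 10.5 kΩ this model gives roughly 2.4 V, close to the 2.5 V reported in Table 1, and for the fully bent 30.46 kΩ it gives about 1.24 V against the reported 1.25 V.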

    Figure 5: Working of the flex sensor

    The output voltage is digitized by the PIC16F887, which has 10 inbuilt analog-to-digital converter channels. Figure 6 shows the variation in resistance due to bending of the flex sensor, measured with a multimeter.
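The quantization step can be illustrated as follows; this is our sketch, assuming a 10-bit conversion (the PIC16F887's ADC has 10-bit resolution) with a 5 V reference, neither of which is stated explicitly above.

```c
/* 10-bit ADC count for a given analog input (illustrative sketch).
   Assumes Vref = 5 V and 10-bit resolution, so full scale is 1023. */
unsigned int adc_counts(double volts) {
    if (volts <= 0.0) return 0u;
    if (volts >= 5.0) return 1023u;
    return (unsigned int)(volts / 5.0 * 1023.0 + 0.5); /* round to nearest */
}
```

Under these assumptions the 2.5 V flat-sensor reading maps to a count of 512 and the 1.25 V fully bent reading to 256, which the firmware can then threshold.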

    Figure 6: Snapshot of change in resistance with bending of the flex sensor

    Table 1 shows, for different bending angles of the flex sensor (in degrees), the change in its resistance and the corresponding output voltage.

    Degree    Resistance (kΩ)    Voltage (analog, V)
    0         10.536             2.50
    10        12.832             2.27
    20        14.124             2.08
    30        15.572             2.01
    45        18.000             1.78
    60        22.686             1.56
    75        26.296             1.38
    90        30.460             1.25

    Table 1: Degree, resistance and voltage with bending of the flex sensor
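The calibration pairs of Table 1 also allow the firmware to estimate an intermediate bend angle from a measured voltage. The paper does not describe this step; the linear interpolation below is our illustration using the table's values.

```c
/* (degree, voltage) calibration pairs taken from Table 1;
   voltage falls monotonically as the bend angle increases. */
static const double cal_deg[]  = { 0, 10, 20, 30, 45, 60, 75, 90 };
static const double cal_volt[] = { 2.50, 2.27, 2.08, 2.01, 1.78, 1.56, 1.38, 1.25 };

/* Estimate the bend angle for a measured voltage by linear
   interpolation between the two surrounding calibration points. */
double angle_from_voltage(double v) {
    const int n = 8;
    if (v >= cal_volt[0])     return cal_deg[0];      /* flatter than 0 deg */
    if (v <= cal_volt[n - 1]) return cal_deg[n - 1];  /* beyond 90 deg bend */
    for (int i = 0; i < n - 1; i++) {
        if (v <= cal_volt[i] && v >= cal_volt[i + 1]) {
            double t = (cal_volt[i] - v) / (cal_volt[i] - cal_volt[i + 1]);
            return cal_deg[i] + t * (cal_deg[i + 1] - cal_deg[i]);
        }
    }
    return cal_deg[n - 1]; /* unreachable for monotonic data */
}
```

For example, a reading of 1.67 V, halfway between the 45° and 60° entries, interpolates to 52.5°.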

  3. RESULT AND CONCLUSION

    Table 2 shows the different positions of the 4 flex sensors and the corresponding outputs on the LCD in text, and in voice through the APR9600.

    Flex 1 (degree)    Flex 2 (degree)    Flex 3 (degree)    Flex 4 (degree)    Text / Voice
    0                  Above 90           Above 90           0                  Why
    Above 90           Above 90           Above 90           Above 90           Yes
    0                  0                  Above 90           0                  Hello
    0                  0                  Above 90           Above 90           Who
    0                  Above 90           0                  0                  No
    Above 90           0                  0                  Above 90           Bye

    Table 2: Position of the sensors and corresponding output on the LCD and APR9600
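The mapping of Table 2 amounts to a small lookup table: each sensor is classed as straight (0°) or bent above 90°, and the resulting 4-tuple selects the word. The sketch below is our illustration of that lookup; the 0/1 encoding, the `recognize` helper, and the fallback string are assumptions, not code from the paper.

```c
#include <stddef.h>
#include <string.h>

/* One row of Table 2: bent[i] is 1 when flex sensor i is bent above
   90 degrees and 0 when it is held straight (0 degrees). */
typedef struct {
    unsigned char bent[4];
    const char *word; /* shown on the LCD and spoken via the APR9600 */
} Gesture;

static const Gesture TABLE2[] = {
    { {0, 1, 1, 0}, "Why"   },
    { {1, 1, 1, 1}, "Yes"   },
    { {0, 0, 1, 0}, "Hello" },
    { {0, 0, 1, 1}, "Who"   },
    { {0, 1, 0, 0}, "No"    },
    { {1, 0, 0, 1}, "Bye"   },
};

/* Return the word for a 4-sensor pattern, or a fallback when the
   pattern matches no row (cf. "no symbol" on the LCD in Figure 4). */
const char *recognize(const unsigned char bent[4]) {
    for (size_t i = 0; i < sizeof TABLE2 / sizeof TABLE2[0]; i++)
        if (memcmp(TABLE2[i].bent, bent, 4) == 0)
            return TABLE2[i].word;
    return "no symbol";
}
```

A table-driven recognizer like this keeps the gesture set easy to extend: adding a sign is one more row, with no change to the matching logic.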

    To check the reliability and accuracy of the system, testing has been done by providing 25 samples in random order; a few are shown below.

    Figure 7: Samples taken in random order (Why, Yes, Hello, Who, No, Bye)

    The proposed system aims to narrow the communication gap between the hearing-impaired or mute community and the rest of society. This project was meant as a prototype to check the feasibility of recognizing sign language using sensor gloves. With it, hearing-impaired and mute people can use the gloves to perform sign language, which is converted into speech that others can easily understand. The main feature of this project is that the gesture recognizer is a standalone system, applicable in daily life. Data gloves can only capture the bending of the fingers, not the shape or motion of other parts of the body such as the arms, elbows or face. So only postures are captured and moving gestures are ignored. The problem of recognizing moving gestures can be resolved using a 3-axis accelerometer sensor at the wrist to fully capture wrist movement, while 2-axis accelerometers can be used at the elbow and shoulder.

  4. FUTURE WORK

    The completion of this prototype suggests that sensor gloves can be used for partial sign language recognition. More sensors can be employed to recognize full sign language. A handy and portable hardware device with a built-in translating system and a group of body sensors, along with the pair of data gloves, could be manufactured so that a hearing-impaired or mute person can communicate with anyone, anywhere.

  5. REFERENCES

  1. M. Geetha and U. C. Manjusha, "A Vision Based Recognition of Indian Sign Language Alphabets and Numerals Using B-Spline Approximation", IJCSE, Vol. 4, No. 03, March 2012.

  2. Rung-Huei Liang and Ming Ouhyoung, "A Real-time Continuous Gesture Recognition System for Sign Language", IEEE International Conference on Automatic Face and Gesture Recognition, pp. 558-567, Japan, 1998.

  3. Rajshree Rokade and Dharampal Doye, "Hand Gesture Recognition by Thinning Method", IEEE Computer Society, 2009.

  4. J. W. Davis, J. William, and M. Shah, "Gesture Recognition", Technical Report CS-TR-93-11, Department of Computer Science, University of Central Florida, Orlando, USA, 1993.

  5. Philippe Dreuw, David Rybach, Thomas Deselaers, Morteza Zahedi, and Hermann Ney, "Speech Recognition Techniques for Sign Language Recognition Systems", in Interspeech, pp. 2513-2516, Antwerp, Belgium, August 2007.

  6. Anuja Golliwar, Harshada Patil, Rohita Watpade, Sneha Moon, Sonal Patil, and V. D. Bondre, "Sign Language Translator Using Hand Gloves", ISSN (Online): 2347-2820, Volume 2, Issue 1, January 2014.

  7. Ajinkya Raut, Vineeta Singh, Vikrant Rajput and Ruchika Mahale, "Hand Sign Interpreter", IJES, Volume 1, Issue 2, pp. 19-25, 2012.

  8. J. Kramer and L. Leifer, "The Talking Glove: A Speaking Aid for Nonvocal Deaf and Deaf-Blind Individuals", Proc. of the RESNA 12th Annual Conf., pp. 471-472, 1993.

  9. Shoib Ahmed, "Magic Gloves: Hand Gesture Recognition and Voice Conversion System for Differentially Able Dumb People", http://www.theglobalsummit.org/wpcontent/uploads/2012/08/Shoaib-Ahmed.PDF

  10. S. Sidney Fels and Geoffrey E. Hinton, "Glove-TalkII: A Neural Network Interface Which Maps Gestures to Parallel Formant Speech Synthesizer Controls", IEEE Transactions on Neural Networks, 9(1), pp. 205-212, 1998.
