Sign Language Translator: A hand gesture recognition device using Arduino

DOI : 10.17577/ICCIDT2K23-231


Nidhin Manoj¹
Computer Science and Engineering, Mangalam College of Engineering, Ettumanoor
nidhinmanoj1406@gmail.com

Vyshak Surendran²
Computer Science and Engineering, Mangalam College of Engineering, Ettumanoor
vyshaksurendran333@gmail.com

Vishnu Rathan³
Computer Science and Engineering, Mangalam College of Engineering, Ettumanoor
vishnu.rathan2000@gmail.com

Vishnu T
Computer Science and Engineering, Mangalam College of Engineering, Ettumanoor
vishnu.t.819@gmail.com

Ms. Sruthy Emmanuel
Assistant Professor, Department of Computer Science, Mangalam College of Engineering, Ettumanoor

Abstract - The sign language translator explores an innovative solution for conveying the thoughts of individuals who are unable to speak, specifically by translating hand gestures into voice. The approach is primarily intended to assist people who are deaf or have difficulty speaking, and is accomplished through gloves fitted with flex sensors that capture data on finger and hand movements. This data is then processed by a microcontroller to generate voice feedback. Existing sign language translation systems involve devices with high energy consumption and limited mobility. Although these systems recognize signs with high accuracy, they remain inaccessible to most people, and the receiving end should be independent of special hardware so that any user can receive the voice feedback.

Keywords-Arduino, Hand Gesture, Voice Feedback, C++

  1. INTRODUCTION

    For speech-impaired and hearing-impaired people, sign language is the principal means of communication. However, this becomes problematic for people who are not familiar with its gestures, creating a communication barrier between impaired and non-impaired individuals. Sign language does not use acoustic sounds but visually transmitted sign patterns.

    By simultaneously combining hand shapes, orientation, movement, and facial expressions, sign language can express the signer's thoughts and carries as much information as spoken language; it has been developed and researched by various scholars globally. Despite this, there has not been a solid, concrete process for translating text or speech into ASL gestures, leaving the translation problem one-sided. Many other existing gesture recognition solutions rely on cameras, microphones, radio frequency (RF), or special body sensors such as surface electromyography (sEMG), Electrical Impedance Tomography (EIT), and electrocardiogram (ECG) sensors. However, these methods have various limitations. For example, camera-based approaches may face occlusion and privacy issues, while microphones are susceptible to ambient acoustic noise. RF-based methods are device-free, but they can be sensitive to indoor multipath effects or RF interference. Special body sensors for gesture recognition are more robust to environmental noise, but they require additional cost and manpower for installation. Sign language gestures are particularly challenging to detect as they often involve finger-level movements without significant wrist or arm motion.

    Therefore, we propose a low-cost sign language gesture recognition system that uses flex sensors to differentiate fine finger movements.

  2. CHALLENGES

    The existing gesture recognition systems have certain limitations that require attention. RF and acoustic-based systems are energy-intensive and expensive. Additionally, these methods indirectly capture hand movements without contact, leading to a certain level of inaccuracy.

    Other problems faced by these systems include processing delays, skin tone impact, sensor location sensitivity, and the impact of intense body movements. Furthermore, it is challenging to achieve high accuracy in sign language recognition using coarse-grained sensing modalities such as PPG and motion sensors. Commodity wearable devices have only a few PPG sensors, placed close together, which limits coverage and the diversity of sensor readings and degrades gesture recognition performance.

    Vision-based detectors require line-of-sight monitoring, while RF-based approaches require dedicated, expensive devices that are susceptible to environmental factors. To overcome these issues, we propose a sign language recognition system that uses wearable gloves to interpret hand gestures. The gloves are embedded with flex sensors that recognize finger movements and convert them into electrical signals. An accelerometer and gyroscope are used to obtain signals that predict the position of the arms while making hand gestures. These signals are sent to a microcontroller that interprets the movements and predicts the gesture being displayed. The microcontroller used in this system is the ATmega328P. The application then converts the data into speech output.
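    To make the sensing step concrete, a minimal Arduino sketch is shown below. The pin assignment, divider resistor, and calibration resistances are assumptions for illustration rather than values measured on this prototype; the sketch converts one flex sensor's voltage-divider reading into an approximate bend angle of the kind the microcontroller compares against gesture thresholds.

```cpp
// Minimal flex-sensor reading sketch for an ATmega328P board.
// Assumes the flex sensor sits between VCC and the analog pin, with a fixed
// resistor R_DIV from the pin to ground. All constants are assumed/calibrated.
const int FLEX_PIN = A0;            // flex sensor voltage-divider output
const float VCC = 5.0;              // supply voltage (volts)
const float R_DIV = 47000.0;        // fixed divider resistor (ohms), assumed
const float R_STRAIGHT = 25000.0;   // sensor resistance when flat (calibration)
const float R_BENT = 100000.0;      // sensor resistance at ~90 degrees (calibration)

void setup() {
  Serial.begin(9600);
}

void loop() {
  int adc = analogRead(FLEX_PIN);             // raw reading, 0..1023
  if (adc == 0) return;                       // avoid division by zero below
  float vFlex = adc * VCC / 1023.0;           // divider output voltage
  float rFlex = R_DIV * (VCC / vFlex - 1.0);  // sensor resistance
  // Linear map from resistance to an approximate bend angle in degrees.
  float angle = (rFlex - R_STRAIGHT) * 90.0 / (R_BENT - R_STRAIGHT);
  Serial.println(angle);
  delay(100);
}
```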

  3. RELATED WORKS

    1. Communication Tool for the Hard of Hearings, Xiujuan Chai, Hanjie Wang, Fang Yin, Xilin Chen, 2015

      Key Concepts:

      The deaf community is a large and diverse group, and communication can be a challenge for those with hearing loss. Automatic Sign Language Recognition (SLR) has the potential to facilitate communication between the deaf and hearing populations. The authors propose a visual communication tool for the hard of hearing: a sign language recognition system that recognizes a large vocabulary of sign language gestures from RGB-D input. The approach uses a novel Grassmann Covariance Matrix (GCM) representation to capture the long-term dynamics of a sign sequence, and a discriminative kernel SVM is employed for sign classification. For continuous sign language recognition, a probability inference method determines sign spotting from the labels of sequential frames. The recognition algorithms are evaluated on both isolated sign words and continuous sign language sentences using the authors' collected datasets. The system shows promising results in recognizing sign language gestures, which can potentially bridge the communication gap between the deaf and hearing communities.

    2. A Hand Gesture Recognition Framework and Wearable Gesture-Based Interaction, Zhiyuan Lu, Xiang Chen, Qiang Li, Xu Zhang, and Ping Zhou, 2014

      Key Concepts:

      The proposed system aims to recognize gestures using acceleration and surface electromyographic (sEMG) signals. It employs a novel segmentation scheme, a score-based sensor fusion scheme, and two new features. The framework includes a Bayes linear classifier and an improved dynamic time-warping algorithm. A wearable gesture sensing device (embedded with a three-axis accelerometer and four sEMG sensors) is developed along with an application program for a mobile phone, which utilizes the proposed algorithmic framework to enable gesture-based real-time interaction. The device is worn on the forearm, allowing the user to manipulate a mobile phone using 19 predefined or personalized gestures. During testing, the developed prototype was found to respond to each gesture instruction within 300 ms on the mobile phone. The average accuracy was 95.0% in user-dependent testing and 89.6% in user-independent testing. Positive feedback from the user experience questionnaire demonstrates the usefulness of the proposed framework for gesture recognition.
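      For context, the snippet below illustrates the standard dynamic time warping (DTW) distance between two one-dimensional feature sequences. It is a generic textbook sketch only, not the improved DTW variant or the score-based sensor fusion described in the cited work.

```cpp
// Generic DTW distance between two 1-D sequences (C++11).
#include <vector>
#include <cmath>
#include <algorithm>
#include <limits>

double dtwDistance(const std::vector<double>& a, const std::vector<double>& b) {
  const size_t n = a.size(), m = b.size();
  const double INF = std::numeric_limits<double>::infinity();
  // cost[i][j] = minimal cumulative cost of aligning a[0..i) with b[0..j)
  std::vector<std::vector<double>> cost(n + 1, std::vector<double>(m + 1, INF));
  cost[0][0] = 0.0;
  for (size_t i = 1; i <= n; ++i) {
    for (size_t j = 1; j <= m; ++j) {
      double d = std::fabs(a[i - 1] - b[j - 1]);        // local distance
      cost[i][j] = d + std::min({cost[i - 1][j],        // step from above
                                 cost[i][j - 1],        // step from the left
                                 cost[i - 1][j - 1]});  // diagonal step
    }
  }
  return cost[n][m];  // smaller distance means more similar sequences
}
```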

    3. Wearable Sensor-Based Sign Language Recognition: A Comprehensive Review, Karly Kudrinko, Emile Flavin, Xiaodan Zhu, and Qingguo Li, 2020

      Key Concepts:

      Sign language serves as the primary mode of communication for many people who are Deaf, deafened, hard of hearing, or non-verbal. However, communication barriers often arise in their interactions with individuals who do not understand or use sign language. Recent advancements in technology and machine learning have resulted in innovative approaches to gesture recognition. This literature review examines wearable sensor-based systems used for classifying sign language gestures. A review of 72 studies conducted between 1991 and 2019 was carried out to identify trends, best practices, and common challenges. The review analyzed attributes such as sign language variation, sensor configuration, classification methods, study design, and performance metrics. The results of this literature review may assist in the development of robust, user-centered wearable sensor-based systems for sign language recognition.

    4. Regularization paths for generalized linear models via coordinate descent, J. Friedman, T. Hastie, and R. Tibshirani, 2010

      Key Concepts:

      Fast algorithms for estimating generalized linear models with convex penalties. The models include linear regression, binomial logistic regression, and multinomial regression problems, with penalties including ℓ1 (the lasso), ℓ2 (ridge regression), and a mixture of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and deal efficiently with sparse features. In benchmark timings, the new algorithm is faster than competing methods.

    5. Whole-Home Gesture Recognition Using Wireless Signals, Q. Pu, S. Gupta, S. Gollakota, and S. Patel, 2013

    Key Concepts:

    WiSee is a novel gesture recognition system that leverages wireless signals (e.g., Wi-Fi) to enable whole-home sensing and recognition of human gestures. Because wireless signals do not require line of sight and can travel through walls, WiSee can recognize gestures throughout a home using existing wireless infrastructure, without instrumenting the human body with sensors. The authors built a proof-of-concept WiSee prototype using USRP-N210 hardware and evaluated it in both an office environment and a two-bedroom apartment. The results show that WiSee can detect and classify a set of nine gestures with an average accuracy of 94 percent.

  4. METHODOLOGY

    The main objective of the proposed system is to facilitate communication for people who are deaf and mute by translating sign language into speech. This project aims to bridge the communication gap by designing a portable glove that can capture the user's sign language gestures and output the translated text as speech on an Android application. The glove is equipped with flex sensors, contact sensors, and an accelerometer to measure the finger flexion, finger contact, and hand rotation. An accelerometer is used to measure the tilt of the palm. Five bend sensors are placed on the glove, four for the fingers and one for the thumb. These sensors measure the bend in the fingers, thumb, and palm, and based on the bend angle value, the Arduino Nano microcontroller understands which set of values represents which symbol. The microcontroller then transfers the appropriate outcome value to the Android app via Bluetooth, which displays and speaks the generated symbol.
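    A minimal sketch of this matching step is given below. The pin wiring, threshold values, and the use of an HC-05-style Bluetooth module on SoftwareSerial are assumptions made for illustration and are not the exact firmware of the prototype.

```cpp
// Illustrative sketch for an Arduino Nano: map five flex-sensor readings to a
// symbol by threshold ranges and send the label to the paired Android app.
#include <SoftwareSerial.h>

SoftwareSerial bt(10, 11);  // RX, TX pins for the Bluetooth module (assumed wiring)

const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};  // thumb plus four fingers

struct Gesture {
  const char* label;    // text the app will display and speak
  int low[5], high[5];  // accepted ADC range per sensor, from calibration
};

// Two illustrative entries; a full table would cover every supported sign.
Gesture gestures[] = {
  {"A", {600, 600, 600, 600, 200}, {900, 900, 900, 900, 450}},
  {"B", {150, 150, 150, 150, 600}, {400, 400, 400, 400, 900}},
};

void setup() {
  bt.begin(9600);  // Bluetooth link to the Android application
}

void loop() {
  int reading[5];
  for (int i = 0; i < 5; i++) reading[i] = analogRead(FLEX_PINS[i]);

  for (Gesture& g : gestures) {
    bool match = true;
    for (int i = 0; i < 5; i++) {
      if (reading[i] < g.low[i] || reading[i] > g.high[i]) { match = false; break; }
    }
    if (match) {           // recognized: send the symbol over Bluetooth
      bt.println(g.label);
      break;
    }
  }
  delay(200);
}
```

    On the phone side, the app reads each line from the Bluetooth serial stream, displays it, and passes it to text-to-speech, as described above.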

    Fig 1. Architecture diagram (transmitter and receiver)

  5. PROBLEM ANALYSIS

    The current systems for sign language interpretation face several challenges, including high power consumption, difficulty in implementation, and inaccuracy while interpreting hand gestures. While these issues cannot be completely resolved, efforts can be made to minimize them. Other challenges faced by existing systems include processing delays, skin tone impact, sensor location sensitivity, and the impact of intense body movements.

    Achieving high accuracy in sign language gesture recognition using readily available but coarse-grained sensing modalities such as PPG and motion sensors is also a challenge. Commodity wearable devices often have a limited number of PPG sensors placed close to each other, which limits the coverage on the wrist and diversity of sensor readings, impacting gesture recognition performance.

    Vision-based detectors require a continuous line of sight to monitor motion, while RF-based approaches require dedicated and expensive devices that can be easily affected by environmental factors.

  6. RESULT

    Using simplified modules for the sensors, the device was able to meet the expectations of a sign language translator.

    By maximising the range of the flex sensors, new sensor states can be added, allowing quick modification of the inputs accepted by the glove.

    Lastly, cost was kept low by using locally available materials such as knitted gloves and makeshift copper rings for the contact sensors. Makeshift flex sensors, like the ones used in the previous research mentioned above, can also be used to perform the same bending-angle data collection in the translation system.

    The gloves are capable of capturing hand gestures with high accuracy. The combination of gyroscope and accelerometer inputs also reduces errors that may be caused by unintended movements.
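    One common way to fuse the two signals is a complementary filter. The sketch below is a generic illustration assuming MPU6050-style readings already converted to an accelerometer tilt angle and a gyroscope rate; it is not necessarily the exact fusion used in the prototype.

```cpp
// Complementary filter: blend gyroscope integration (smooth, but drifts) with
// the accelerometer tilt (noisy, but drift-free) so that brief, unintended
// movements do not corrupt the estimated palm orientation.
float fusedAngle = 0.0;        // estimated tilt about one axis, in degrees
unsigned long lastMicros = 0;  // set to micros() once in setup()

// accelAngle: tilt computed from accelerometer readings (degrees)
// gyroRate:   angular rate from the gyroscope (degrees per second)
float updateTilt(float accelAngle, float gyroRate) {
  unsigned long now = micros();
  float dt = (now - lastMicros) / 1.0e6;  // elapsed time in seconds
  lastMicros = now;
  // Trust the gyro over short intervals and the accelerometer in the long run.
  fusedAngle = 0.98 * (fusedAngle + gyroRate * dt) + 0.02 * accelAngle;
  return fusedAngle;
}
```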

    Fig 6. Prototype

  7. FUTURE SCOPE AND CONCLUSION

    The Processing software was used to implement the translation process in place of a mobile application. Although it is recommended to develop an actual mobile app, the Processing GUI provided most of the necessary functions for the two-way translation process. Another recommendation is to use 3D image-processing software to visualize real-time translation using blocks of hand shapes. A further proposal is to use Bluetooth shields for wireless communication; currently, communication between the Arduino and Processing works through serial reading and writing. As the focus of the project was the registration process of the letters, it is recommended that future researchers work on the stability of all active sensors in the glove system, and the letters should be made more adaptive to the sensors' thresholds.

  8. ACKNOWLEDGEMENT

The authors extend their gratitude to the Principal, Dr. Vinodh P Vijayan; Ms. Neethu Maria John, H.O.D., Department of Computer Science; and Ms. Sruthy Emmanuel, Assistant Professor, CSE Department, for their guidance, valuable support, and helpful comments during the proofreading.

REFERENCES

[1]. A. Er-Rady, R. Faizi, R. O. H. Thami, and H. Housni, "Automatic sign language recognition: A survey," in 2017 International Conference on Advanced Technologies for Signal and Image Processing (ATSIP). IEEE, 2017, pp. 1-7.

[2]. Q. Pu, S. Gupta, S. Gollakota, and S. Patel, "Whole-home gesture recognition using wireless signals," in Proceedings of the 19th Annual International Conference on Mobile Computing & Networking. ACM, 2013, pp. 27-38.

[3]. Z. Lu, X. Chen, Q. Li, X. Zhang, and P. Zhou, "A hand gesture recognition framework and wearable gesture-based interaction prototype for mobile devices," IEEE Transactions on Human-Machine Systems, vol. 44, no. 2, pp. 293-299, 2014.

[4]. X. Zhang, X. Chen, Y. Li, V. Lantz, K. Wang, and J. Yang, "A framework for hand gesture recognition based on accelerometer and EMG sensors," IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 41, no. 6, pp. 1064-1076, 2011.

[5]. C. Becker, R. Rigamonti, V. Lepetit, and P. Fua, "Supervised feature learning for curvilinear structure segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2013, pp. 526-533.

[6]. Z. Wang, W. Yan, and T. Oates, "Time series classification from scratch with deep neural networks: A strong baseline," in 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017, pp. 1578-1585.

[7]. X. Liu, T. Chen, F. Qian, Z. Guo, F. X. Lin, X. Wang, and K. Chen, "Characterizing smartwatch usage in the wild," in Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services. ACM, 2017, pp. 385-398.

[8]. T. Zhao, J. Liu, Y. Wang, H. Liu, and Y. Chen, "PPG-based finger-level gesture recognition leveraging wearables," in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications. IEEE, 2018, pp. 1457-1465.

[9]. Z. Ren, J. Yuan, J. Meng, and Z. Zhang, "Robust part-based hand gesture recognition using Kinect sensor," IEEE Transactions on Multimedia, vol. 15, no. 5, pp. 1110-1120, 2013.

[10]. J. Friedman, T. Hastie, and R. Tibshirani, "Regularization paths for generalized linear models via coordinate descent," Journal of Statistical Software, vol. 33, no. 1, p. 1, 2010.