PC Automation Using Hand Gestures

DOI : 10.17577/IJERTCONV12IS03075

Mr. R. Balusamy, M.Tech., Assistant Professor, Department of Computer Science and Engineering,
Shree Venkateshwara Hi Tech Engineering College, Gobichettipalayam.
Email: sbcbalu07@gmail.com

Mr. M. Jiginesh, Student of Computer Science and Engineering,
Shree Venkateshwara Hi Tech Engineering College, Gobichettipalayam.
Email: jigineshm2020@gmail.com

Mr. P. Pradeep, Student of Computer Science and Engineering,
Shree Venkateshwara Hi Tech Engineering College, Gobichettipalayam.
Email: pradeepcse2003@gmail.com

Mr. M. Vijayakumar, Student of Computer Science and Engineering,
Shree Venkateshwara Hi Tech Engineering College, Gobichettipalayam.
Email: vijayakumar2003ms@gmail.com

Abstract: The human-computer interaction platform can be implemented in many different ways because webcams and other equipment, such as sensors, are relatively inexpensive and readily available on the market. Gestures are widely regarded as one of the most effective modes of communication between humans and machines, and hand gestures in particular help convey information to a machine or computer at a higher level. Hand gestures, as a form of non-verbal communication, can be put to use in a variety of contexts. According to research and survey reports, applications that make use of hand gestures have adopted a variety of methodologies, including those backed by sensor technology and those backed by computer vision. Research is expanding into new territory, particularly in the areas of pattern recognition and gesture recognition, and hand gestures can add a new dimension to the contactless operation of computers. In this project, we therefore develop software that demonstrates a prototype system able to automatically recognize gestures and execute PC commands in response to particular motions. The solution is built in Python, taking advantage of modules such as cv2 (OpenCV, or Open Source Computer Vision, a library of programming functions aimed primarily at real-time computer vision) and Mediapipe (which detects hand and finger data points).

Keywords- Mediapipe, OpenCV, Human-Computer Interaction (HCI), Fingers, Gestures

  1. Introduction

    In the modern world of automation, the application of hand gesture technology is not confined to gaming alone; it can be found in a wide variety of contexts, including applications in medicine, manufacturing, information technology centers, banking, and so on. This project is built on a similar idea of hand gesture control of a laptop or computer. A Human Machine Interface (HMI) is a system, composed of both hardware and software, that helps to establish a conversation and information exchange between the user and the machine. As standard practice, HMI devices incorporate a wide variety of indicators, including LEDs, switches, touch screens, and LCDs. Hand gestures are an additional, innovative method that can be used to communicate with automated systems such as robots or computers. To operate the functions of a computer or laptop, we no longer need to use devices such as keyboards, mice, or joysticks; we may simply use hand movements or hand gestures. Arduino-based hand gesture control has been incorporated in this project so that a variety of activities, including gaming, navigation, document browsing, and music and video playback, may be controlled by the user. Two ultrasonic sensors were employed and coupled to an Arduino.

    The ultrasonic sensors are affixed to the top of the computer on either side, and this positioning is what allows the computer to determine the distance between the user's hand and each sensor. The operations described above are carried out using this distance data between the hand and the sensors. On the laptop or computer, the Python PyAutoGUI library is used to perform the corresponding actions. The commands come from the Arduino and are transmitted to the computer through a serial port. A Python script running on the computer reads this incoming data, and the appropriate actions and operations are carried out based on the values that are read.
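    As a rough illustration of the PC side of this pipeline, the sketch below reads text commands from the Arduino over the serial port with pyserial and maps them to PyAutoGUI actions. The port name, baud rate, and command strings are illustrative assumptions and must be matched to whatever the actual Arduino sketch sends.

import serial      # pyserial
import pyautogui

# Hypothetical serial port and baud rate; match them to the Arduino sketch.
arduino = serial.Serial('COM3', 9600, timeout=1)

while True:
    command = arduino.readline().decode('utf-8', errors='ignore').strip()
    if not command:
        continue
    # Assumed command strings, derived by the Arduino from the two
    # hand-to-sensor distance readings.
    if command == 'play_pause':
        pyautogui.press('space')            # toggle media playback
    elif command == 'next_track':
        pyautogui.hotkey('ctrl', 'right')   # skip forward
    elif command == 'volume_up':
        pyautogui.press('volumeup')
    elif command == 'volume_down':
        pyautogui.press('volumedown')
    elif command == 'scroll_down':
        pyautogui.scroll(-200)              # scroll the active window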

    Technology has finally progressed to the point where our hands can take over the role of the mouse and interact directly with the computer or television. For instance, to delete a folder or file from the computer, one could place a palm on it and flick it, like a piece of paper, into the trash can. Even while baking a cake in a microwave oven, we could wave our hands in the air like a magician and the oven would obey the command. The capacity of hand gesture recognition systems to interface with machines effectively has contributed significantly to their rapid development over the past several years. The search for a substitute for multi-touch technology that does not require any touching movement on the screen is part of mankind's ongoing effort to merge human gestures into modern technology. Ever since the beginning of the computer revolution, attempts have consistently been made to improve the human-computer interface. Because using a computer is now so ingrained in our daily routines, its operation ought to be as intuitive and uncomplicated as having a conversation with a friend. In the past, human beings could only communicate with this intelligent machine by using either a keyboard or a mouse. However, efforts are currently being made to make the contact between humans and machines feel as natural as possible. This criterion is satisfied by the widely used touch-screen technology, which is anticipated to be superseded in the not too distant future by gesture recognition technology.

  2. Related works

    A literature study of recent research into the use of hand gestures for vehicle secondary controls has been conducted, and a brief summary of its findings is given in this section. The summary covers the various approaches, technologies, and methods that have been utilized by different researchers. Previous studies have not placed a primary emphasis on either understanding driver behavior or the limitations of hand signals. The proposed classification of the research was arrived at through the literature survey and subsequent analysis of the collected data [1]. Researchers have utilized a wide variety of methodologies, including those based on vision, data gloves, artificial neural networks, fuzzy logic, genetic algorithms, hidden Markov models, support vector machines, and so on.

    A selection of the work done in the past follows. Several researchers turned to vision-based algorithms to identify hand gestures: after finding the skin-colored region in the input image, the intensity of the desired hand region was normalized and a histogram was created for it. The Hit-Miss Transform was used for the feature extraction step, and a Hidden Markov Model was utilized for the recognition of the gesture [2-4]. The primary purpose of one design is to ensure that the robot and platform begin moving in response to any movement made by the operator, be it a gesture, a posture, or anything else. The platform section of the robotic arm is synchronized with the gestures (hand postures) of the operator, and the robotic arm itself is synchronized with the operator's leg postures [5-6]. Another work discusses a gesture interface that may be used to operate a mobile robot fitted with a manipulator. The user is monitored via a camera, which enables the interface to interpret motions that involve the movement of the arms. The robot is able to consistently detect and follow a person through locations with shifting lighting conditions thanks to a tracking system that is both fast and adaptable.

    Both a template-based approach and a neural network approach to gesture identification are analyzed and contrasted for the purpose of gesture recognition. The recognition of gestures characterized by arm motion (in addition to static arm poses) is accomplished by combining the two using the Viterbi algorithm. Results are discussed within the framework of an interactive clean-up task, in which a person directs a robot to various areas that need to be cleaned and instructs the robot to pick up rubbish [7-9]. Mobile robots are not confined to a single physical location and are able to roam freely within the area in which they operate. The movement may be accomplished with legs, wheels, or any number of other mechanisms, which are advantageous over other types of locomotion systems because they require less energy and can cover ground at a faster rate. Because hand gestures are a natural and powerful form of communication that can be used for the remote control of robots, hand gesture recognition systems play an essential part in human-robot interaction. The glove-based technique and the vision-based method are the two methods typically used to interpret gestures for human-robot interaction. The glove-based approach requires the user to wear heavy contact devices and generally to carry a load of cables that connect the device to a computer [10-12].

  3. EXISTING SYSTEMS

    The idea that drives this approach is not a complicated one. Two ultrasonic (US) sensors are placed on top of the monitor, and an Arduino is used to read the distance between the sensors and the user's hand. Based on this distance value, a number of different operations are carried out. Python's pyautogui package is used to operate the machine through various commands. The commands coming from the Arduino are transmitted to the computer via the serial port (USB). Python, running on the computer, reads this data, and an action is then carried out based on the data that was read.

    1. GESTURES

      The process of issuing commands to a robot using only computer vision, without sound or any other medium, is analogous to conducting a marching band solely through visible gestures. For our system to function in real time, the gestures that users communicate with must be straightforward, readily identifiable, and easily distinguishable from one another. The primary purpose of our system is to identify dynamic hand gestures based on continuous hand motion in real time, with the secondary goal of applying these gestures to human-robot interaction. The motion of one's hand can convey a wide variety of gestures. In this system, we define four directional gestures for a single hand: moving upward, moving downward, moving leftward, and moving rightward, with moving leftward and moving rightward together serving as the basic conducting gesture. Therefore, if we include either one or both hands in gesture invocation, we have a total of no more than twenty-four distinct meaningful gestures based on the permutations and combinations of both hands. All of the possible combinations of gestures from both hands are represented in a two-dimensional table, and each combination is categorized into a class by an ID that is specific to that gesture. Every gesture is simple to depict, and it is straightforward to incorporate new hand gestures [2]. Most comprehensive hand interaction systems can be thought of as having three layers: detection, tracking, and recognition. The detection layer is in charge of defining and extracting the visual characteristics that can be linked to the presence of hands within the camera's field of view. The tracking layer is responsible for performing temporal data association between successive image frames, so that the system is always aware of "what is where." In model-based methods, tracking also offers a way to maintain estimates of model parameters, variables, and features that are not directly observable at a particular point in time, because it allows data to be accumulated over time. Finally, the recognition layer is in charge of grouping the spatiotemporal data extracted in the earlier layers and assigning the labels associated with specific classes of gestures to the groups produced by this grouping process. In this section, an overview has been given of the research that has been done on these three subproblems of vision-based gesture recognition.
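      To make the recognition step concrete, the following sketch (an illustration under assumptions, not the authors' implementation) classifies a tracked hand movement into one of the four directional gestures described above; the function name and the movement threshold are hypothetical.

def classify_direction(start, end, min_move=0.05):
    """Classify a hand movement into one of the four directional gestures.

    `start` and `end` are (x, y) hand positions normalised to the image
    size, for example taken from the first and last frames of a tracked
    motion. The 0.05 threshold is an illustrative assumption.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if max(abs(dx), abs(dy)) < min_move:
        return None                       # the hand did not move far enough
    if abs(dx) > abs(dy):
        return 'right' if dx > 0 else 'left'
    # Image coordinates grow downwards, so a negative dy is an upward move.
    return 'down' if dy > 0 else 'up'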

      Fig. 1: Flowchart of existing system

    2. Hand Gesture Interfaces

      There are many different kinds of gesture interfaces, ranging from those that identify only a few symbolic movements to those capable of full-fledged interpretation of sign language. Gesture interfaces may also recognize static hand poses, dynamic hand motion, or a combination of the two. In every case, each motion ought to be accompanied by a clear and distinct semantic meaning that can be implemented in the user interface [1]. However, the scope of this paper is limited to one particular sense of the term "gesture," namely hand gestures that are regarded as natural or that occur in conjunction with spoken language. Therefore, if the objective is to move away from taught, pre-defined interaction approaches in order to provide natural and safe interfaces that are free of visual demand for ordinary human drivers, then the emphasis should be placed on the kinds of gestures that come easily to ordinary humans. Accordingly, the only topic covered in this paper is the use of natural, dynamic, and non-contact hand gestures.

    Fig. 2: Workflow of existing system

  4. PROPOSED SYSTEM

    Research is expanding into new territory, particularly in the areas of pattern recognition and gesture recognition. In many currently deployed systems the use of external devices is obligatory, whereas in our project we only make use of a webcam. We are therefore working on a prototype that responds almost instantly, without noticeable processing delay.
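    A minimal sketch of such a webcam-based prototype is shown below, assuming the standard OpenCV (cv2) and Mediapipe Python packages. It only captures frames, detects hand landmarks, and draws them; the mapping from landmarks to PC commands is left out.

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)                      # default webcam
with mp_hands.Hands(max_num_hands=2,
                    min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Mediapipe expects RGB images; OpenCV delivers BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                # 21 landmarks per hand; a gesture-to-command mapping
                # (e.g. via PyAutoGUI) would be driven from these points.
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
        cv2.imshow('Hand tracking', frame)
        if cv2.waitKey(1) & 0xFF == 27:        # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()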

    Fig. 3: Workflow of proposed system

  5. CONCLUSION

Most of the commonly used approaches to gesture detection rely on gloves as an external device. The glove is outfitted with a range of sensors that provide information about the hand's position, orientation, and the flex of the fingers. With regard to our prototype, the only variable that can influence the precision of the output is the distance between the two devices. Although gloves provide precise measurements of hand shape, they are awkward to wear and are connected through cables, whereas all we require for the input is a web camera. As a result, this system functions as an interface for the communication that takes place between human beings and computers. All that is needed to capture the input image is a webcam. This would usher in a new age of human-computer interaction (HCI), eliminating the need for users to make direct physical contact with devices.

Within the scope of this research study, we also investigated a hand-gesture based interface for controlling the movement of a robot. Through the use of hand trajectories, the user is able to directly control the robot. In the not too distant future, we will be able to control a robot with nothing more than a cell phone equipped with an accelerometer. We also want to incorporate more hand gestures into the interface, such as the curve and the slash, so that users may control the game in a way that is both more natural and more effective. In the future, hand gestures should be standardised so that a command can be carried out in any part of the world without the need to take cultural and linguistic nuances into account. In other words, the development of commands based on conventional hand gestures for use in universal applications is one of our primary objectives.

References

[1]. Carl A. Pickering, A Research Study of Hand Gesture Recognition Technologies and Applications for Human Vehicle Interaction, Automotive Electronics, 2007 3rd Institution of Engineering and Technology Conference, June 2007, pp. 1-15.

[2]. Joyeeta Singha and Karen Das, Hand Gesture Recognition Based on Karhunen-Loeve Transform, IEEE Mobile & Embedded Technology International Conference, 2013, p. 366.

[3]. Mark Bayazit, Alex Couture-Beil and Greg Mori, Real-time Motion-based Gesture Recognition using the GPU, in Proc. of the IAPR Conf. on Machine Vision Applications, 2009, pp. 9-12.

[4]. X. Zabulis and H. Baltzakis, Vision-based Hand Gesture Recognition for Human-Computer Interaction, in: The Universal Access Handbook, LEA, 2009.

[5]. Gowri Shankar Rao, D. Bhattacharya, Ajit Pandey and Aparna Tiwari, Dual sensor based gesture robot control using minimal hardware system, International Journal of Scientific and Research Publications, Vol. 3, Issue 5, May 2013, ISSN 2250-3153.

[6]. Love Aggarwal, Design and Implementation of a Wireless Gesture Controlled Robotic Arm with Vision, International Journal of Computer Applications, Vol. 79, No. 13, October 2013, pp. 39-43.

[7]. Ahmad 'Athif Mohd Faudzi, Real-time Hand Gestures System for Mobile Robots Control, IRIS 2012, Procedia Engineering, Vol. 41, pp. 798-804.

[8]. Stefan Waldherr, A Gesture Based Interface for Human-Robot Interaction, Autonomous Robots, Vol. 9, Issue 2, September 2000, pp. 151-173.

[9]. Ming-Shaung Chang, Establishing a Natural HRI System for Mobile Robot Through Human Hand Gestures, IFAC Proceedings Volumes, Vol. 42, Issue 16, 2009, pp. 723-728.

[10]. Chang-Yi Kao and Chin-Shyurng Fahn, A Human-Machine Interaction Technique: Hand Gesture Recognition Based on Hidden Markov Models with Trajectory of Hand Motion, CEIS 2011, Procedia Engineering, Vol. 15, 2011, pp. 3739-3743.

[11]. Amit Gupta, Vijay Kumar Sehrawat and Mamta Khosla, FPGA Based Real Time Human Hand Gesture Recognition System, Procedia Technology, Vol. 6, 2012, pp. 98-107.

[12]. Mu-Chun Su, A Hand-Gesture-Based Control Interface for a Car-Robot, The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2010, pp. 18-22.