AI Virtual Keyboard For Typing

DOI : 10.17577/IJERTCONV11IS04002


  • Open Access
  • Authors : Akshay Krishnan, Ann Treesa Raphi, Arjun Anirudh, Meera George, Detty M Panicker
  • Paper ID : IJERTCONV11IS04002
  • Volume & Issue : Volume 11, Issue 04
  • Published (First Online): 01-07-2023
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License: This work is licensed under a Creative Commons Attribution 4.0 International License


AI Virtual Keyboard For Typing

Akshay Krishnan
Department of Computer Science and Engineering
Mar Baselios Christian College of Engineering & Technology
Kuttikkanam, India
5200@mbcpeermade.com

Ann Treesa Raphi
Department of Computer Science and Engineering
Mar Baselios Christian College of Engineering & Technology
Kuttikkanam, India
5201@mbcpeermade.com

Arjun Anirudh
Department of Computer Science and Engineering
Mar Baselios Christian College of Engineering & Technology
Kuttikkanam, India
5336@mbcpeermade.com

Meera George
Department of Computer Science and Engineering
Mar Baselios Christian College of Engineering & Technology
Kuttikkanam, India
5247@mbcpeermade.com

Detty M Panicker
Assistant Professor
Department of Computer Science and Engineering
Mar Baselios Christian College of Engineering & Technology
Kuttikkanam, India
dettympanicker@mbcpeermade.com

Abstract: Today, technology-based interfaces are used to make users comfortable in communication, safety information, and daily work convenience; humans have changed the world and their lifestyle to meet these needs. Hand gesture information offers users another way to interact with humans, machines, or robots, which is why we build a character input system that uses a virtual keyboard driven by hand analysis. Information from previous work is used to define the proposed model, and experimental results show that the gesture recognition performs well. Among the many virtual gesture keyboards available, ours is based on artificial intelligence and is more responsive than existing ones. We also reduce the complexity of the system and make it easier to work with, which makes the user's work more flexible while reducing the cumbersomeness of the system. The addition of artificial intelligence is what makes it so advanced. The project was built in the PyCharm IDE using Python and OpenCV, with libraries such as Mediapipe and CVzone.

Keywords: Virtual keyboard, CVzone, Mediapipe

I. INTRODUCTION

New human-computer interfaces have been designed to offer multiform interactions as the demands of computing environments change. Yet the keyboard and mouse continue to be the primary means of communication between people and computers. Desktops and laptops employ a keyboard to enable human-computer interaction. Although the keyboard is not very portable as a typical and traditional human-computer interface, users nonetheless accept it in most cases. Keyboards will continue to be the preeminent interface for text input until trustworthy natural language interfaces are made accessible.

The Virtual Keyboard is the cutting-edge technology that we are showcasing here. As its name suggests, the virtual keypad has no physical form. A virtual keyboard is a programme that virtualizes a physical keyboard with several layouts, enabling the user to alter the layout according to the application. The technology is being coupled with user interfaces based on head tracking, gaze, speech recognition, and hand gestures. Virtual reality (VR) and augmented reality (AR) technologies have recently been used in a variety of fields, including games, education, health care, video, and sports. Gestures are one of the features most frequently employed, since anyone may easily and rapidly control a machine with them. Future virtual office environments will also require effective and practical text entry systems. In this paper we propose a virtual keyboard that detects gestures, clarifies them, and applies them to the interface of the virtual keyboard layout.

The rest of this article is organized as follows. Section II provides the literature review. Section III describes the proposed methodology. CVzone is presented in Section IV and Mediapipe in Section V. Details about the user interface are given in Section VI. Finally, Section VII concludes this article.

II. LITERATURE REVIEW

In order to solve the issue that a colour-based gesture recognition (GR) method does not perform well when the background colour is comparable to the skin color, the VKB algorithm uses a DL-based GR approach. To get over the problem of finger travel time, it expands a previous one-hand VKB layout to an ambidextrous VKB layout. To be more precise, only the index finger is moved while keeping the click gesture if the current key is close to the prior key [1]. The proposed system describes the design, implementation, and evaluation of a text input system called Air Typing, which requires only a standard camera and enables a user to type. The steps in the proposed system include image capturing, pre-processing, image inversion, connected-component detection, and edge determination [2]. To prepare the image for edge recognition and subsequent processing, unwanted pixels are removed using morphological procedures such as erosion and dilation. The fingertip required for key selection is likewise kept apart from the virtual keyboard using this technique [3].

When typing is done with both the left and right hands, an ambidextrous layout is suggested. Here a new virtual keyboard that recognises finger gestures is presented in an effort to speed up typing by utilizing the freedom of hand mobility [4]. The computer's camera reads a picture of various hand movements made by a person; the computer's mouse or pointer moves in accordance with the gestures, and separate gestures can even accomplish right and left clicks [5]. Another method is a new type of virtual keyboard that allows users to type at any level on any device: the virtual keyboard is customized and printed on plain paper, so it can be placed on an incline or taped to the wall, and the complex movements of human hand gestures are captured using 3D hand models [6]. Depending on the gesture movement, the computer mouse or cursor may move and even perform right and left clicks with different gestures; the only hardware in that project is the webcam, and the coding is done in Python using the Anaconda platform [7].


With the use of CVzone and Mediapipe, an AI virtual keyboard with hand tracking lets users type on a keyboard without a physical input device, which is a novel and cutting-edge technological advancement. To transform the user's hand gestures into virtual keystrokes, the system tracks them using computer vision algorithms.

In comparison to conventional physical keyboards, this technology offers a number of advantages, including a touchless and hygienic interface that can be especially helpful in settings where hygienic conditions are crucial, such as hospitals and public places. It is also very flexible and has configurable gestures.

Depending on the particular use case and specifications, the AI virtual keyboard can be implemented using a variety of hardware configurations, such as single-camera or multi-camera setups. To precisely detect and track hand movements and convert them into virtual keystrokes, the system employs Mediapipe's hand tracking and landmark estimation models, as well as a hand gesture recognition algorithm.

III. PROPOSED METHODOLOGY

As a result of developments in computer vision and machine learning technology, the AI virtual keyboard uses hand tracking with CVzone and Mediapipe. Modern algorithms and models are used to quickly and accurately translate the user's hand movements into virtual keystrokes by properly detecting and tracking those movements in real time.

One of the main advantages of the AI virtual keyboard with hand tracking is its adaptability. The system can be altered to support a number of hand gestures and input methods, including finger tapping, finger swiping, and even hand motions for virtual reality or gaming applications. It can therefore be customized to satisfy a range of needs and use cases, making it a versatile and adaptable technology.

Any webcam-equipped device, whether a laptop, tablet, or smartphone, may be used with the system, which is also quite portable. It becomes a convenient and user-friendly mobile input technique as a result.

This modern technology offers various advantages over conventional input methods. The AI virtual keyboard that combines hand tracking with CVzone and Mediapipe provides a clean, touchless interface, is very customizable and versatile, and can be used with a variety of hardware configurations. The system has the potential to significantly modify how we interact with computers and other technology, making research and development in this area fascinating and productive.

Fig. 1. Hand Landmarks

IV. CVZONE

CVzone is an open-source computer vision library that makes a variety of computer vision tools and functions available to developers. The hand tracking module in CVzone is a well-liked tool that allows programmers to follow hand movements in real-time video streams using a combination of skin color segmentation and template matching approaches. This section gives a thorough description of the algorithm utilized in the CVzone hand tracking module.

Step 1: Segmenting skin tones

Skin color segmentation of the input video stream is the initial stage of the CVzone hand tracking algorithm. It locates the areas of the image that are most likely to contain a hand. The segmentation is carried out by transforming the input video frame from the RGB color space to the HSV color space; the areas of the image that match skin color are then found by examining the hue, saturation, and value (HSV) components of the picture. This is commonly accomplished by applying a threshold to the image's HSV components to separate the regions with skin tones.
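
A minimal OpenCV sketch of this step is shown below; the HSV bounds are illustrative assumptions, since workable skin-tone thresholds vary with lighting and camera.

    import cv2
    import numpy as np

    def segment_skin(frame_bgr):
        """Return a binary mask of likely skin regions in a BGR video frame."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # Illustrative skin-tone bounds in HSV; a real system tunes these per setup.
        lower = np.array([0, 40, 60], dtype=np.uint8)
        upper = np.array([25, 255, 255], dtype=np.uint8)
        return cv2.inRange(hsv, lower, upper)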

Step 2: Morphological operations

After the skin color regions have been detected, the next stage is to carry out morphological procedures to eliminate noise and fill in gaps in the segmented image. Morphological operations are a group of image processing methods that work on the structure and form of the image. Erosion and dilation are the morphological procedures most frequently utilized in hand tracking systems: small, isolated portions of the image are removed using erosion, and spaces between sections are filled by dilation.
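
A short sketch of this denoising step with OpenCV's erosion and dilation follows; the kernel size and iteration counts are assumptions to be tuned.

    import cv2
    import numpy as np

    def clean_mask(skin_mask):
        """Erode to remove isolated noise pixels, then dilate to fill gaps."""
        kernel = np.ones((5, 5), np.uint8)
        eroded = cv2.erode(skin_mask, kernel, iterations=1)
        return cv2.dilate(eroded, kernel, iterations=2)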

Step 3: Hand template matching

The next stage in the CVzone hand tracking algorithm compares the segmented image to a hand template to determine where the hand is. The hand template is a binary picture that represents the shape and organization of a hand. Using a correlation-based matching technique, the algorithm compares the template against various segments of the segmented image, looking for the region that most closely resembles the template. The location of the hand is established once a match is discovered.
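
A correlation-based matching sketch using OpenCV's matchTemplate is given below; the binary template file name is hypothetical.

    import cv2

    # Hypothetical binary hand template; its scale must suit the camera setup.
    template = cv2.imread("hand_template.png", cv2.IMREAD_GRAYSCALE)

    def locate_hand(skin_mask):
        """Return the top-left corner and score of the best template match."""
        result = cv2.matchTemplate(skin_mask, template, cv2.TM_CCORR_NORMED)
        _, best_score, _, best_loc = cv2.minMaxLoc(result)
        return best_loc, best_score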

Step 4: Monitoring hand motions

Tracking the hand's motion in succeeding video frames is the last step of the CVzone hand tracking algorithm. This is accomplished by combining Kalman filtering with optical flow techniques. Optical flow is a computer vision method that monitors the movement of pixels between subsequent video frames; Kalman filtering is a statistical method that infers an object's current position and speed from its prior position. By combining these methods, it is possible to determine where the hand is in each video frame and follow its movements over time.
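
Both ingredients are available in OpenCV; a condensed sketch with assumed parameter values (window size, noise covariance) follows.

    import cv2
    import numpy as np

    def flow_track(prev_gray, curr_gray, prev_pts):
        """Follow feature points between frames with Lucas-Kanade optical flow."""
        curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, curr_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
        return curr_pts[status.flatten() == 1]

    # Constant-velocity Kalman filter: state (x, y, vx, vy), measurement (x, y).
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3

    def smooth_position(x, y):
        """Fuse the measured hand position with the motion model's prediction."""
        kf.predict()
        est = kf.correct(np.array([[x], [y]], np.float32))
        return float(est[0, 0]), float(est[1, 0])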

In conclusion, the CVzone hand tracking module detects and tracks hand movements in real-time video streams by combining skin color segmentation, morphological operations, hand template matching, and tracking algorithms. With the help of this algorithm, programmers may create hand tracking applications for a variety of industries, such as gaming, virtual reality, and augmented reality.

V. MEDIAPIPE

Mediapipe is a well-known open-source library that offers an extensive selection of computer vision and machine learning techniques, including a hand tracking module. The Mediapipe hand tracking module detects and tracks hand movements in real-time video streams by combining Convolutional Neural Networks (CNNs) and geometric reasoning techniques. The algorithm employed by the Mediapipe hand tracking module is described below.

Step 1: Detection of the palm

The Mediapipe hand tracking algorithm starts by detecting the palm of the hand in the input video stream. This is accomplished using a CNN-based detector that has been trained to recognise the characteristics of the hand's palm. Because the CNN model was trained on a sizable dataset of hand photos, it is capable of reliably detecting the palm of the hand even in challenging lighting circumstances.

Step 2: Estimating hand landmarks

Once the palm has been identified, the locations of hand landmarks such as the fingertips and the base of the thumb are determined. This is accomplished using a second CNN model that has been trained to identify the landmarks based on the location of the palm. Even when the hand is partially obscured or moving, the landmark estimation algorithm is able to determine the locations of the landmarks with accuracy.
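
Both CNN stages are exposed together through Mediapipe's Python Hands API; a minimal sketch that prints the index fingertip position is shown below.

    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
    cap = cv2.VideoCapture(0)

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Mediapipe expects RGB input; OpenCV delivers BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            h, w, _ = frame.shape
            tip = lm[8]  # landmark 8 is the index fingertip
            print("index fingertip:", int(tip.x * w), int(tip.y * h))
        cv2.imshow("hand landmarks", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
            break
    cap.release()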

Step 3: Calculating the hand pose

After the hand landmarks have been determined, the hand's pose, that is, its orientation and location in three dimensions, is estimated. A geometric reasoning technique accomplishes this by fusing the known geometric structure of the hand with the estimated landmark positions. Even when the hand is in complex positions or orientations, the algorithm is still able to predict the hand pose accurately.
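
As one simple instance of such geometric reasoning (an illustrative calculation, not Mediapipe's internal pose model), the hand's in-plane orientation can be read off two landmarks, the wrist (index 0) and the middle-finger base (index 9).

    import math

    def hand_orientation_deg(landmarks):
        """In-plane hand angle from the wrist (0) to the middle-finger MCP (9)."""
        wrist, mid_mcp = landmarks[0], landmarks[9]
        return math.degrees(math.atan2(mid_mcp.y - wrist.y, mid_mcp.x - wrist.x))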

Step 4: Hand tracking

The last stage of the Mediapipe hand tracking algorithm tracks the hand across video frames. The hand pose in each frame is continuously estimated, and the estimated pose is used to track the hand's progress. The tracking algorithm can follow the hand accurately even when it is moving swiftly or changing orientation, and it can deal with occlusions and variations in lighting.

Finally, the Mediapipe hand tracking module detects and tracks hand motions in real-time video streams by combining CNN-based detection, landmark estimation, and tracking algorithms. In a variety of industries, such as gaming, virtual reality, and augmented reality, this algorithm gives programmers a potent set of tools for creating hand tracking applications.

VI. USER INTERFACE

The evolution of technology has changed the way we interact with our devices, and one notable advancement is the arrival of AI virtual keyboards. Powered by artificial intelligence, these virtual keyboards have revolutionized the text input user interface on a variety of devices, including smartphones, tablets, and computers. Here we explore the features and benefits of the AI virtual keyboard user interface.

The user interface of AI virtual keyboards is designed to be highly intuitive and user-friendly. These keyboards use the power of AI algorithms to analyze user input and provide accurate predictions, corrections, and suggestions in real time. The interface is typically minimalist, with a clean and elegant design that favors ease of use. The keys are usually large and well spaced, so users can comfortably type with precision even on smaller screens.

The keyboard layout, theme, and other visual elements are typically customizable by the user. With gesture typing, users may connect letters to form words by moving their finger across the keyboard. This feature makes it possible for users to type on larger devices like tablets using just one hand and without taking their finger off the screen.

In addition to the above features, the AI virtual keyboard user interface also favors accessibility. These keyboards often come with built-in accessibility options such as larger keys for users with motor disabilities and haptic feedback for users with visual impairments.

A webcam and Mediapipe hand tracking are used to create an easy-to-use user interface for a basic AI virtual keyboard used for typing. The interface, usually displayed on the user's screen, consists of a virtual keyboard that can be operated using hand motions detected by the webcam.

To use the virtual keyboard, the user moves their hand in front of the webcam, and Mediapipe's hand tracking technology recognises the hand's location and movements. Once the virtual keyboard is displayed on the screen, the user can control the cursor location and choose keys by moving their hand in the desired direction.
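
A condensed sketch of this interaction loop using CVzone's HandDetector (cvzone 1.5+ API) follows; the three-row key grid, the 100-pixel key size, and the pinch threshold are illustrative assumptions rather than the exact implementation.

    import cv2
    from cvzone.HandTrackingModule import HandDetector

    KEY_ROWS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]  # assumed layout
    detector = HandDetector(maxHands=1, detectionCon=0.8)
    cap = cv2.VideoCapture(0)
    typed = ""

    def key_at(x, y, size=100):
        """Map a fingertip pixel position onto the assumed key grid."""
        row, col = y // size, x // size
        if 0 <= row < len(KEY_ROWS) and 0 <= col < len(KEY_ROWS[row]):
            return KEY_ROWS[row][col]
        return None

    while True:
        ok, img = cap.read()
        if not ok:
            break
        hands, img = detector.findHands(img)
        if hands:
            lm = hands[0]["lmList"]  # 21 landmarks as [x, y, z]
            ix, iy = lm[8][0], lm[8][1]  # index fingertip hovers over a key
            # Index-middle pinch acts as the click gesture.
            dist, _, img = detector.findDistance(lm[8][:2], lm[12][:2], img)
            if dist < 35:
                key = key_at(ix, iy)
                if key:
                    typed += key  # a real system would debounce repeated clicks
        cv2.imshow("AI Virtual Keyboard", img)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()

Here the pinch distance stands in for the click gesture described above, and the grid mapping stands in for the keyboard overlay drawn on screen.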

The virtual keyboard can be altered to support various keyboard layouts and linguistic systems, making it usable across a wide range of nations and geographic areas. Many hand gestures and input modalities, including finger tapping, finger swiping, and even hand movements for virtual reality or gaming apps, can be supported by the interface with bespoke customization. The user is free to select the input technique that feels most natural and comfortable.

Essentially, the user interface of the Mediapipe-based AI virtual keyboard is designed to be easy to use and relies on a webcam and hand tracking. Users may type on a virtual keyboard without a physical input device thanks to its touchless, hygienic interface, which is highly customizable and adjustable to satisfy a variety of use cases and needs.

VII. CONCLUSION

This study proposes a virtual keyboard that recognises hand gestures and takes the place of a physical keyboard. Using this keyboard, it is possible to print alphabets and trigger other capabilities through gestures. The skin segmentation technique is used to isolate the hand's color and picture from the background, and the problem of the full body being captured by the camera can be resolved using the remove-arm technique. The suggested approach has the ability to detect and understand hand gestures, allowing it to control keyboard functionalities and produce a real-world user interface. This project can be simply implemented and used in a variety of fields where calculation is necessary.

ACKNOWLEDGMENT

The Department of Computer Science and Engineering at Mar Baselios Christian College of Engineering and Technology supported our study and the creation of our project in relation to this problem statement.

REFERENCES

[1] Tae-Ho Lee, Sunwoong Kim, Taehyun Kim, Jin-Sung Kim, and Hyuk-Jae Lee, Virtual Keyboards With Real-Time and Robust Deep Learning-Based Gesture Recognition, IEEE Transactions on Human-Machine Systems, vol. 52, no. 4, pp. 725-735, Aug. 2022, doi:10.1109/thms.2022.3165165.

[2] Časlav Livada, Miro Proleta, Krešimir Romić, Hrvoje Leventić, Beyond the Touch: A Web Camera Based Virtual Keyboard, International Symposium ELMAR, Sep. 2017, doi:10.23919/elmar.2017.8124432.

[3] Tae-Ho Lee, Hyuk-Jae Lee, Ambidextrous Virtual Keyboard Design with Finger Gesture Recognition, International Symposium on Circuits and Systems, May. 2018, doi:10.1109/iscas.2018.8351485.

[4] Alexandre Henzen, Percy Nohama, Adaptable Virtual Keyboard and Mouse for People with Special Needs, Future Technologies Conference, Dec. 2016, doi:10.1109/ftc.2016.7821782.

[5] Rishikesh Kumar, Poonam Chaudhary, User Defined Custom Virtual Keyboard, International Conference on Information Science (ICIS), Aug. 2016, doi:10.1109/infosci.2016.7845293.

[6] Y. Zhang, W. Yan and A. Narayanan, A Virtual Keyboard Implementation Based on Finger Recognition, Image and Vision Computing New Zealand, Dec. 2017, doi:10.1109/ivcnz.2017.8402452.

[7] Sugnik Roy Chowdhury, Sumit Pathak, M.D. Anto Praveena, Gesture Recognition Based Virtual Mouse and Keyboard, International Conference on Trends in Electronics and Informatics (ICOEI), June 2020, doi:10.1109/icoei48184.2020.9143016.

[8] Tim Menzner,Alexander Otte, Travis Gesslein, Philipp Gagel, Daniel Schneider, Jens Grubert, A Capacitive-sensing Physical Keyboard for VR Text Entry, IEEE Virtual Reality Conference, March. 2019, doi:10.1109/vr.2019.8797754.

[9] Amal Alshahrani, Mohammed Basheri, Personalized Virtual Keyboard for Multi-Touch Tables, 2019 2nd International Conference on Computer Applications & Information Security (ICCAIS), May. 2019, doi:10.1109/cais.2019.8769528.

[10] Chinnam Datta Sai Nikhil, Chukka Uma Someswara Rao, E.Brumancia, K.Indira, T.Anandhi, P.Ajitha, Finger Recognition and Gesture based Virtual Keyboard, International Conference on Communication and Electronics Systems, June. 2020, doi:10.1109/icces48766.2020.9137889.

[11] Tushar Naykodi, Abhilash Patil, Kiran Sase, Sitesh Singh, Reconfigurable Virtual Keyboard Based on Image Processing, International Journal of Engineering Research and Technology, April 2018.

[12] Tomas Bravenec, Tomas Fryza, Multiplatform System for Hand Gesture Recognition, International Symposium on Signal Processing and Information Technology, Dec. 2019, doi:10.1109/isspit47144.2019.9001762.

[13] Daewoong Choi, Hyeonjoong Cho, Kyeongeun Seo, Sangyub Lee, Jae-Kyu Lee, Jae-Jin Ko, Designing Hand Pose Aware Virtual Keyboard With Hand Drift Tolerance, IEEE Access, July 2019, doi:10.1109/access.2019.2929310.