Human Computer Interaction using Hand Gesture Recognition

DOI : 10.17577/IJERTV3IS040341


Mayur V. Gore, Dr. Anil R. Karwankar

Department of Electronics, Government College of Engineering, Aurangabad

Abstract - Operating a computer in a virtual environment has attracted increasing attention in recent years. This paper describes a method for operating a computer in real time using hand gestures. Three main applications are implemented: mouse operation using hand gestures, media player control, and shortcut creation using static hand gestures. In static gesture recognition, each gesture is assigned to a specific application, such as opening a Word file or opening the Control Panel, and gestures are recognized using principal component analysis (PCA). For mouse and media player control, two algorithms are proposed to extract features from the region of interest. Hand gestures offer an effective alternative to other interfacing devices for operating a computer or laptop.

Keywords: Human computer interaction (HCI), principal component analysis (PCA), recognition

INTRODUCTION

Gesture is the oldest method of human communication, and it is now time to use gestures to communicate with electronic devices. A gesture is simply a movement of some part of the body. Computers and laptops have long been operated through interfacing devices such as the keyboard, mouse, and joystick, and speech recognition is now a common way to operate a computer with voice commands. Operating a computer in real time using gestures remains a challenging task. Various techniques have been used to extract gesture information, for example from the face region or the hand region. Bui and Nguyen [1] used hand gloves to recognize gestures: MEMS accelerometers and sensors mounted on the gloves recognize gestures from joint angles and the distances between fingers. However, this approach suffers from hardware requirements and energy consumption, and working with wires attached to the gloves is inconvenient. Vision-based gesture recognition is another important and popular approach, and various algorithms have been implemented with it. Gupta and Agarwal [2] used colored caps on the fingers to perform mouse operations via color recognition, but this forces users to wear the caps every time. Oikawa and Takahashi [9] performed mouse operations based on finger position on the keyboard: clicks are triggered by the position of the thumb, and different modes are assigned to fingertip positions. Pansare et al. [3] proposed hand gesture extraction using an edge-oriented histogram and bilinear interpolation, which faces the problems of eliminating skin-colored blobs and edges in the background. Davis and Shah [4] proposed tracking the fingertips and analyzing their trajectories to recognize gestures. Nachamai [5] used the SIFT algorithm for gesture recognition, which handles the illumination, rotation, and scale variations found in images. For static hand recognition, Angel and Neethu [6] used a k-curvature algorithm that finds the peaks and valleys of the palm region; its limitation is that the number of recognizable gestures is small. Nazrul H. Adnan et al. [7] proposed data-glove-based finger movement tracking using PCA, in which the resistance increase caused by finger bending is sensed by a flex sensor and the sensor output is fed to MATLAB, where PCA clusters the finger positions. This system, too, requires external hardware.

In our system, the algorithm used to perform mouse operations is based on extracting the color of the palm region. Color information is extracted from the first three frames and used for all further processing. We have tested each mouse click operation separately, and the same method is used for media player control. Static hand gestures are recognized using principal component analysis, a primary technique for image recognition that is also used for data compression. PCA expresses the directions in which the data change and the amount of change along each direction through eigenvectors and eigenvalues. The remainder of this paper is divided into three parts, covering mouse operation, media player operation, and static gesture commands.

MOUSE OPERATION USING HAND GESTURE

This operation is divided into two parts:

A] Hand detection and tracking

B] Command to operating system

A] Hand Detection & Tracking

To use hand gestures, the first step is to extract hand region information from the image frame. Many vision-based methods exist for this, such as using a depth camera, in which the object nearest to the camera, i.e., the hand region, can be extracted. Skin color detection is another common technique for recognizing hand gestures, but with it, removing the face region and other skin-colored background objects is a challenging task. This paper uses a very simple and effective technique to extract the hand region information. A camera with a resolution of 320×240 pixels is used. For the first three frames, a 40×40 pixel square is drawn on the preview window, and the user keeps the hand positioned so that the hand region falls inside the square. The square is then cropped from the third frame to obtain the color information of the hand. Fig. 1 shows the first two frames, in which the 40×40 square is drawn on the preview window, and Fig. 2 shows the cropped square containing the hand region information.

Figure 1: First two frames

Figure 2: Third frame & feature color extraction

Find the red, green, and blue components of the cropped square area, calculate the average value of each component, and define a tolerance band around each average:

Rmax = Ravg + 15, Rmin = Ravg - 15
Gmax = Gavg + 15, Gmin = Gavg - 15
Bmax = Bavg + 15, Bmin = Bavg - 15

These R, G, B limits are used to filter the following frames: in each new frame, the pixels whose values fall between the minimum and maximum limits of the R, G, and B components are extracted. The pixels obtained from this filtering represent the hand color region in that frame and are assigned the value 1. The total number of pixels with value 1 is counted, and the mouse cursor moves according to the centroid of the hand. The centroid is found by averaging the x and y coordinates of the obtained pixels:

Xavg = (Σ X) / N
Yavg = (Σ Y) / N

where X and Y are the pixel positions on the x and y coordinates and N is the total number of pixels having value 1.

The average x and y values give the position of the hand in the 320×240 figure window. The cursor must be positioned on the screen of the computer/laptop, which at 1366×768 is roughly three times the size of the figure window, so the coordinates are scaled. The cursor position on the main screen is obtained from the following equations:

X = X1 - (X1 / L) · Xavg
Y = Y1 - (Y1 / K) · Yavg

where X1 and Y1 are the sizes of the main screen along the x and y axes, and L and K are the sizes of the figure window along the x and y axes. The resulting X and Y are the cursor coordinates on the main screen.

For each frame, the number of palm region pixels is counted and compared with the count in the previous frame. When the number of pixels falls, i.e., when the palm is bent, a command to perform an event is generated; the event can be a single click, a double click, or wheel scrolling. This procedure is repeated for every frame, and the corresponding command is sent to the operating system. To perform a click, the palm is bent so that the number of skin-colored pixels is reduced and the click event is generated. Fig. 3 shows the complete dotted hand region, and Fig. 4 shows the reduced skin pixels when the hand is bent.

Figure 3: Palm region pixels (4th frame)

Figure 4: Reduced skin color pixels in next frame
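To make this per-frame processing concrete, the following MATLAB sketch applies the ±15 tolerance band, computes the palm centroid, and maps it to screen coordinates. It is a minimal sketch, assuming the calibration limits (Rmin...Bmax) have already been computed from the third frame and that frames come from an Image Acquisition Toolbox videoinput object; the variable names are illustrative, not the authors' exact code.

% Per-frame hand tracking sketch (illustrative; acquisition setup and
% variable names are assumptions, not the authors' exact code).
frame = getsnapshot(vid);            % vid: a 320x240 RGB videoinput object

% Tolerance bands learned from the 40x40 calibration square:
% each channel average +/- 15 (Rmin..Rmax, Gmin..Gmax, Bmin..Bmax).
R = frame(:,:,1); G = frame(:,:,2); B = frame(:,:,3);
mask = R >= Rmin & R <= Rmax & ...
       G >= Gmin & G <= Gmax & ...
       B >= Bmin & B <= Bmax;        % 1 = hand-colored pixel

[y, x] = find(mask);                 % coordinates of hand-colored pixels
N = numel(x);                        % total number of pixels with value 1
if N > 0
    Xavg = sum(x) / N;               % centroid of the palm region
    Yavg = sum(y) / N;

    % Map the centroid from the L x K figure window to the X1 x Y1 screen.
    X1 = 1366; Y1 = 768; L = 320; K = 240;
    X = round(X1 - (X1 / L) * Xavg);
    Y = round(Y1 - (Y1 / K) * Yavg);
end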

B] Command to Operating System

To move the cursor to the (X, Y) coordinate, the java.awt.Robot class is used. This class generates system events to control the mouse and keyboard and to drive test automation. AWT (Abstract Window Toolkit) provides the interface between Java and mouse events, and MATLAB imports the java.awt.Robot class to perform the mouse operations. For mouse control, the class uses three input-event masks:

java.awt.event.InputEvent.BUTTON1_MASK
java.awt.event.InputEvent.BUTTON2_MASK
java.awt.event.InputEvent.BUTTON3_MASK

BUTTON1_MASK is used for the left button, BUTTON2_MASK for the middle button (wheel), and BUTTON3_MASK for the right button. A single mouse click uses the following Java instructions:

robot.mousePress(java.awt.event.InputEvent.BUTTON1_MASK)
robot.mouseRelease(java.awt.event.InputEvent.BUTTON1_MASK)

A double click repeats the press/release pair:

robot.mousePress(java.awt.event.InputEvent.BUTTON1_MASK)
robot.mouseRelease(java.awt.event.InputEvent.BUTTON1_MASK)
robot.mousePress(java.awt.event.InputEvent.BUTTON1_MASK)
robot.mouseRelease(java.awt.event.InputEvent.BUTTON1_MASK)

Wheel scrolling uses the Robot class's mouseWheel method, which takes the number of notches to scroll:

robot.mouseWheel(wheelAmt)
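Collected into MATLAB, the event generation might look like the sketch below. The Robot calls are the standard java.awt.Robot API; the surrounding variable names and the scroll amount are illustrative assumptions.

% MATLAB sketch of cursor movement and click generation via
% java.awt.Robot (standard Java API; variable names illustrative).
import java.awt.Robot
import java.awt.event.InputEvent

robot = Robot();
robot.mouseMove(X, Y);               % X, Y from the screen mapping above

% Single left click: one press/release pair on BUTTON1.
robot.mousePress(InputEvent.BUTTON1_MASK);
robot.mouseRelease(InputEvent.BUTTON1_MASK);

% Wheel scroll: three notches toward the user (negative scrolls away).
robot.mouseWheel(3);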

Fig. 5 shows the flowchart of the mouse operation.

Figure 5: Flowchart of system

MEDIA PLAYER CONTROLLING USING HAND GESTURE

In media player control, two operations are performed: playing a song and stopping it, according to the recognized hand gesture. Moving the hand from left to right plays a song in the media player; the opposite motion is assigned to the stop operation. The operation extracts the hand region pixels from the current frame exactly as in the mouse operation and computes the average X and Y coordinates. If the X coordinate of the palm region in the current frame is greater than its value in the previous frame, the control action to play the song is generated.

The multimedia player is controlled from MATLAB through an ActiveX control, which embeds Windows Media Player in the current figure window, i.e., it provides the media player GUI inside the figure window and controls it as instructed. MATLAB programs can be written to control the media player, and these .m files can be attached to the ActiveX control as callbacks for the controlling actions. The ActiveX control also allows adjusting the size of the media player, selecting the starting point of the media player window, and creating custom events by attaching callback MATLAB files. The ActiveX selection panel and control window are shown in Fig. 6 and Fig. 7.
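A minimal sketch of how the embedded player could be driven from MATLAB is shown below. The WMPlayer.OCX progid and the URL/controls.play/controls.stop members are standard Windows Media Player COM identifiers; the file name, layout values, and gesture variables are assumptions.

% Sketch: embed Windows Media Player via ActiveX and drive it from the
% gesture loop (file name, layout, and gesture variables are assumed).
fig = figure('Name', 'Gesture-controlled player');
wmp = actxcontrol('WMPlayer.OCX', [10 10 400 300], fig);
wmp.URL = 'song.mp3';                % hypothetical media file

% Gesture decision: palm centroid moved right since the previous
% frame -> play; moved left -> stop.
if Xavg > XavgPrev
    wmp.controls.play();
else
    wmp.controls.stop();
end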

Figure 6: ActiveX control panel

Figure 7: ActiveX parameter control window

OPERATING COMPUTER USING STATIC HAND GESTURE

We have used static hand gestures to operate the computer. Each gesture is assigned to a specific application: recognizing the first static gesture opens a Word file, the second opens the Control Panel, and the third opens the Calculator. Gestures are recognized using the principal component analysis (PCA) technique; Kyungnam Kim [8] used PCA for face recognition. PCA identifies the principal directions in which the data change. The approach is pixel-oriented, so it is sensitive to the position, orientation, and size of the hand region in the image. PCA captures the differences between two or more data sets: the directions of change are represented by the eigenvectors and the amounts of change by the eigenvalues. The eigenvector with the largest eigenvalue is the direction of greatest variation, and the one with the second-largest eigenvalue is the direction of the next-highest variation.

Hand gesture recognition using principal component analysis is divided into three parts:

1] Training of database images

2] Testing of new input image

3] Command to operating system

1] TRAINING OF DATABASE IMAGES

For training, a database of different gestures is maintained containing four images of each gesture, each with a resolution of 320×240. Each image has a slightly different orientation of the gesture, to increase accuracy at testing time.

Figure 8: Gestures used

The training stage includes the following steps.

1A. All database images are two-dimensional (2D). The first step in principal component analysis is to convert each 2D image into a 1D vector, obtained by concatenating the columns of the 320×240 image, so that a single column vector represents one image. Our database contains 12 images of different gestures, so we obtain a 2D matrix with 12 columns, each column representing one image.

1B. In the second stage, the mean of the 2D matrix is calculated row-wise, giving one column vector that represents the mean of the 12 database images. Let Xp be the column vector of image p and let the 2D matrix have size M × P (here P = 12):

Xmean = (1/P) · Σ Xp, where p = 1, 2, ..., P

1C. Subtract this mean from the 2D matrix obtained in the first stage, again row-wise, and call the resulting matrix A. The covariance matrix is calculated from A:

Covariance matrix = A · A'

Covariance gives the amount by which two variables vary together; it is positive if they vary in the same direction and negative otherwise. From the covariance matrix, the eigenvectors and eigenvalues are obtained.

Figure 9: Eigenvectors

Figure 10: Eigenvalues of database images

The eigenvector matrix contains many values that are not useful, as shown in Fig. 9. Small values are removed from the eigenvector matrix to increase accuracy: the N eigenvectors with the largest eigenvalues are selected. Multiplying these eigenvectors with the mean-subtracted database images gives a matrix of weights, one weight vector per image, which is used during the testing stage. Gestures are recognized by comparing the weight of the input image with the weights of the database images; the minimum distance between weights indicates the correct match.

Figure 11: Weights of training images
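To make steps 1A-1C concrete, the following MATLAB sketch trains the PCA model under stated assumptions: the image file names and the number of retained eigenvectors are illustrative, and it diagonalizes the small P × P matrix A'·A (the standard tractable equivalent of the 76800×76800 covariance A·A') rather than A·A' itself.

% PCA training sketch for steps 1A-1C (file names, N, and the small
% eigenproblem trick are assumptions, not the authors' exact code).
P = 12;                                  % 12 database images, 320x240 each
D = zeros(320*240, P);
for p = 1:P
    img = im2double(imread(sprintf('gesture%02d.png', p)));
    if ndims(img) == 3, img = rgb2gray(img); end
    D(:, p) = img(:);                    % 1A: columns stacked into a 1D vector
end

m = mean(D, 2);                          % 1B: row-wise mean column vector
A = D - repmat(m, 1, P);                 % 1C: mean-subtracted matrix

% The text forms Cov = A*A' (76800x76800); diagonalizing the small
% P x P matrix A'*A and mapping back yields the same leading eigenvectors.
[V, Lmb] = eig(A' * A);
[ev, idx] = sort(diag(Lmb), 'descend');

N = 5;                                   % number of eigenvectors kept (assumed)
U = A * V(:, idx(1:N));                  % eigenvectors of A*A'
U = U ./ repmat(sqrt(sum(U.^2, 1)), size(U, 1), 1);   % unit-length columns

W = U' * A;                              % N x P weight matrix (cf. Fig. 11)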

2] TESTING OF INPUT IMAGE

Testing a new input gesture includes the following steps.

2A. Convert the 2D input image into a 1D vector by concatenating each column of the image, and subtract the database mean from it, as in training.

2B. Multiply the result by the N highest-ranked eigenvectors obtained during training, which gives the weight vector of the input image. The weights of the database images were already computed in the training stage; find the Euclidean distance between the weight of the input image and the weight of each database image. The minimum Euclidean distance indicates the correct match for the input gesture.

2C. If the obtained distance for the input gesture is greater than a threshold value, the hand gesture is declared undefined.
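Continuing the training sketch, the testing stage might look as follows; the input file name and the rejection threshold are assumed values.

% PCA testing sketch for steps 2A-2C (input file name and threshold
% are assumptions; U, m, W come from the training sketch above).
img = im2double(imread('input_gesture.png'));
if ndims(img) == 3, img = rgb2gray(img); end
x = img(:);                              % 2A: 2D image to 1D vector

w = U' * (x - m);                        % 2B: project onto kept eigenvectors

% Euclidean distance from the input weight to every training weight.
d = sqrt(sum((W - repmat(w, 1, size(W, 2))).^2, 1));
[dmin, best] = min(d);                   % minimum distance = best match

thresh = 1e4;                            % 2C: assumed rejection threshold
if dmin > thresh
    disp('Gesture not defined');
else
    fprintf('Matched database gesture image %d\n', best);
end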

3] COMMAND TO OPERATING SYSTEM

The next step after gesture recognition is sending a command to the operating system to open the corresponding application, such as a Word file, Notepad, the Control Panel, or the Calculator. We have generated a batch file for each operation, which can be invoked from MATLAB; DOS commands can also be used to launch these applications. Each application is assigned to one gesture, and the command to the operating system is issued once the gesture is recognized. Fig. 12 shows static gesture recognition; the assigned application opens after the gesture is recognized.

Figure 12: Hand gesture recognition
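A small sketch of this dispatch step is given below, assuming the PCA match index from the testing sketch and four database images per gesture; the batch-file name is a hypothetical placeholder, while control and calc are standard Windows commands.

% Dispatch sketch: launch the application assigned to the recognized
% gesture ('best' is the matched database image index from testing).
gestureId = ceil(best / 4);          % four database images per gesture
switch gestureId
    case 1
        system('open_word.bat &');   % hypothetical batch file opening Word
    case 2
        system('control');           % Windows command: opens Control Panel
    case 3
        system('calc &');            % Windows command: opens Calculator
end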

Gestures          Number of database images   Correct recognition   Accuracy percentage
First Gesture     20                          19                    95%
Second Gesture    20                          19                    95%
Third Gesture     20                          20                    100%

Table 1: Testing results for different gestures

CONCLUSION

The system presented in this paper for mouse operation, media player control, and static-gesture-based computer operation was implemented successfully in MATLAB 2009. In the mouse operation, each mouse click event (left click, right click, and scrolling) is currently performed separately; grouping the three operations into a single program is future work. For static-gesture-based operation we used a black background to increase accuracy. We also tested static gestures without the black background and obtained correct recognition, provided that the background of the database images and the background of the input image at testing time are the same. With an identical background, skin-colored regions such as the face and other body parts pose no problem. The testing results for the different gestures are shown in Table 1.

REFERENCES

  1. D. Bui and L. T. Nguyen. Recognizing postures in Vietnamese sign language with MEMS accelerometers. IEEE Sensors Journal, vol. 7, no. 5, pp. 707-712, 2007.

  2. P. Agrawal and K. Gupta. Mouse movement through finger by image grabbing using Sixth Sense technology. International Journal of Engineering Science and Advanced Technology, vol. 2, no. 2, pp. 245-249, 2012.

  3. J. Pansare, H. Dhuma, S. Babar, K. Sonawane and A. Sarode. Real-time static hand gesture recognition system in complex background that uses number system of Indian Sign Language. International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), vol. 2, no. 3, March 2013.

  4. J. Davis and M. Shah. Visual gesture recognition. IEE Proceedings - Vision, Image and Signal Processing, vol. 141, pp. 101-105, 1994.

  5. M. Nachamai. Alphabet recognition of American Sign Language: a hand gesture recognition approach using SIFT algorithm. International Journal of Artificial Intelligence & Applications (IJAIA), vol. 4, no. 1, pp. 105-115, January 2013.

  6. Angel and P. S. Neethu. Real-time static and dynamic hand gesture recognition. International Journal of Scientific & Engineering Research, vol. 4, no. 3, March 2013.

  7. N. H. Adnan, K. Wan, Shariman A. B., Juliana A. A. Bakar and A. A. Aziz. PCA-based finger movement and grasping classification using data glove "Glove MAP". International Journal of Innovative Technology and Exploring Engineering (IJITEE), vol. 2, no. 3, February 2013.

  8. K. Kim. Face recognition using principal component analysis.

  9. H. Oikawa and M. Takahashi. Pointing system using fingers on keyboard. SICE Annual Conference, 2008.
