Mouse Control Via Live Eye Gaze Tracking

DOI : 10.17577/IJERTCONV4IS17002


Najla P R

Department of Electronics and Communication, KMCT College of Engineering,

Calicut, India

Jayasree T C

Department of Electronics and Communication, KMCT College of Engineering,

Calicut, India

Abstract— The field of Human-Computer Interaction (HCI) has witnessed tremendous growth in the past decade. The invention of tablet PCs and smartphones allowing touch-based control has been warmly welcomed. Researchers in this field have also explored the potential of eye gaze as a possible means of interaction. Some commercial solutions have already been launched, but they are expensive and offer limited usability. This paper presents a low-cost, real-time system for eye-gaze tracking and a human-computer interaction application based on it. Human eyes carry much information that can be extracted and used in many applications. Eye gaze reflects a person's point of interest, so it is possible to infer what people are thinking about from where they are looking. Eye gaze tracking aims to keep track of the human eye gaze. Eye movements can also be captured and used as control signals, enabling people to interact with interfaces directly without the need for mouse or keyboard input. This can be achieved by employing computer vision and image processing algorithms. In the proposed method, the human face is first tracked in a real-time video sequence captured by a webcam and the eye regions are extracted. Facial feature characteristics are then used to analyze the eye region and obtain the gaze point. Finally, the cursor on the monitor screen is moved according to each gaze point.

Keywords— Desktop environment, web camera, point of gaze, head pose estimation, calibration

  1. INTRODUCTION

    Nowadays personal computer systems play a huge role in our everyday lives, in areas such as work, education and entertainment. What all these applications have in common is that the use of personal computers is mostly based on input via mouse and keyboard. While this is not a problem for a healthy individual, it may cause discomfort for people with limited freedom of movement of their limbs, such as people with disabilities or people suffering from brainstem strokes and other handicaps. In these cases it would be preferable to use input methods that are based on motor abilities of the head region, such as head or eye movements.

    To provide such alternative input methods, a system was developed that follows a low-cost approach to controlling a mouse cursor on a computer system. It consists of an eye tracker and a head tracker, both attached to a head mount, for locating the gaze point corresponding to the eye movement. The eye tracker is based on images recorded by a modified webcam to acquire the eye movements. These eye movements are then mapped to a computer screen to position a mouse cursor accordingly.

    The face is the index of the mind and the eyes are the window to the soul. Eye movements provide a rich and informative window into a person's thoughts and intentions, so the study of eye movements may determine what people are thinking based on where they are looking. Eye tracking is the measurement of eye movement or activity, and gaze (point of regard) tracking is the analysis of eye tracking data with respect to the head or visual scene. Eye tracking is mostly used in applications such as drowsiness detection, diagnosis of various clinical conditions, and iris recognition, and it also helps physically disabled people to use the computer without any external help.

    Eye tracking is the process of measuring either the point of gaze, that is, where one is looking, or the motion of an eye relative to the head. An eye tracker is a hardware device for measuring eye positions and eye movement. Eye trackers are used in research on the visual system, in psychology, in marketing, as an input device for human-computer interaction, and in product design. There are a number of methods for measuring eye movement; the most popular uses video images from which the eye position is extracted.

  2. RELATED STUDY

    There are several methods to track the motion of the eyes. The most direct method is the fixation of a sensor to the eye. The fixation of small levers to the eyeball belongs to this category, but is not recommended because of high risk of injuries. A safer way of applying sensors to the eyes is using contact lenses. The big advantage of such a method is the high accuracy and the nearly unlimited resolution in time. For this reason, medical and psychological research uses this method.

    Another method is electrooculography (EOG), in which sensors attached to the skin around the eyes measure an electric field. Originally, it was believed that the sensors measure the electric potential of the eye muscles; it turned out that what is measured is the electric field of the eye itself, which is an electric dipole. The method is sensitive to electromagnetic interference, but it works well, as the technology is mature and has existed for a long time. The main advantage of the method is its ability to detect eye movements even when the eye is closed, e.g. while sleeping.

    Both methods explained so far are obtrusive and are not well suited for interaction by gaze. The last and preferred method for eye-gaze interaction is video. The central part of this method is a video camera connected to a computer for real-time image processing. The image processing takes the pictures delivered by the camera and detects the eye and the pupil to calculate the gaze direction. The main advantage of video-based eye tracking is its unobtrusiveness.

    Consequently, it is the method of choice for building eye-gaze interfaces for human-computer interaction, especially for physically disabled people who control the mouse with the movement of their eyes.

    1. Video-Based Eye Tracking: The task of a video-based eye tracker is to estimate the direction of gaze from the image delivered by a video camera. All video-based eye-tracking methods need the detection of the iris center in the camera image. This is a task for image recognition, typically edge detection, to estimate the elliptical contour of the pupil. As the cornea has a nearly perfect spherical shape, a glint stays in the same position for any direction of gaze while the pupil moves. There are eye trackers which also track the rotational movement of the eye, but such systems are not very common. A nice application for such eye trackers is a head camera controlled by the eye movements, using the motion stabilization of the eye to get a motion-stabilized camera image.

    2. Types of Video-Based Eye Trackers: The most common mechanical setup is a stationary eye tracker. Such systems are commercially available as laboratory tools and are typically used in medical or marketing research. These systems comprise a desktop computer with an integrated eye tracker and a software package for analyzing and visualizing eye gaze data.

    To accomplish the task of gaze tracking, a number of approaches have been proposed. The majority of early gaze tracking techniques utilize intrusive devices such as contact lenses and electrodes, which require physical contact with the users and cause some discomfort. Tracking the gaze with a head-mounted device such as headgear is less intrusive, but it is inconvenient from a practical viewpoint. In contrast, video-based gaze tracking techniques, which provide an effective nonintrusive solution, are more appropriate for daily usage.

    The video-based gaze approaches commonly use two types of imaging: infrared imaging and visible imaging. The former needs infrared cameras and infrared light sources to capture infrared images, while the latter usually utilizes high-resolution cameras. Infrared-imaging techniques utilize invisible infrared light sources to obtain controlled lighting and a better-contrast image. Eye trackers are the devices used to track the eye movement and the position of the eye. There are mainly two types of eye trackers: devices that are connected to the human body, and devices that have no contact with the body.

  3. PROPOSED METHOD

    Hardware-based equipment used for eye gaze tracking causes discomfort to people who are physically disabled, because some devices need physical contact with the user. Therefore, a method has been proposed to develop an eye gaze tracking system that helps physically disabled people interact with a computer with better accuracy, so that they can operate the mouse effectively, access the internet and e-mail, play games, and so on. The basic block diagram for this mouse control is shown in the figure below. First, real-time video of the person sitting in front of the camera is captured. Each frame is then converted from RGB to gray. The most notable gaze features in the face image are the iris center and the eye corner. The eyeball moves in the eye socket when looking at different positions on the screen.

    Fig.1. Basic block diagram

    The eye corner can be viewed as a reference point, while the iris center changes its position in the eyeball, indicating the eye gaze. Therefore, the gaze vector formed by the eye corner and the iris center contains the information of gaze direction, which can be used for gaze tracking. However, the gaze vector is sensitive to head movement and produces a gaze error when the head moves. Therefore, the head pose should be estimated to compensate for the head movement.

    The three-phase feature-based eye gaze tracking approach uses eye features and head pose information to enhance the accuracy of the gaze point estimation, as shown in Fig. 2. In Phase 1, the eye region that contains the eye movement information is extracted; the iris center and eye corner are then detected to form the eye vector. Phase 2 obtains the parameters of the mapping function, which describes the relationship between the eye vector and the gaze point on the screen. In Phases 1 and 2, a calibration process computes the mapping from the eye vector to the coordinates of the monitor screen. Phase 3 entails the head pose estimation and gaze point mapping; it combines the eye vector and head pose information to obtain the gaze point. The block diagram of the proposed method is shown below. It consists of three phases that operate together to obtain the final gaze point.

    Fig.2. Three-phase feature-based eye gaze tracking method.

    The figure above shows the proposed method for obtaining the gaze point on the monitor screen. It has three phases based on the feature characteristics of the eye, namely the iris center and the eye corner. The inner eye corner is taken as the reference because it is insensitive to eye movement. Head movement also introduces errors into the gaze point, so head pose estimation is taken into consideration to avoid this problem.

    Algorithm 1: Eye Gaze Tracking System

    Initialization:
    - Extract facial features using ASM
    - Estimate the head pose
    - Get the calibration mapping function
    Tracking the gaze through all the frames:
    Input: Video from camera
    Step 2: Extract the eye region
    Step 3: Detect the iris center p_iris
    Step 4: Detect the eye inner corner p_corner
    Step 5: Eye vector is obtained: g = p_corner − p_iris
    Step 6: Get static gaze point (u_x, u_y) by mapping function
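    To make Algorithm 1 concrete, the following is a minimal per-frame skeleton in Python with OpenCV. It is an illustrative sketch, not the authors' implementation: the helper functions extract_eye_region, detect_iris_center, detect_eye_corner and map_gaze are assumed to be supplied by the caller (sketches for each appear in the later subsections), and only the overall loop structure follows the algorithm.

    import cv2

    def track_gaze(extract_eye_region, detect_iris_center, detect_eye_corner, map_gaze):
        """Per-frame loop of Algorithm 1 (illustrative skeleton; helpers supplied by caller)."""
        cap = cv2.VideoCapture(0)                              # input: video from camera
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # RGB -> gray
                eye = extract_eye_region(gray)                  # Step 2: eye region
                if eye is not None:
                    p_iris = detect_iris_center(eye)            # Step 3: iris center
                    p_corner = detect_eye_corner(eye)           # Step 4: inner eye corner
                    g = (p_corner[0] - p_iris[0],
                         p_corner[1] - p_iris[1])               # Step 5: eye vector g
                    ux, uy = map_gaze(g)                        # Step 6: static gaze point
                    print("gaze point:", ux, uy)
                cv2.imshow("frame", frame)
                if cv2.waitKey(1) == 27:                        # press Esc to stop
                    break
        finally:
            cap.release()
            cv2.destroyAllWindows()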

    1. Eye region detection

      To obtain the eye vector, the eye region should be located first. Traditional face detection approaches cannot provide accurate eye region information under uncontrolled lighting and free head movement. Therefore, an efficient approach is needed to handle the illumination and pose problems. Here, a two-stage method is presented to detect the eye region. In the first stage, locality sensitive histograms are utilized to cope with varying lighting. Compared with normal intensity histograms, locality sensitive histograms embed spatial information and decline exponentially with respect to the distance to the pixel location where the histogram is calculated. Examples of their use are shown in Fig. 3, in which three images with different illuminations have been transformed into images with consistent illumination.

      Fig.3. (a) Input images. (b) Results using locality sensitive histograms.

      A locality sensitive histogram operates in a way similar to a conventional image histogram. However, instead of counting the frequency of occurrences of each intensity value by adding ones to the corresponding bin, a floating-point value is added to the corresponding bin for each occurrence of an intensity value. The floating-point value declines exponentially with respect to the distance to the pixel location where the locality sensitive histogram is computed. The histogram is therefore well suited to applications such as visual tracking for cursor control, which assign lower weights to pixels further away from the target center.

      In the second stage, the active shape model (ASM) is used to extract facial features from the image, through which the illumination changes are eliminated. Active Shape Models (ASMs) aim at automatically locating landmark points that define the shape of any statistically modelled object in an image. When modeling faces, the landmark points of interest lie along the shape boundaries of facial features such as the eyes, lips, nose, mouth and eyebrows. At the testing stage, the Viola-Jones face detector is used to locate the face in an image. Once the face has been detected, the mean face is scaled, rotated and translated using a similarity transform to fit roughly on top of the face in the test image. A simplified eye-region extraction sketch is given below.
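      As a lightweight stand-in for the ASM-based eye-region extraction described above, the sketch below uses OpenCV's Haar cascade (a Viola-Jones detector, which the method also uses to initialize ASM) to find the face and then crops a rough eye band from the upper part of the face. The crop fractions are assumptions made for illustration, not values from the paper.

      import cv2

      face_cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      def extract_eye_region(gray):
          """Return a rough eye-band crop from a grayscale frame, or None if no face is found."""
          faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
          if len(faces) == 0:
              return None
          x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest detected face
          # The eyes lie roughly in the band between ~20% and ~50% of the face height
          # (illustrative fractions, not taken from the paper).
          top, bottom = y + int(0.20 * h), y + int(0.50 * h)
          return gray[top:bottom, x:x + w]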

        1. Shape model: An object is described by points, referred to as landmark points. The landmark points are determined manually in a set of training images. From these collections of landmark points, a point distribution model is constructed as follows. The landmark points (x_1, y_1, ..., x_n, y_n) are stacked in a shape vector

        X = (x_1, y_1, ..., x_n, y_n)^T        (1)

        Principal component analysis (PCA) is applied to the shape vectors X by computing the mean shape.

        X̄ = (1/s) Σ_{i=1..s} x_i        (2)

        The covariance matrix is

        S = (1/(s−1)) Σ_{i=1..s} (x_i − X̄)(x_i − X̄)^T        (3)

        The eigenvectors corresponding to the t largest eigenvalues λ_i are retained in a matrix Φ = (φ_1 φ_2 ... φ_t). A shape can now be approximated by

        X ≈ X̄ + Φb        (4)

        where b is a vector containing the model parameters, computed by

        b = Φ^T (X − X̄)        (5)

        When fitting the model to a set of points, the values of b_i are constrained to lie within the range ±3√λ_i for the purpose of generating a reasonable shape. The model shape is then fit to the new input shape by translation t, rotation θ, and scaling s, that is,

        y = T_{t,s,θ} (X̄ + Φb)        (6)

        where y is a vector containing the facial features. The eye region can be extracted using an improved version of ASM. In Fig. 4, the eye region in each image is detected under different illumination and head pose.
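        The point distribution model of equations (1)-(6) can be written compactly with NumPy. The sketch below computes the mean shape, covariance and principal modes from a stack of training shape vectors and reconstructs a shape with the parameters b clamped to ±3√λ_i; it illustrates the equations only and is not the improved ASM used here.

        import numpy as np

        def build_shape_model(shapes, t):
            """shapes: (s, 2n) array of stacked landmark vectors; keep t principal modes."""
            X_mean = shapes.mean(axis=0)                              # eq. (2)
            S = np.cov(shapes, rowvar=False, bias=False)              # eq. (3), 1/(s-1) normalization
            eigvals, eigvecs = np.linalg.eigh(S)                      # ascending eigenvalues
            order = np.argsort(eigvals)[::-1][:t]                     # t largest modes
            Phi, lam = eigvecs[:, order], eigvals[order]
            return X_mean, Phi, lam

        def fit_shape(X, X_mean, Phi, lam):
            """Project a shape onto the model and reconstruct it with |b_i| <= 3*sqrt(lambda_i)."""
            b = Phi.T @ (X - X_mean)                                  # eq. (5)
            b = np.clip(b, -3 * np.sqrt(lam), 3 * np.sqrt(lam))       # constraint on b
            return X_mean + Phi @ b                                   # eq. (4)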

    2. Eye features detection

      In the eye region, the iris center and eye corner are the two notable features, by which to estimate the gaze direction. Accordingly, the following two sections focus on the detection of iris center and eye corner.

        1. Iris Center Detection: Once the eye region has been extracted using the steps discussed above, the iris center is detected within it. First, the radius of the iris is estimated; then a combination of intensity energy and edge strength information is used to locate the iris center. To estimate the radius, the eye region is first smoothed, which removes noisy pixels while preserving the edges, and the iris is then located from the colour intensity. The edges of the eye region are obtained by Canny edge detection, and from this edge map and the center the radius of the iris is calculated.

        Finally, the intensity energy and edge strength are combined to locate the iris center. Denoting the intensity energy and the edge strength by E1 and E2, respectively,

        E1 = Σ_{(x,y)∈Sr} I(x, y)        (7)

        E2 = (g_x² + g_y²)^(1/2)        (8)

        where I is the eye region, Sr is a circular window with the same radius as the iris, and g_x and g_y are the horizontal and vertical gradients of the pixel, respectively. Fig. 4 illustrates the results of iris center detection, in which Fig. 4(a)-(c) are from the same video sequence. Fig. 4(a) is the first frame, where the iris center is accurately detected using the proposed algorithm. Assuming that the radius of the iris does not change, given the large distance between the user and the computer screen, the iris centers of the eye images shown in Fig. 4(b) and (c) are then detected in the same way.

        Fig. 4. (a) ASM results on the gray image. (b) Mapping ASM results to the original images and extracting the eye region.
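        A possible way to realize the iris-center search of equations (7) and (8) is sketched below: the intensity energy inside a circular window of the estimated iris radius is minimized while the gradient magnitude is maximized. The equal weighting of the two cues and the exhaustive pixel-wise search are assumptions of this sketch, not details given in the paper.

        import cv2
        import numpy as np

        def detect_iris_center(eye_gray, radius):
            """Score each pixel as an iris-center candidate using intensity and edge energy."""
            eye = cv2.GaussianBlur(eye_gray, (5, 5), 0).astype(np.float32)
            # E1: summed intensity inside a circular window of the iris radius (eq. 7);
            # the iris is dark, so this term is minimized.
            kernel = np.zeros((2 * radius + 1, 2 * radius + 1), np.float32)
            cv2.circle(kernel, (radius, radius), radius, 1.0, -1)
            e1 = cv2.filter2D(eye, -1, kernel)
            # E2: gradient magnitude (eq. 8); the iris boundary is a strong edge, so maximized.
            gx = cv2.Sobel(eye, cv2.CV_32F, 1, 0)
            gy = cv2.Sobel(eye, cv2.CV_32F, 0, 1)
            e2 = cv2.magnitude(gx, gy)
            # Combine the two cues; equal weighting is an assumption of this sketch.
            score = e2 / (e2.max() + 1e-6) - e1 / (e1.max() + 1e-6)
            y, x = np.unravel_index(np.argmax(score), score.shape)
            return (int(x), int(y))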

      2. Eye corner detection: Usually, the inner eye corner is viewed as a reference point for gaze estimation because it is insensitive to facial expression changes and eye status, and it is more salient than the outer eye corner. Therefore, it is better to detect the inner eye corner to guarantee the gaze direction accuracy. A template matching method is used for eye corner detection: from a number of candidate eye corner positions, template matching determines the exact eye corner corresponding to the gaze point.

      Fig. 5. First row: eye regions; second row: eye corner detection results.
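      The template-matching step for the inner eye corner can be sketched with OpenCV's matchTemplate; the corner template itself (for example, a small patch cropped around the corner during calibration) is assumed to be available.

      import cv2

      def detect_eye_corner(eye_gray, corner_template):
          """Locate the inner eye corner by normalized cross-correlation template matching."""
          result = cv2.matchTemplate(eye_gray, corner_template, cv2.TM_CCOEFF_NORMED)
          _, _, _, max_loc = cv2.minMaxLoc(result)              # best match (top-left corner)
          th, tw = corner_template.shape[:2]
          return (max_loc[0] + tw // 2, max_loc[1] + th // 2)   # center of matched patch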

    3. Eye vector and calibration

      By looking at different positions on the screen plane while keeping the head stable, the eye vector is defined by the difference between the iris center p_iris and the eye corner p_corner, that is, g = p_corner − p_iris. It provides the gaze information from which the cursor control obtains the screen coordinates through a mapping function. A calibration procedure presents the user with a set of target points at which to look, while the corresponding eye vectors are recorded. The relationship between the eye vector and the coordinates on the screen is then determined by the mapping function. A second-order polynomial function is a good compromise between the number of calibration points and the accuracy of the approximation for the gaze mapping. In the calibration stage, the second-order polynomial is therefore used and the user is required to look at the nine points shown in Fig. 6; the eye vectors are computed and the corresponding screen positions are known. The second-order polynomial then serves as the mapping function, which calculates the gaze point on the screen, i.e., the scene position, from the eye vector. The screen coordinates are represented as

      u_x = a_0 + a_1 g_x + a_2 g_y + a_3 g_x g_y + a_4 g_x² + a_5 g_y²        (9)

      u_y = b_0 + b_1 g_x + b_2 g_y + b_3 g_x g_y + b_4 g_x² + b_5 g_y²        (10)

      where (u_x, u_y) is the screen position, (g_x, g_y) is the eye vector, and (a_0, ..., a_5) and (b_0, ..., b_5) are the parameters of the mapping function, which can be solved using the least squares method, as in the sketch below.
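      The calibration of equations (9) and (10) reduces to two linear least-squares problems once the nine eye vectors and their known screen targets are collected. The following is a minimal sketch of that fit and of the resulting mapping.

      import numpy as np

      def fit_mapping(eye_vectors, screen_points):
          """Fit the second-order polynomial mapping of eqs. (9)-(10) by least squares.
          eye_vectors: (N, 2) array of (gx, gy); screen_points: (N, 2) array of (ux, uy); N >= 6."""
          gx, gy = eye_vectors[:, 0], eye_vectors[:, 1]
          A = np.column_stack([np.ones_like(gx), gx, gy, gx * gy, gx**2, gy**2])
          a, *_ = np.linalg.lstsq(A, screen_points[:, 0], rcond=None)   # (a0..a5)
          b, *_ = np.linalg.lstsq(A, screen_points[:, 1], rcond=None)   # (b0..b5)
          return a, b

      def map_gaze(g, a, b):
          """Map one eye vector g = (gx, gy) to a screen position (ux, uy)."""
          gx, gy = g
          phi = np.array([1.0, gx, gy, gx * gy, gx**2, gy**2])
          return float(phi @ a), float(phi @ b)

      With nine calibration targets the fit is over-determined (nine samples for six unknowns per axis), so the least-squares solution smooths measurement noise in the recorded eye vectors.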

    4. Head pose estimation

      This section discusses the facial feature tracking and the head pose estimation algorithm for video sequences. Usually, the human head is modeled as an ellipsoid or cylinder for simplicity, with the actual width and radii of the head as measures.

      To improve the estimation of the head pose, a Sinusoidal Head Model (SHM) is utilized to simulate the 3-D head shape, because the ellipsoid and the cylinder do not highlight facial features. The SHM can better approximate the shape of different faces with different facial features. 2-D facial features can be related to 3-D positions on the sinusoidal surface, and when the 2-D facial features are tracked in each video frame, a 2-D to 3-D conversion method can be utilized to obtain the head pose information. Pose from orthography and scaling with iterations (POSIT) is such a 2-D to 3-D conversion method, which obtains the pose (i.e., rotation and translation) of a 3-D model given a set of 2-D image points and 3-D object points. For a better estimation of the head pose, the AWPOSIT algorithm is proposed, because the classical POSIT algorithm estimates the head pose of the 3-D model from a set of 2-D points and 3-D object points while considering their contributions uniformly. The 2-D facial features in fact have different significance with respect to reconstructing the pose, owing to their differing reliability: if some features are not detected accurately, the overall accuracy of the estimated pose may decrease sharply with the classical POSIT algorithm. The proposed AWPOSIT obtains a more accurate pose estimation by using key feature information. The implementation details follow.

        Algorithm 2: AWPOSIT

        Input: P2D, P3D, w, f
        1: n = size(P2D, 1); c = ones(n, 1)
        2: u = P2Dx/f; v = P2Dy/f
        3: H = [P3D, c]; O = pinv(H)
        4: Loop
        5:   J = O·u; K = O·v
        6:   Lz = 1/((1/‖J‖)·(1/‖K‖))^(1/2)
        7:   M1 = J·Lz; M2 = K·Lz
        8:   R1 = M1(1:3); R2 = M2(1:3)
        9:   R3 = (R1/‖R1‖) × (R2/‖R2‖)
        10:  M3 = [R3; Lz]
        11:  c = H·M3/Lz
        12:  uu = u; vv = v
        13:  u = c·w·P2Dx; v = c·w·P2Dy
        14:  ex = u − uu; ey = v − vv
        15:  If ‖e‖ < ε then
        16:    M4 = [M1(4), M2(4), Lz, 1]^T; Exit loop
        17:  End if
        18: end loop
        Output: M
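        AWPOSIT itself is the authors' weighted extension of POSIT, so it is not reproduced in code here; as an illustration of the same 2-D/3-D pose-recovery step, the sketch below uses OpenCV's solvePnP to estimate R and T from the P2D/P3D correspondences under an assumed pinhole camera with focal length f. This is a named substitute, not the AWPOSIT algorithm.

        import cv2
        import numpy as np

        def estimate_head_pose(p2d, p3d, f, image_size):
            """Estimate head rotation R and translation T from 2-D/3-D feature correspondences.
            Uses cv2.solvePnP as a stand-in for AWPOSIT (illustration only)."""
            w, h = image_size
            camera_matrix = np.array([[f, 0, w / 2.0],
                                      [0, f, h / 2.0],
                                      [0, 0, 1.0]], dtype=np.float64)
            dist_coeffs = np.zeros(4)   # assume no lens distortion
            ok, rvec, tvec = cv2.solvePnP(p3d.astype(np.float64), p2d.astype(np.float64),
                                          camera_matrix, dist_coeffs,
                                          flags=cv2.SOLVEPNP_ITERATIVE)
            if not ok:
                return None, None
            R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
            return R, tvec.reshape(3)

        cv2.solvePnP returns a rotation vector, which cv2.Rodrigues converts into the 3×3 matrix R used in the displacement equations that follow.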

        Fig.6: Nine positions on the screen.

        The SHM assumes that the head is shaped as a 3-D sine wave, as shown in Fig. 7, and the face is approximated by the sinusoidal surface. Hence, the motion of the 3-D sine is a rigid motion that can be parameterized by the pose matrix M at frame Fi. The pose matrix includes the rotation matrix R and the translation matrix T at the i-th frame, i.e.,

        M = [R  T; 0^T  1] = [M1 M2 M3 M4]        (11)

        where R ∈ R^(3×3) is the rotation matrix, T ∈ R^(3×1) is the translation vector with T = (t_x^i, t_y^i, t_z^i)^T, and M1 to M4 are column vectors. The head pose at each frame is calculated with respect to the initial pose, and the rotation and translation matrices can be set to 0 for the initial frame (i.e., the standard front face). The ASM model is used on the initial frame to obtain the 2-D facial features. These facial features are related to the 3-D points on the sinusoidal model, whose movements are regarded as summarizing the head motion, and the perspective projection through the camera model is utilized to establish the relation between the 3-D points on the sinusoidal surface and their corresponding projections on the 2-D image plane.

        Fig.7. Perspective projection of 3-D point p onto the image plane.

        Fig. 7 shows the relation between the 3-D point p = (x, y, z)^T on the sinusoidal surface and its projection point q = (u, v)^T on the image plane, where u and v are calculated by

        u = f(x/z)        (12)

        v = f(y/z)        (13)

        with f being the focal length of the camera. Fig. 8 shows the locations of the facial features. The 2-D facial points are denoted as P2D and the 3-D points on the sinusoidal model are denoted as P3D. When the head pose algorithm is available, it estimates the head pose and computes the corresponding displacement (Δu_x, Δu_y) caused by the head movement. Suppose that the initial 3-D coordinate of the head is denoted as (x_0, y_0, z_0) and its projection on the image plane is (u_0, v_0). The coordinate of the head is (x_i, y_i, z_i) when head movement occurs. The corresponding parameters R and T are estimated by the AWPOSIT algorithm, so that

        (x_i, y_i, z_i)^T = R (x_0, y_0, z_0)^T + T        (14)

        Therefore, the displacement (Δu_x, Δu_y) can be calculated by

        Δu_x = f(x_i/z_i) − u_0        (15)

        Δu_y = f(y_i/z_i) − v_0        (16)

        The eye vector is extracted and the calibration mapping function is adopted to obtain the gaze point (u_x, u_y) on the screen. Finally, combining the gaze direction from the eye vector and the displacement from the head pose, the final gaze point (s_x, s_y) is obtained with

        s_x = u_x + Δu_x        (17)

        s_y = u_y + Δu_y        (18)

        The implementation procedure is summarized in the algorithm below, and a small numeric sketch of equations (14)-(18) follows it.

        Algorithm 3: Pseudocode of Eye Gaze Tracking System

        Initialization:
        • Extract 2-D facial features using ASM
        • Initialize the 3-D SHM for the head pose M
        • Get calibration mapping
        Tracking the gaze through all the frames:
        1: Input from video
        2: Extract the eye region
        3: Detect the iris center p_iris
        4: Detect the eye inner corner p_corner
        5: Eye vector is obtained: g = p_corner − p_iris
        6: Get gaze point (u_x, u_y) using mapping function
        7: Track the face features P2D
        8: Obtain the head pose M = AWPOSIT(P2D, P3D, w, f)
        9: Get the displacement (Δu_x, Δu_y)
        10: Obtain the final gaze point (s_x, s_y)
        11: end
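        Given R and T from the pose estimation, equations (14)-(18) are simple arithmetic. The sketch below computes the head-motion displacement and adds it to the mapped gaze point; the function names and argument order are illustrative.

        import numpy as np

        def head_displacement(R, T, head0, f, u0, v0):
            """Displacement of the projected head point caused by head motion (eqs. 14-16).
            head0: initial 3-D head coordinate (x0, y0, z0); (u0, v0): its initial projection."""
            x, y, z = R @ np.asarray(head0, dtype=float) + np.asarray(T, dtype=float)  # eq. (14)
            du_x = f * (x / z) - u0                                                    # eq. (15)
            du_y = f * (y / z) - v0                                                    # eq. (16)
            return du_x, du_y

        def final_gaze(ux, uy, du_x, du_y):
            """Combine the mapped gaze point with the head-pose displacement (eqs. 17-18)."""
            return ux + du_x, uy + du_y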

    5. Point Of Gaze Calculation Algorithm

    Point of gaze (PoG) refers to the point of interest of the user in the Test Area, i.e., where he or she is looking or gazing. The user's point of interest can be calculated by extracting some important eye features. First, it is necessary to find a reference point; it is helpful in PoG calculations because fewer calculations are then required to translate pupil movements in the eyes into cursor movements on the screen. The Centre of Eye can act as a suitable candidate for the reference point. The eye's movable region has already been computed during the calibration stage, so a simple averaging of x and y coordinates yields the Centre of Eye. It is obtained using Equations (19) and (20):

    COE_x = (TopRightCorner_x + TopLeftCorner_x) / 2        (19)

    COE_y = (TopRightCorner_y + TopLeftCorner_y) / 2        (20)

    where COE_x and COE_y denote the x and y coordinates of the center point of the eye's movable region, respectively. TopRightCorner, TopLeftCorner, BottomRightCorner and BottomLeftCorner construct a rectangular region which represents the eye's movable region.

      1. Calculating the Scaling Factor: This is the first step in point of gaze calculation. In this step the cursor movements and pupil movements are interrelated, i.e., it is determined how many pixels the cursor traverses for a single-pixel movement of the pupil. For this calculation the width and height of the eye's movable region are associated with the width and height of the screen. The screen width and height are constant, but the width and height of the eye's movable region are subject to change in different scenarios. They can be computed using Equations (21) and (22):

        W_eye = TopLeftCorner_x − TopRightCorner_x        (21)

        h_eye = TopRightCorner_y − BottomRightCorner_y        (22)

        where W_eye and h_eye represent the width and height of the eye's movable region, respectively. The scaling factors for the x and y coordinates are then computed with the help of Equations (23) and (24):

        R_x = W_screen / W_eye        (23)

        R_y = h_screen / h_eye        (24)

        where W_screen and h_screen denote the width and height of the Test Area, and R_x and R_y represent the scaling factors for the x and y coordinates, respectively.

      2. Computing the PoG: This is the final step of the PoG calculation as well as of the Gaze Pointer algorithm. This stage realizes the significance of the reference point: it translates the pupil movements in the eyes into cursor movements in the Test Area. Assuming that the reference point in the eye corresponds to the center point of the Test Area, pupil movements can simply be translated into cursor movements using Equations (25) and (26):

        PoG_x = (W_screen / 2) + R_x · r_x        (25)

        PoG_y = (h_screen / 2) + R_y · r_y        (26)

        where PoG_x and PoG_y represent the x and y coordinates of the Point of Gaze, respectively, r_x denotes the pupil distance in the x direction from the reference point, and r_y denotes the pupil distance in the y direction from the reference point. They can be computed using Equations (27) and (28):

        r_x = COI_x − COE_x        (27)

        r_y = COI_y − COE_y        (28)

        where COI represents the pupil location. A sketch implementing Equations (19)-(28) is given below.

    TABLE I. EYE MOUSE CONTROL OPERATIONS
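    The point-of-gaze computation of equations (19)-(28) can be collected into a single function. The sketch below assumes the movable-region corners and the pupil location (COI) are given in image coordinates, as reconstructed above; the absolute values simply guard against the corner ordering convention.

    def point_of_gaze(pupil, top_left, top_right, bottom_right, screen_w, screen_h):
        """Compute the point of gaze from the pupil position and the eye's movable region
        (eqs. 19-28); corner points are (x, y) tuples found during calibration."""
        # Centre of the eye's movable region, used as the reference point (eqs. 19-20).
        coe_x = (top_right[0] + top_left[0]) / 2.0
        coe_y = (top_right[1] + top_left[1]) / 2.0
        # Width/height of the movable region and the per-axis scaling factors (eqs. 21-24).
        w_eye = abs(top_left[0] - top_right[0])
        h_eye = abs(top_right[1] - bottom_right[1])
        r_x = screen_w / float(w_eye)
        r_y = screen_h / float(h_eye)
        # Pupil displacement from the reference point (eqs. 27-28).
        d_x = pupil[0] - coe_x
        d_y = pupil[1] - coe_y
        # Map to screen coordinates, taking the screen centre as reference (eqs. 25-26).
        pog_x = screen_w / 2.0 + r_x * d_x
        pog_y = screen_h / 2.0 + r_y * d_y
        return pog_x, pog_y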

  4. EVALUATION

    An evaluation of eye feature detection, head pose estimation, gaze estimation and point of gaze calculation is presented in this section.

    A. Eye Center Detection

    The detection of the eye center is a difficult task in eye feature detection, and its accuracy directly affects the gaze estimation.

    All three phases of eye gaze tracking have been carried out on a video sequence, and the corresponding gaze point of each frame has been determined. The results, representing the gaze point of a single frame and its corresponding changes along the x and y axes, together with the average accuracy plot, are given in the following figures. First, the gaze point estimation of a single video frame is shown in the figure below. Using this gaze point, the corresponding point on the monitor screen is detected and the cursor is moved by moving the eyes.

    1. Gaze Point

      Fig.8. Eye gaze point corresponding to a single frame

    2. Head pose estimation Plot: X-direction

      Fig.9. Head pose estimation in X-direction

    3. Head pose estimation Plot: Y-direction

      Fig.10. Head pose estimation: Y-direction

    4. Average accuracy Plot

      Fig.11. Average accuracy

    5. Mouse control operations

    TABLE II. MOUSE OPERATION RESULTS
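    The paper does not name the library used to drive the operating-system cursor; as one possible realization, the sketch below uses pyautogui (an assumption, not part of the original system) to move the cursor to the estimated point of gaze and to issue the click actions listed in the tables.

    import pyautogui

    def drive_cursor(pog_x, pog_y, action=None):
        """Move the system cursor to the estimated point of gaze and optionally click.
        pyautogui is used here for illustration; the paper does not specify a library."""
        screen_w, screen_h = pyautogui.size()
        # Clamp to the screen so noisy gaze estimates cannot move the cursor off-screen.
        x = max(0, min(int(pog_x), screen_w - 1))
        y = max(0, min(int(pog_y), screen_h - 1))
        pyautogui.moveTo(x, y)
        if action == "left":
            pyautogui.click(button="left")
        elif action == "right":
            pyautogui.click(button="right")
        elif action == "double":
            pyautogui.doubleClick()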

  5. CONCLUSION

    A model for gaze tracking has been constructed using a web camera in a desktop environment. Its primary novelty is the use of intensity energy and edge strength to locate the iris center, together with an eye corner detector, to form the eye vector. Further, an algorithm has been proposed to improve the estimation of the head pose. The combination of the eye vector formed by the iris center and the eye corner with the head movement information achieves improved accuracy and robustness of the gaze estimation. After finding the gaze point corresponding to each eye movement, the cursor is moved to different positions on the screen by the point of gaze algorithm. Mouse selection operations, namely right click, left click and double click, can also be performed in accordance with the eye gaze. The experimental results have shown the efficacy of the proposed method.

  6. ACKNOWLEDGMENT

I am thankful to my project guide Asst. Prof Mrs Jayasree T C and Head of the Department Mrs. Nishidha for their support, remarks and suggestions, which were essential to carry out the survey. I am also grateful to all the staff members of Electronics & Communication Engineering Department of KMCT College of Engineering, Calicut for their assistance and support in improving the review paper significantly. I also thank my family and friends for all the support given by them.

REFERENCES

  [1] Yiu-ming Cheung and Qinmu Peng, "Eye Gaze Tracking With a Web Camera in a Desktop Environment," IEEE Transactions on Human-Machine Systems, vol. 45, no. 4, August 2015.

  [2] Muhammad Usman Ghani, Sarah Chaudhry, Maryam Sohail, and M. Nafees Geelani, "GazePointer: A Real Time Mouse Pointer Control Implementation Based on Eye Gaze Tracking," Journal of Multimedia Processing and Technologies, vol. 5, no. 2, June 2014.

  [3] Chaudhari Sonali A. and Madur Neha, "Virtual Mouse Using Eye Tracking Technique," International Journal of Emerging Research in Management & Technology, ISSN: 2278-9359, vol. 4, issue 2, February 2015.

  [4] Prajakta Tangade, Shital Musale, Gauri Pasalkar, Umale M. D., and Awate S. S., "A Review Paper on Mouse Pointer Movement Using Eye Tracking System and Voice Recognition," International Journal of Emerging Engineering Research and Technology, vol. 2, issue 8, November 2014.

  [5] N. Ramanauskas, "Calibration of Video-Oculographical Eye-Tracking System," Electronics and Electrical Engineering, Medicine Technology, Department of Electronics, Šiauliai University.

  [6] A. T. Duchowski, "A breadth-first survey of eye-tracking applications," Behavior Research Methods, Instruments, & Computers, vol. 34, no. 4, pp. 455-470, 2002.

  [7] E. D. Guestrin and E. Eizenman, "General theory of remote gaze estimation using the pupil center and corneal reflections," IEEE Trans. Biomed. Eng., vol. 53, no. 6, pp. 1124-1133, Jun. 2006.

  [8] R. Valenti, N. Sebe, and T. Gevers, "Combining head pose and eye location information for gaze estimation," IEEE Trans. Image Process., vol. 21, no. 2, pp. 802-815, Feb. 2012.

  [9] Ramesh R and Rishikesh M (2015), "Eye Ball Movement to Control Computer Screen," J Biosens Bioelectron, 6: 181. doi:10.4172/2155-6210.1000181.
