Advanced Mouse Cursor Control And Speech Recognition Module

DOI: 10.17577/IJERTV1IS9267




K. Prasad1, PG Scholar, VBIT Engg. College

B. Veeresh Kumar2, M.Tech, Asst. Professor, VBIT Engg. College

Dr. B. Brahma Reddy3, M.Tech, Ph.D, Head of the Department, VBIT Engg. College

Abstract— We constructed an interface system that allows a paralyzed user to interact with a computer with almost full functional capability. A real-time tracking algorithm is implemented based on adaptive skin detection and motion analysis.

Mouse clicks are activated by the user's eye blinks, detected through a sensor. The keyboard function is implemented with a voice-recognition kit.

Keywords— Embedded ARM7 Processor, Mouse pointer control, Voice Recognition.

I. INTRODUCTION

Mouse clicks are activated by the user's eye blinks, detected through a sensor, so eye blinking serves as the selection mechanism. The keyboard function is implemented with a voice-recognition kit.

We constructed an interface system that allows a paralyzed user to interact with a computer with almost full functional capability. That is, the system operates as a mouse initially, but the user has the ability to toggle in and out of a keyboard mode allowing the entry of text. This is achieved by using control from a single eye, tracking the position of the pupil for direction and using blinking as an input. The block diagram is shown in Figure 1.

[Figure 1 block diagram: a web cam, an eye-blink sensor, and a voice-recognition kit feed a signal-conditioning section and an ARM7TDMI-S microprocessor, which communicates with the personal computer through a transceiver pair.]

Figure 1. Module-wise block diagram

The cursor is moved by finger movements. MATLAB sends the movement direction to the controller, which passes the information to an encoder; the encoded information is then transmitted over the wireless link (TX). The receiver decodes the received information, and the controller sends it to the PC through an RS232 cable, where the corresponding operation is performed. The same path is used for selecting documents by eye blink; a receiver-side sketch follows.
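As a concrete illustration of this receive path, the sketch below shows a minimal PC-side listener. The paper does not specify the byte protocol, COM port, or step size, so the single-byte direction codes ('U', 'D', 'L', 'R'), the blink code ('B'), and all settings here are assumptions; pyserial and pyautogui are used for the serial link and cursor control.

```python
# Hypothetical PC-side receiver: reads one-byte direction codes from the
# RS232 link and moves the cursor accordingly. The byte values, port name,
# and step size are assumptions; the paper does not give the protocol.
import serial     # pyserial
import pyautogui  # cursor control

STEP = 10  # cursor displacement per command, in pixels (assumed)

# Map assumed single-byte codes to cursor offsets.
MOVES = {b'U': (0, -STEP), b'D': (0, STEP),
         b'L': (-STEP, 0), b'R': (STEP, 0)}

def run(port='COM3', baud=9600):
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            code = link.read(1)            # one command byte per event
            if code in MOVES:
                dx, dy = MOVES[code]
                pyautogui.moveRel(dx, dy)  # move the cursor
            elif code == b'B':             # assumed blink code
                pyautogui.click()          # blink acts as a left click

if __name__ == '__main__':
    run()
```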

II. OVERVIEW OF THE SYSTEM

Easy Input is a keyboard and mouse input device for paralyzed users. This study describes the motivation and the design considerations of an economical head-operated computer mouse. In addition, it focuses on the invention of a head-operated computer mouse that employs tilt sensors placed in a headset to determine head position and to function as a simple head-operated computer mouse. One tilt sensor detects lateral head motion to drive the left or right displacement of the mouse.

  1. Eye blink sensors

    The eye-blink sensor is shown in Figure 2.

    Figure 2. Eye-blink sensor

    This switch is activated when the user blinks their eye. It allows individuals to operate electronic equipment like communication aids and environmental controls hands-free. Each blink of the eye is detected by an infrared sensor, which is mounted on dummy spectacle frames.

    The eye-blink switch can be set up to operate on either eye and may be worn over normal glasses. The sensitivity of the switch can be adjusted to the user's needs, and involuntary blinks are ignored. The sensor is connected to a hand-held control unit with a rechargeable battery.

  2. IR SENSOR

The IR LED is a 900 nm GaAlAs infrared light-emitting diode that shines invisible IR light on the user's eye, and a 900 nm light detector senses the reflected IR light. We decided to use blinking because we wanted the device to be functional for non-vocal or ventilated users (blowing or sucking was another option). Our first idea, and the one we implemented, was to use an LED/photodiode pair to reflect light off the eye. We found that Optek Inc. makes a round receiver consisting of an LED and a phototransistor mounted on the same unit. This detected a strong increase in signal upon blinking. We were worried about distinguishing normal from intentional blinks, but we found that for most users intentional blinks produced a much stronger signal and were always much longer than the ~300 ms duration of a normal blink, as the sketch below illustrates.
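A minimal sketch of this blink discrimination, assuming a boolean "eye closed" signal sampled at a fixed rate; the 1.5× margin over the ~300 ms normal blink duration is an illustrative choice, not a value from the paper.

```python
# Minimal sketch of the blink classifier implied above: a blink counts as
# intentional only if the eye stays closed longer than a normal blink
# (~300 ms). Sampling rate and safety margin are illustrative assumptions.
NORMAL_BLINK_S = 0.30   # typical involuntary blink duration (from the text)
MARGIN = 1.5            # assumed safety factor over a normal blink

def classify_blinks(samples, dt):
    """samples: sequence of booleans (True = eye closed, i.e. strong IR
    reflection); dt: sampling interval in seconds. Returns a list of
    (start_time, duration, intentional) tuples."""
    blinks, start = [], None
    for i, closed in enumerate(samples):
        if closed and start is None:
            start = i                      # blink begins
        elif not closed and start is not None:
            duration = (i - start) * dt    # blink ends; measure its length
            blinks.append((start * dt, duration,
                           duration > NORMAL_BLINK_S * MARGIN))
            start = None
    return blinks

# Example: 50 Hz sampling, one short (involuntary) and one long blink.
sig = [False]*10 + [True]*10 + [False]*10 + [True]*30 + [False]*5
print(classify_blinks(sig, dt=0.02))
# -> the 0.2 s blink is ignored, the 0.6 s blink is flagged intentional
```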

C. Speech recognition

The speech-recognition kit handles voice analysis, the recognition process, and system-control functions. A 40-word isolated-word voice-recognition system can be built from an external microphone, a keyboard, 64K of SRAM, and a few other components. Here we use the HM2007 IC for voice recognition.

  1. Features of the voice chip

    • Single chip voice recognition CMOS LSI with 5V power supply

    • Speaker-dependent isolated-word recognition system

    • External 64K SRAM can be connected directly

    • A maximum of 40 words can be recognized by one chip

    • A maximum word length of 1.92 s can be recognized

    • Multiple chip recognition is possible

    • Microphone can be connected directly

    • Two control modes are supported:

      • Manual mode

      • CPU mode

    • Response time: less than 300 ms

  2. Functional modes

    Manual mode

    • A keypad, SRAM, and a few other components can be connected to the HM2007 to build a simple recognition system

    • The SRAM used is an 8K-byte memory

    Power-on mode

    • When power is applied, the HM2007 starts its initialization process

    • If the WAIT pin is low, the HM2007 performs a memory check to verify that the 8K-byte SRAM is intact

    • If the WAIT pin is high, the HM2007 skips the memory check

    • After initialization, the HM2007 moves into recognition mode

    Recognition mode

    • RDY is set low and the HM2007 is ready to accept voice input for recognition

    • When voice input is detected, RDY returns high and the HM2007 begins its recognition process

  3. Classification of speech recognition

    1. Speaker Dependent

      Speaker-dependent systems are trained by the individual who will be using the system. These systems are capable of achieving a high command count and better than 95% accuracy for word recognition. The drawback to this approach is that the system responds accurately only to the individual who trained it. This is the most common approach employed in software for personal computers.

    2. Speaker Independent

      A speaker-independent system is trained to respond to a word regardless of who speaks it. The system must therefore respond to a large variety of speech patterns, inflections, and enunciations of the target word. The command-word count is usually lower than in a speaker-dependent system, but high accuracy can still be maintained within processing limits. Industrial requirements more often call for speaker-independent voice systems, such as the AT&T system used in the telephone network.

  3. Module view

The speech-recognition module is shown in Figure 3.

Figure 3. Speech recognition module overview
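To illustrate how a recognition result might drive the keyboard/LCD function, the sketch below maps HM2007 word-slot numbers to words. The vocabulary depends entirely on how the kit was trained, so the slot assignments here are hypothetical; the error codes follow common HM2007 kit documentation but should be treated as assumptions and checked against the datasheet.

```python
# Hypothetical dispatch table for recognition results. The HM2007 reports a
# trained word-slot number; which slot holds which word depends on how the
# user trained the kit, so this vocabulary is an assumption.
VOCAB = {1: 'open', 2: 'close', 3: 'yes', 4: 'no', 5: 'enter'}

# Error codes as commonly documented for HM2007-based kits (assumed here):
# 55 = word too long, 66 = word too short, 77 = no match.
ERRORS = {55: 'word too long', 66: 'word too short', 77: 'no match'}

def handle_result(code):
    """Translate a recognition result code into a word or an error string."""
    if code in VOCAB:
        return VOCAB[code]          # forward the word to the keyboard/LCD
    return ERRORS.get(code, 'unknown code')

print(handle_result(1))   # -> 'open'
print(handle_result(77))  # -> 'no match'
```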

  1. Hand Tracking

    We design the hand-tracking algorithm to be simple, efficient and fast so that it can be applied to real-time applications. The algorithm is based on the detection of motion and skin color. Motion is indicated by the change in the pixel values.

    [Figure 4 flowchart: greyscale images → frame differencing → thresholding → dilation (10-15 iterations) → erosion (applied twice).]

    Figure 4. Flowchart for obtaining a region representing the location of the hand based on motion

    The area detected may be larger than expected, owing to movement of the arm region. Taking this into consideration, the following algorithm is used to find the center of the hand region: the entire image for each frame is scanned, the boundary of the motion-detected region is noted, and the midpoint of that bounding region is taken as the hand position, as the sketch below illustrates.
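A minimal sketch of this centre-finding step, assuming the binary motion mask produced by the thresholding stage; numpy is used for brevity.

```python
# Sketch of the centre-finding step described above: scan the binary motion
# mask for the extent of the white (moving) region and take the midpoint of
# its bounding box as the hand position.
import numpy as np

def hand_center(mask):
    """mask: 2-D uint8 array, 255 where motion was detected, 0 elsewhere.
    Returns (x, y) of the bounding-box centre, or None if no motion."""
    ys, xs = np.nonzero(mask)          # coordinates of all white pixels
    if xs.size == 0:
        return None                    # no motion in this frame
    x0, x1 = xs.min(), xs.max()        # left/right boundary of the region
    y0, y1 = ys.min(), ys.max()        # top/bottom boundary
    return ((x0 + x1) // 2, (y0 + y1) // 2)
```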

    The flowchart for the wireless mouse is shown in Figure 5.

    [Figure 5 flowchart: start → initialize and configure port settings → wait for signals from the IR sensors; if data arrives from the eye-blink sensor, the folder is opened and text is typed, otherwise the cursor is moved in VB.]

    Figure 5. Flowchart for wireless mouse

  2. Finger Tracking

We design the finger-tracking algorithm to be simple, efficient, and fast so that it can be applied to real-time applications. The algorithm is based on the detection of motion and skin color. Motion is indicated by the change in pixel values. The frames are first converted into greyscale images. Then, a frame-differencing algorithm is used to analyze the region where movement has taken place. Equation (1), D_t(x, y) = |I_t(x, y) − I_{t−1}(x, y)|, gives an image with non-zero pixel values in the regions where motion has taken place. A thresholding algorithm based on equation (2), T_t(x, y) = 255 if D_t(x, y) > 30 and 0 otherwise, gives a binary image with white pixels indicating the region of motion. The threshold of 30 was chosen by monitoring the frame difference under different lighting conditions; this value yields enough white pixels to track the location of the hand. It is assumed that the only moving object performing the gesture is the hand. Since the video is captured from a regular webcam, random camera noise causes large variations in certain pixel values in successive frames; these variations result in spurious white pixels in the thresholded images. A sketch of this pipeline follows.
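The sketch below shows this pipeline with OpenCV, following equations (1) and (2) and the morphology steps from the Figure 4 flowchart; the camera index, the kernel size, and the dilation count (chosen within the stated 10-15 range) are assumptions.

```python
# Minimal OpenCV sketch of equations (1) and (2): greyscale conversion,
# frame differencing, thresholding at 30, then dilation and erosion as in
# the Figure 4 flowchart. Camera index and kernel size are assumptions.
import cv2
import numpy as np

THRESH = 30                         # threshold from the text
KERNEL = np.ones((3, 3), np.uint8)  # assumed structuring element

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                    # equation (1)
    _, mask = cv2.threshold(diff, THRESH, 255,
                            cv2.THRESH_BINARY)        # equation (2)
    mask = cv2.dilate(mask, KERNEL, iterations=12)    # fill the hand region
    mask = cv2.erode(mask, KERNEL, iterations=2)      # trim noise pixels
    cv2.imshow('motion mask', mask)
    prev = gray
    if cv2.waitKey(1) & 0xFF == 27:                   # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```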

Figure 6. Hand tracking

Finding the motion history in the video frames plays an important role in finding the direction of movement. The motion gradient of the difference frames over a predefined interval indicates the direction of motion, and the trajectory of motion is drawn on the basis of the motion history image (MHI). The history of the difference images forms a silhouette, and the gradient of this silhouette is updated with every frame. The amount of time for which previous images stay in the silhouette is 0.3 s. The frame-capture rate of webcams normally varies from 10 to 20 frames per second, so an interval of approximately 0.3 s captures between 3 and 6 frames. Any pixel older than this time-stamp duration is set to 0. The motion gradient is found by applying a 3×3 Sobel operator to the MHI silhouette [5]. This procedure gives a mask in which each non-zero value indicates motion in that particular direction. Using these motion-gradient values, a global motion vector is computed, as sketched below.
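OpenCV's motion-template functions implement this MHI machinery directly; in recent versions they live in the opencv-contrib package under cv2.motempl. A hedged sketch, with the gradient delta bounds taken from the standard OpenCV motion-template sample rather than from the paper:

```python
# Hedged sketch of the motion-history step using OpenCV's motion templates
# (requires opencv-contrib-python; the functions live under cv2.motempl).
import cv2
import numpy as np

DURATION = 0.3          # seconds a silhouette persists (from the text)

def global_direction(motion_mask, mhi, timestamp):
    """motion_mask: binary uint8 silhouette for the current frame.
    mhi: float32 motion-history image (same size), updated in place.
    timestamp: current time in seconds.
    Returns the global motion direction in degrees."""
    cv2.motempl.updateMotionHistory(motion_mask, mhi, timestamp, DURATION)
    # Gradient of the MHI silhouette; the max/min time deltas (0.25/0.05,
    # assumed from the OpenCV sample) reject flat regions of the MHI.
    mask, orientation = cv2.motempl.calcMotionGradient(
        mhi, 0.25, 0.05, apertureSize=3)   # 3x3 Sobel, as in the text
    return cv2.motempl.calcGlobalOrientation(
        orientation, mask, mhi, timestamp, DURATION)

# Typical per-frame use:
#   mhi = np.zeros(mask_shape, np.float32)            # allocated once
#   angle = global_direction(motion_mask, mhi, time.time())
```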

Figure 7. Finger tracking

III. RESULTS

In this paper we observed the following outputs. A VB window was created from MATLAB, and the cursor was controlled on the PC without a mouse. Using the eye-blink sensor, folders on the PC were opened. Using the speech-recognition module, the recognized words were displayed on a separate LCD display.

IV. CONCLUSION

In this work, we tested the effectiveness of pointing and scrolling using an IR sensor on wireless device interfaces. The results indicate that pointing and scrolling can be done effectively using finger movements, and Fitts' law is found to fit the experimental data for both tasks. Folders were opened using the eye-blink sensor, and recognized words were displayed on a separate LCD board. The results also showed that wrist tilting is relatively easier around the thumb than along it. We think that tilting provides an alternative mode of interaction that needs only one hand, rather than the two hands required when using a stylus. We noted that users prefer tilting with their non-dominant hand, which leaves the dominant hand free for handling the environment. The results introduced in this work can help in the design of device interfaces, especially when only one hand is available for interaction. The speech-recognition module is currently implemented only for isolated words; support for continuous and connected words is at the implementation stage.
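For reference, Fitts' law in its common Shannon formulation predicts movement time as MT = a + b·log2(D/W + 1), where D is the distance to the target and W its width; the paper does not state which variant it fitted, so this form is an assumption. The sketch below fits a and b by least squares on hypothetical data.

```python
# Illustrative Fitts' law fit, MT = a + b*log2(D/W + 1) (Shannon form).
# The distances, widths, and times below are made-up example values, not
# measurements from the paper.
import numpy as np

D = np.array([100, 200, 400, 800])        # target distances (px), hypothetical
W = np.array([ 40,  40,  40,  40])        # target widths (px), hypothetical
MT = np.array([0.45, 0.58, 0.71, 0.86])   # movement times (s), hypothetical

ID = np.log2(D / W + 1)                   # index of difficulty (bits)
A = np.column_stack([np.ones_like(ID), ID])
(a, b), *_ = np.linalg.lstsq(A, MT, rcond=None)
print(f"MT = {a:.3f} + {b:.3f}*ID  (throughput = {1/b:.1f} bits/s)")
```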

REFERENCES

  1. Jonathan Alon, Vassilis Athitsos, Quan Yuan, and Stan Sclaroff. A Unified Framework for Gesture Recognition and Spatiotemporal Gesture Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 31, No. 9, September 2009, pp. 1685-1699.

  2. Mark Nixon, Alberto Aguado, "Feature Extraction & Image Processing", Elsevier Ltd., Second Edition, 2008, pp. 104-109.

  3. Farhad Dadgostar and Abdolhossein Sarrafzadeh, "An adaptive real-time skin detector based on Hue thresholding: A comparison on two motion tracking methods", Pattern Recognition Letters, Volume 27, Issue 12, September 2006, pp. 1342-1352.

  4. Feng-Sheng Chen, Chih-Ming Fu, Chung- Lin Huang, "Hand gesture recognition using a real-time tracking method and hidden Markov models", Image and Vision Computing, Volume 21, Issue 8, 1 August 2003, pp. 745-758.

  5. Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing, Second Edition, Prentice Hall, 2002, pp. 136-137.

  6. Gary Bradski, Adrian Kaehler, Learning OpenCV, O'Reilly Media, Inc., First Edition, September 2008, pp. 341-345.
