- Authors : Manisha Chahal, B. Anil
- Paper ID : IJERTV3IS051778
- Volume & Issue : Volume 03, Issue 05 (May 2014)
- Published (First Online): 03-06-2014
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Gesture Recognition Techniques And Applications
Manisha Devi 1
M.Tech, Department of EC Engineering, Lingaya's University, Faridabad, India
Anil Kumar 2
Asst. Prof., Department of ECE/EE Engineering, Lingaya's GVKS IMT, Faridabad, India
Abstract: Gestures are the soul of visual interpretation and can accomplish human-computer interaction (HCI). We use various gestures to express our intentions in day-to-day life. These gestures can be defined as a sequence of states in a measurement or configuration space, and a gesture is said to be recognized successfully when that sequence of states is measured effectively. The concept of Gesture Recognition (GR) technology is used to identify and recognize these different gestures. In this paper we survey the techniques that are used most often. The paper summarizes the concept behind a Gesture Recognition System (GRS) and explains the various methods that make gesture recognition possible. We explore the models that are vigorously used for gesture recognition, such as the Hidden Markov Model (HMM), Principal Component Analysis (PCA) for feature extraction, and the Artificial Neural Network (ANN). The implementation uses the eigenspace determined by processing the eigenvalues and eigenvectors of the image set. These network models achieve a recognition rate (training as well as generalization) of up to 100% over a number of test subjects. We also detail recommended algorithms based on past research, the classification of gestures, the exploitation of gestures in experimental systems, the future scope of gesture recognition systems, and their importance and real-time applications. The experimental results show that the ANN method gives a higher recognition rate than PCA and HMM.
Keywords: HCI, HMM, ANN, PCA, Gesture Recognition
-
INTRODUCTION
In today's tech-savvy world, communication between the user and the computer has become an active research area. Gestures are used effortlessly in daily human interactions, while human-computer interaction still requires understanding and analyzing signals to interpret the desired command, which makes the interaction complicated and unnatural. A gesture is the physical expression of mental concepts. The notion of gesture embraces all kinds of instances where an individual engages in movements whose communicative intent is paramount, manifest, and openly acknowledged. In recent years the GR concept has been developing vigorously. Generally, a gesture is a form of non-verbal communication in which visible bodily actions communicate particular messages, either in place of speech or in parallel with words. It is a stochastic process [1]. The concept of Gesture Recognition technology is used to identify and recognize these different gestures [6, 7]. GR technology is a category of perceptual interface: it combines kinesics and computing to create a digital environment that is seemingly driven by instinctive human motions. The identification and recognition of posture, gait, and human behaviors are also subjects of gesture recognition techniques. Recently, the design of special input devices that make human-computer interaction more convenient has received great attention in this field [2]. Combining traditional devices such as the mouse and keyboard with newly designed interaction devices such as gesture and face recognition, sensors, and tracking devices provides flexibility in text editing, robot control interfaces, and video games [4]. A gesture can also be defined as a physical movement of the hands, arms, or body that delivers an expressive message to convey information or interact with the environment [6]. Some functional roles of gestures are: semiotic (to communicate meaningful information), ergotic (to manipulate the environment), and epistemic (to discover the environment through tactile experience) [1]. GR technology offers a new medium for human-computer interaction (HCI) that can be both efficient and highly intuitive, and many researchers have documented methods for recognizing gestures from instrumented gloves or vision-based approaches at high levels of accuracy.
-
RELATED WORK
Gesture research and recognition systems date back to the 1960s; however, gesture recognition systems are still in their infancy and are able to perform only basic functions. The first gestures applied to computer interaction date back to the PhD work of Ivan Sutherland [11], who demonstrated Sketchpad, an early form of stroke-based gestures using a light pen to manipulate graphical objects on a tablet display. Many methods using visual analysis have been proposed for hand gesture recognition. C. W. Ng presented a vision-based system able to recognize 14 gestures in real time via gesticulation within a graphical interface [12]. Hasanuzzaman et al. presented a real-time hand gesture system using skin-color segmentation and multiple-feature-based template-matching techniques [13]. Xia Liu and Kikuo Fujimura proposed a method using depth data for hand gesture recognition [14]. G. R. S. Murthy and R. S. Jadon used supervised feed-forward neural-net-based training and the back-propagation algorithm for classifying hand gestures into different categories, achieving up to 89% correct results [11]. Nielsen et al. proposed a real-time vision system which uses a fast segmentation process to obtain the moving hand from the whole image; the hand posture is then recognized by the GR process. They used the Hausdorff distance approach for robust shape comparison. Their system recognizes 26 hand postures and achieved a 90% average recognition rate [15]. Kulkarni recognized static postures of ASL using a neural network algorithm, with features extracted using the histogram technique and the Hough algorithm. A feed-forward neural network with three layers is used for gesture classification, where for each of the 26 characters in American Sign Language 3 samples are used for testing and 5 samples for training; the system achieved a 92.78% recognition rate [16]. Stergiopoulou suggested a new Self-Growing, Self-Organized Neural Gas network for detecting hand shape morphology, with a Gaussian distribution model for recognition [17]. Another important method was suggested by Francis K. H. Quek, Meide Zhao, and Xindong Wu, who used the AQ family of algorithms and R-MINI algorithms for detection of hand gestures [18]. Waldherr et al. proposed a vision-based interface that instructs a mobile robot using pose and motion gestures with an adaptive dual-color tracking algorithm [19]. Chen et al. proposed a system with four modules: real-time hand tracking and extraction, feature extraction, Hidden Markov Model training, and gesture recognition; Chen recognized continuous gestures against a stationary background [20].
-
GESTURE CLASSIFICATION
A gesture classification system was first discussed by Wexelblat [33]. The system has five major categories:
- Symbolic/modalizing: Symbolic gestures are hand postures used to represent an object or concept, and are always directly related to a particular meaning: for instance, the thumbs-up posture means that everything is okay.
- Pantomimic: Pantomimic gestures involve using the hands to represent a task or interaction with a physical object. Users making this type of gesture mimic an action they would perform if they were actually interacting in the real world: for example, making a swinging gesture with one's hands to indicate hitting a baseball with a bat.
- Iconic: Iconic gestures are gestures that represent an object; the hands become the object or objects discussed. These gestures are usually performed to act out a particular event in which the representative object is the focal point, such as someone pretending to drive a car.
- Deictic/Lakoff: Deictic gestures, or pointing gestures, are used to indicate a particular object. The other type of gesture included in this category is the Lakoff gesture [4], associated with verbal utterances that specify a particular metaphor such as happiness or anger. A gesture usually accompanies these utterances to show the directionality of the metaphor.
- Beat/Butterworths/Self-adjusters: The last category contains three types of gestures: beats, Butterworths, and self-adjusters. Beats are gestures used for emphasis, especially with speech; beat gestures can help speakers emphasize particular words or concepts and also help direct the listener's attention. Butterworth gestures [19] are similar to beats except they are primarily used to mark unimportant events; the classic example of a Butterworth gesture is hand-waving as a placeholder for speaking when one is still thinking about how to say something. Finally, self-adjusters are gestures people make when they fidget: for example, tapping a finger or moving a foot rapidly.
Gesture classification plays an important role in gesture recognition, as the input gestures are used according to the application's requirements. The operation of the GRS proceeds in four basic steps (a code sketch of the pipeline follows the list):
- Input image: the initial step; the user can use a visual, glove-based, or instrumented approach to provide input to the system.
- Background subtraction: with the help of a subtraction method, the background is subtracted from the input image to remove background noise.
- Image processing and data extraction (a gradient technique and PCA for extraction).
- Decision tree generation/parsing: initial training of the system requires the generation of a decision tree; subsequent use of the system only requires parsing the decision tree to classify the image.

The system uses two phases, as in Fig. 1, where a neural network is used as the classification technique. The GRS has two phases, a training phase and a testing phase: first the system is trained with some parameters, and then the test output is compared with the database of stored results.
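The four steps above can be read as one processing pipeline. Below is a minimal, hedged sketch in Python with NumPy and OpenCV; the function names, the fixed threshold, and the scikit-learn-style decision-tree object are illustrative assumptions, not the implementation described in this paper.

```python
import cv2
import numpy as np

def acquire_frame(camera):
    # Step 1: input image (vision-based approach; a glove-based or
    # instrumented approach would replace this stage).
    ok, frame = camera.read()          # camera: cv2.VideoCapture
    return frame if ok else None

def subtract_background(frame, background):
    # Step 2: background subtraction - absolute difference against a
    # stored background frame, then thresholding to a binary mask.
    diff = cv2.absdiff(frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    return mask

def extract_features(mask):
    # Step 3: image processing and data extraction. Here raw pixel
    # intensities are flattened to a vector; the paper mentions a
    # gradient technique and PCA at this stage.
    return mask.astype(np.float32).ravel() / 255.0

def classify(features, decision_tree):
    # Step 4: decision-tree parsing. `decision_tree` is assumed to be a
    # trained classifier with a scikit-learn-style predict() method,
    # generated during the initial training phase.
    return decision_tree.predict(features.reshape(1, -1))[0]
```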
Figure 1. Architecture of a Gesture Recognition System (vision input and image preprocessing, feature extraction, then system training and system testing against a system database).
-
-
GESTURE TYPES
In computer interfaces, two types of gestures are distinguished: online gestures, which pertain to direct manipulation such as scaling and rotating, and offline gestures, which are usually processed after the interaction is finished. The main classification of gestures is:
- Static Gesture: It can be described in terms of hand shapes or hand postures. A posture is the combination of hand position, orientation, and flexion observed at some instant. Static gestures are not time-varying signals, e.g. facial information like a smile or an angry face [4]. Freeman and Roth use an orientation histogram as a feature vector for interpolation and gesture classification; their system recognizes ten gestures [5]. For obtaining the image from a noisy background the user has to perform some operations on the data set: as shown in Fig. 2, the initial step is preprocessing, after which feature extraction and classification methods are applied for gesture recognition.
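To make the orientation-histogram feature concrete, here is a minimal NumPy sketch in the spirit of Freeman and Roth's descriptor; the bin count and magnitude weighting are our assumptions rather than the original system's exact parameters.

```python
import numpy as np

def orientation_histogram(gray, bins=36):
    # Image gradients along rows (y) and columns (x).
    gy, gx = np.gradient(gray.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)          # orientation in [-pi, pi]
    # Histogram of orientations, weighted by gradient magnitude, so
    # strong edges dominate the descriptor.
    hist, _ = np.histogram(angle, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    norm = hist.sum()
    # Normalized histogram: a translation-invariant feature vector.
    return hist / norm if norm > 0 else hist
```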
- Database Description: The gesture is produced as a static gesture. The system works as offline recognition, i.e., we give an input test image from the database and the system tells us which gesture image was given as input. The system is purely data dependent. Each gesture is performed at various scales and translations, and with rotation in the plane parallel to the image plane.
- Preprocessing: Preprocessing is a necessary task in a hand gesture recognition system; it is applied to images before features can be extracted from the hand images. Preprocessing consists of image acquisition, segmentation, and morphological filtering. Different algorithms are used for segmentation, and the grayscale image is converted into a binary image consisting of hand and background. Morphological filtering techniques are then used to remove noise from the images so that a smooth contour is obtained, using four operations: dilation, erosion, opening, and closing (a code sketch follows).
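A hedged OpenCV sketch of this preprocessing chain (grayscale conversion, Otsu segmentation into hand and background, then morphological filtering); the 5x5 structuring element and Otsu thresholding are illustrative choices:

```python
import cv2
import numpy as np

def preprocess(image):
    # Image acquisition is assumed done; start from a BGR frame.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Segmentation: grayscale -> binary image (hand vs. background).
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    # The four basic operations. Erosion followed by dilation is
    # opening (shown both ways for completeness).
    eroded = cv2.erode(binary, kernel)
    opened = cv2.dilate(eroded, kernel)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # same result
    # Closing fills small holes, giving a smooth contour.
    smooth = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
    return smooth
```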
- Feature extraction: Feature extraction is very important for providing input to a classifier. First we find the edges of the segmented, morphologically filtered image; then a contour-tracking algorithm is applied to track the contour (a sketch follows). A gesture recognition system's efficiency depends on the features extracted, and extracted features are considered good when they meet certain criteria: 1) they are easily computable, and 2) they are not replicated across gestures.
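A minimal sketch of this step: trace the contour of the filtered binary image and derive easily computable features from it. The specific features (area, perimeter, Hu moments) are our illustrative choices, not the paper's prescribed set.

```python
import cv2
import numpy as np

def contour_features(binary):
    # Edge map of the filtered image (findContours itself traces the
    # boundary, so this is shown only to mirror the edge step).
    edges = cv2.Canny(binary, 100, 200)
    # Contour tracking: keep the largest external contour (the hand).
    # Note: OpenCV >= 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    # Easily computable, discriminative features.
    area = cv2.contourArea(hand)
    perimeter = cv2.arcLength(hand, True)
    hu = cv2.HuMoments(cv2.moments(hand)).ravel()
    return np.concatenate(([area, perimeter], hu))
```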
- Classifier: After extraction, the resulting features are fed to a classifier. Different classifiers can be used to classify hand gesture images and to achieve a high accuracy and recognition rate.
Figure 2. Static Gesture Recognition
-
-
Dynamic Gesture: It can be described in terms of hand movements. A gesture is a sequence of postures connected by motions over a short time span; dynamic gestures require tracking to follow the body's movement. In a video signal the individual frames define the postures and the video sequence defines the gesture, e.g., taking the recognized temporal sequence to interact with the machine [5]. Adrian Bulzacki, Lian Zhao, Ling Guan, and Kaamran Raahemifar proposed a dynamic gesture recognition system using the Hidden Markov Model [6]. In Fig. 3 the HMM is used for dynamic gesture recognition, where a vision-based approach provides the input: after background subtraction and tracking to obtain the exact temporal sequence for HCI, feature extraction is performed, and finally the HMM, as the classifier, recognizes the gesture.
Figure 3. Dynamic Gesture Recognition System
-
-
GESTURE RECOGNITION MODELS
The recognition of gestures involves several concepts from image processing, pattern recognition, machine learning, and motion detection and analysis. Different tools and techniques are used for the image processing and pattern recognition stages.
-
Hidden Markov Model
The HMM is a doubly stochastic model and is appropriate for coping with the stochastic properties of gesture recognition: instead of using geometric features, gestures are converted into sequential symbols. Before going into depth, we must be aware of the term Markov process: a Markov process is a type of random process where the probability of the future state is determined by the most recent state. An HMM is a collection of finite states connected by transitions, together with a number of random functions. HMMs are employed to represent the gestures, and their parameters are learned from the training data. Each state is characterized by two sets of probabilities: a transition probability, and either a discrete output probability distribution or a continuous output probability density function which, given the state, defines the conditional probability of emitting each output symbol from a finite alphabet or a continuous random vector. The HMM system topology is represented by one initial state, a set of output symbols, and a set of state transitions. With an HMM the current and future states are unknown and must somehow be predicted, so the HMM uses the only values it can know: old values and indirect variables. The HMM is therefore a learning algorithm, and it is mostly used in sign language recognition and speech recognition. Nam and Wohn proposed a system for recognizing space-time hand movement patterns with an HMM [7]. Keskin et al. presented an HCI interface based on real-time hand tracking and 3D gesture recognition using the Hidden Markov Model [8]. The HMM-based approach uses shape and motion information for recognition of the gesture, and the HMM is the most common approach to dynamic GRS. The basic framework for our recognition engine is the following:
- The image sequence goes through several preprocessing steps such as low-pass filtering to reduce noise, background subtraction to extract the moving objects, and binarization of the moving objects.
- Blobs are generated that roughly represent the poses of the human. The features are the counts of object (black) pixels. These features are vector quantized, so that the image sequence becomes a sequence of vector-quantization labels (VQ labels), which are then processed by a discrete HMM (a sketch of this step follows).
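A hedged sketch of this front end: the per-frame feature is the count of object (black) pixels in each cell of a coarse grid, and a nearest-centroid codebook (learned offline, e.g. by k-means) turns each feature vector into a discrete VQ label. The grid and codebook sizes are assumptions.

```python
import numpy as np

def frame_to_feature(binary_frame, grid=(4, 4)):
    # Count object (black) pixels in each cell of a coarse grid; the
    # resulting vector roughly encodes the pose (blob layout).
    h, w = binary_frame.shape
    gh, gw = grid
    feat = np.zeros(gh * gw)
    for i in range(gh):
        for j in range(gw):
            cell = binary_frame[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
            feat[i*gw + j] = np.count_nonzero(cell == 0)  # black pixels
    return feat

def vector_quantize(features, codebook):
    # Map each frame's feature vector (rows of `features`) to the index
    # of the nearest codebook centroid: a sequence of discrete VQ labels.
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :],
                           axis=2)
    return dists.argmin(axis=1)       # shape: (num_frames,)
```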
HMMs can be used to solve three basic problems: the evaluation problem, the decoding problem, and the learning problem. The HMM-based gesture recognition approach can be described as follows:
- Define meaningful gestures: To communicate with gestures, meaningful gestures must first be specified. For example, a certain vocabulary must be specified for a sign language, and certain editor symbols must be given in advance if the gestures are to be used for editing text files.
- Describe each gesture in terms of an HMM: A multi-dimensional HMM is employed to model each gesture. A gesture is described by a set of N distinct hidden states and M distinct observable symbols in each of r dimensions. An HMM is characterized by a transition matrix A and r discrete output distribution matrices B_i, i = 1, ..., r. Note that only the structures of A and B are determined in this step; the values of the elements of A and B are estimated in the training process.
- Collect training data: In the HMM-based approach, gestures are specified through the training data. It is essential that the training data be represented in a concise and invariant form, so raw input data are preprocessed before they are used to train the HMMs. Because of the independence assumption, each dimensional signal can be preprocessed separately. In the prototype system discussed later, the preprocessing involves the short-time Fourier transform and vector quantization techniques.
- Train the HMMs with the training data: Training is one of the most important procedures in an HMM-based approach. The model parameters are adjusted in such a way that they maximize the likelihood P(O|λ) of the given training data. No analytic solution to this problem has been found so far; however, the Baum-Welch algorithm can be used to iteratively estimate the model parameters and reach a local maximum.
- Evaluate gestures with the trained models: The trained models can be used to classify incoming gestures; the Viterbi algorithm can be used to classify isolated gestures (a sketch of this evaluation step follows).
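To make the evaluation step concrete, here is a minimal NumPy sketch for isolated gestures: each gesture has a trained discrete HMM (pi, A, B), the scaled forward algorithm computes the log-likelihood of an observed VQ-label sequence, and the highest-scoring model wins (the Viterbi algorithm would additionally recover the most likely state path). The parameter names are ours.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """log P(O | model) for a discrete HMM via the scaled forward algorithm.
    obs: sequence of VQ labels; pi: (N,) initial state probabilities;
    A: (N, N) transition matrix; B: (N, M) output distribution matrix."""
    alpha = pi * B[:, obs[0]]
    log_prob = 0.0
    for t in range(1, len(obs)):
        scale = alpha.sum()            # rescale to avoid underflow
        log_prob += np.log(scale)
        alpha = (alpha / scale) @ A * B[:, obs[t]]
    return log_prob + np.log(alpha.sum())

def classify_gesture(obs, models):
    # models: dict mapping gesture name -> (pi, A, B), each trained
    # beforehand, e.g. with Baum-Welch.
    return max(models, key=lambda g: forward_log_likelihood(obs, *models[g]))
```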
HMMs are only piecewise stationary processes, and with gestures all terms of the HMM are transient. For an accurate recognition rate, the Partly Hidden Markov Model (PHMM), a second-order model, was introduced, because in some cases HMMs are not suitable for gesture recognition: if the Markov condition is violated, the HMM fails. A further idea, the Coupled Hidden Markov Model (CHMM), is also used.
-
Principal Component Analysis
Principal Component Analysis (PCA) is a dimensionality-reduction technique based on extracting the desired number of principal components of multidimensional data. It is a statistical technique with applications in different fields (face recognition, image compression, hand gesture recognition). Before describing the method itself, we introduce the mathematical concepts used in PCA [9]. Mathematical background:
- Standard Deviation: In statistics, we generally use samples of a population to make measurements. Standard deviation and variance operate on one dimension only. The standard deviation is the average distance from the mean of the data set to a point: compute the squares of the distances from each data point to the mean of the set, add them all up, divide by n-1, and take the positive square root.
- Covariance: Covariance measures how much dimensions vary from the mean with respect to each other; the covariance matrix is formed from the covariances between all pairs of dimensions. Covariance is always measured between 2 dimensions, and calculating the covariance between a dimension and itself gives the variance. Suppose we have a data set (X, Y, Z) with 3 dimensions: we can measure the covariance between the X and Y dimensions, the Y and Z dimensions, and the X and Z dimensions, while calculating the covariance between X and X, Y and Y, or Z and Z gives the variances of the X, Y, and Z dimensions respectively.
- Eigenvectors: An eigenvector is a vector that is only scaled by a linear transformation; it is a property of a matrix. When the matrix acts on it, only the vector's magnitude is changed, not its direction. Eigenvectors can only be found for square matrices.
- Eigenvalues: Each eigenvector has an associated eigenvalue, the factor by which the matrix scales that vector. In PCA, the eigenvalues are found from the covariance matrix together with its eigenvectors.
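These concepts are easy to verify numerically. A small NumPy sketch (with illustrative random data) builds the covariance matrix of a 3-dimensional data set (X, Y, Z) and checks that the matrix only scales its eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 3))       # 100 samples of (X, Y, Z)

# Covariance matrix: diagonal entries are the variances of X, Y, Z;
# off-diagonal entries are the pairwise covariances.
cov = np.cov(data, rowvar=False)       # shape (3, 3)

# Eigen-decomposition (cov is square and symmetric).
eigvals, eigvecs = np.linalg.eigh(cov)

# The matrix scales each eigenvector by its eigenvalue; the direction
# is unchanged.
v, lam = eigvecs[:, 0], eigvals[0]
assert np.allclose(cov @ v, lam * v)
```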
PCA was first applied in the computer vision community to face recognition by Kirby and Sirovich, and later extended by Turk and Pentland [30]. Birk et al. and Martin independently developed the first two systems using PCA to recognize hand postures and gestures in a vision-based system. Birk's system was able to recognize 25 postures from the International Hand Alphabet, while Martin's system was used to interact in a virtual workspace. Birk's system first performs PCA on sets of training images to generate a posture classifier that is then used to classify postures in real time. Each set of training images can be considered a multivariate data set: each image consists of N pixels and represents a point in N-dimensional space. Birk's recognition system works well, but there is little indication that PCA compresses the data set significantly beyond a naive approach [31][32].
To perform PCA, several steps are undertaken (a code sketch follows the list):
- Create the smallest data set for the best resolution, and subtract the mean from each of the data dimensions.
- Calculate the covariance matrix of the data set.
- Calculate the eigenvectors and eigenvalues of the covariance matrix. This gives us the principal orientations of the data.
- Select the best components and form the feature vector.
- Multiply the transposed feature vectors by the transposed mean-adjusted data.
- Compare the resulting coefficients using the Euclidean distance method.
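A minimal NumPy sketch of the six steps, where rows of X are flattened training images; the number k of retained components and the nearest-neighbor decision are illustrative assumptions. (For large images one would work with the smaller Gram matrix, as in the eigenface literature, rather than the full pixel covariance.)

```python
import numpy as np

def pca_train(X, k):
    # X: (num_images, num_pixels). Step 1: subtract the mean.
    mean = X.mean(axis=0)
    adjusted = X - mean
    # Step 2: covariance matrix of the data set.
    cov = np.cov(adjusted, rowvar=False)
    # Step 3: eigenvectors/eigenvalues give the principal orientations.
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Step 4: keep the k best components (largest eigenvalues).
    order = np.argsort(eigvals)[::-1][:k]
    feature_vectors = eigvecs[:, order]          # (num_pixels, k)
    # Step 5: project the adjusted data onto the eigenspace.
    coeffs = adjusted @ feature_vectors          # (num_images, k)
    return mean, feature_vectors, coeffs

def pca_classify(image, mean, feature_vectors, coeffs, labels):
    # Step 6: compare coefficients by Euclidean distance to each
    # training image and return the nearest gesture label.
    c = (image - mean) @ feature_vectors
    return labels[np.argmin(np.linalg.norm(coeffs - c, axis=1))]
```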
-
Artificial Neural Network
An artificial neural network (ANN), also called a neural network (NN), is a computational or mathematical model inspired by the structural aspects of biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. Modern neural networks are non-linear statistical data-modeling tools, usually used to model complex relationships between inputs and outputs or to find patterns in data. The ability of neural networks to discover nonlinear relationships in input data makes them ideal for modeling nonlinear dynamic systems such as the stock market [10]. Artificial neural networks are parallel computing algorithms which have been used for pattern recognition: they perform a mapping from an n-dimensional input space to an m-dimensional output space, and they learn this mapping from sample inputs through a learning process, as explained below in Table 1 with applications.
Neural networks generally have two basic structures: the feedforward structure (the network has no loops) and the feedback structure (loops occur in the network because of feedback connections). A neural network uses the node as its fundamental unit; the nodes are connected by links, and each link has an associated weight that can act as a storage mechanism. The first component of a node is the input function, which computes the weighted sum of its input values; the second is the activation function, which transforms the weighted sum into a final output value (a sketch follows). Many different activation functions can be used, the step, sign, and sigmoid functions being quite common because they are easy to use.
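The two components of a node can be written in a few lines; a minimal sketch with the weighted-sum input function and the sigmoid and step activations named above (the weights and inputs are illustrative):

```python
import numpy as np

def input_function(weights, inputs, bias):
    # First component: weighted sum of the node's input values.
    return np.dot(weights, inputs) + bias

def sigmoid(s):
    # Second component: smooth activation mapping the sum to (0, 1).
    return 1.0 / (1.0 + np.exp(-s))

def step(s):
    # Hard-threshold activation.
    return 1.0 if s >= 0 else 0.0

# Final output value of a single node:
weights, inputs, bias = np.array([0.5, -0.3]), np.array([1.0, 2.0]), 0.1
output = sigmoid(input_function(weights, inputs, bias))
```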
This method has a training phase and a testing phase; using MATLAB 7.01, the nntool is mostly used for gesture recognition and for calculating the effective percentage result. In nntool we train the network using an input layer, a hidden layer, and an output layer. The steps to execute the task are (a sketch follows the list):
- First, make the data set (the learning environment).
- Decide on an algorithm to train the network, depending on the network architecture.
- Give MATLAB access to the data sets by providing the particular path of the folder on your system, and make a GUI for image interaction and morphological operations.
- Train the network with the input data, applying biases and weights.
- Provide an algorithm that sets a threshold value deciding whether the test image is recognized or not.
- Test the network and obtain the different plots with some parameters.
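The steps above use MATLAB's nntool; as a rough, hedged equivalent, the following NumPy sketch walks the same workflow: build a data set, train a small feedforward network by backpropagation with weights and biases, then test with a threshold that decides whether a test image is accepted. The two-layer architecture, learning rate, and 0.5 threshold are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1. Data set: rows are feature vectors of gesture images, y holds 0/1 labels.
X = rng.normal(size=(60, 16))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)

# 2-4. One hidden layer trained by backpropagation (gradient descent).
W1, b1 = rng.normal(scale=0.1, size=(16, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.1, size=(8, 1)), np.zeros(1)
sigmoid = lambda s: 1 / (1 + np.exp(-s))

for epoch in range(500):
    h = sigmoid(X @ W1 + b1)                 # hidden layer
    out = sigmoid(h @ W2 + b2)               # output layer
    # Backpropagate the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

# 5-6. A threshold decides whether a test image is accepted; then evaluate.
def predict(x, threshold=0.5):
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2) >= threshold

accuracy = (predict(X) == y.astype(bool)).mean()
```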
Table 1. ANN Method
-
-
APPLICATIONS
-
Sign language recognition: Sign language is used for interpreting and explaining a certain subject during a conversation; for example, ASL recognition using boundary histograms, dynamic programming matching, and an MLP neural network [21], or Arabic sign language (0-9) recognition using two different types of neural networks, partially and fully recurrent networks [22].
-
Telepresence: Telepresence is the area of technical intelligence which aims to provide physical operation support by mapping the operator's arm to a robotic arm to carry out the necessary task; for instance, the real-time ROBOGEST system [29] constructed at the University of California, San Diego presents a natural way of controlling an outdoor autonomous vehicle through a language of hand gestures. The prospects of telepresence include space and undersea missions, medicine, manufacturing, and the maintenance of nuclear power reactors.
-
Robotics: This application is of particular interest in the field of robotics for identifying the context of a statement, e.g., an accelerometer-based gesture recognition system where the robot moves as commanded, or stroke rehabilitation [23].
-
Advanced Technology: Gestures are used to control devices and to interact with video games, making them more interactive and immersive. In [24] a set of hand gestures is used to control TV activities such as increasing and decreasing the volume, muting the sound, and switching channels, replacing the traditional keyboard-and-mouse setup by using the Hidden Markov Model.
-
Virtual Environments: VEs are the most popular application of gesture recognition for communication-media systems [25], e.g., 3D pointing gesture recognition for HCI in real-time binocular views; the proposed system is accurate and independent of environmental changes and user characteristics [26].
-
Affective computing: Recognizing and identifying emotional expression requires biometric techniques from GR technology.
-
Graphic Editor Control: A graphic editor control system requires the hand gesture to be tracked and located as a processing operation [27]; dynamic gestures are used for editing graphics and drawing. The shapes for drawing are: triangle, move, rectangle, circle, delete, swap, and close [28].
-
-
CONCLUSION
In this paper we proposed an original automatic gesture recognition architecture via a novel classification scheme incorporating Markov chains, together with Principal Component Analysis used for feature extraction. Using all the gesture instances for both the training and the testing phases of the system, in an attempt to validate the system's learning capabilities, resulted in high recognition percentages. Algorithm selection for gesture recognition depends on the needs of the application. The ANN, a self-learning method, is the most vigorously used technique and gives 96% accuracy. Application areas for the gesture system were also presented.
-
FUTURE ENHANCEMENT
-
Our proposed framework is not limited to gesture recognition. Future work will focus on exploring regression models on manifolds and other virtual reality applications. We can modify and enhance our system by including more gestures for completing different operations on the system, and the proposed methods can be used for controlling a novel gesture recognition system that leverages wireless signals (e.g., Wi-Fi) to enable whole-home sensing and recognition of human gestures.
REFERENCES
[1] "Gesture Recognition System," International Journal of Computer Applications (0975-8887), Vol. 1, No. 5, 2010.
[2] IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 37, No. 3, May 2007.
[3] Sanjay Meena, "A Study on Hand Gesture Recognition Techniques," Master's Thesis, Department of Electronic and Communication Engineering, National Institute of Technology, India, 2011.
[4] P. Vijaya, N. R. V. Praneeth and Sudheer V., "Hand and Gesture Recognition System for Robotic Application," International Journal of Computer and Information Systems, Vol. 2, No. 1, ISSN: 0976-1349, July-Dec 2010.
[5] Farid Parvini, Dennis McLeod, Cyrus Shahabi, Bahareh Navai, Baharak Zali, Shahram Ghandeharizadeh, "An Approach to Glove-Based Gesture Recognition," Computer Science Department, University of Southern California, Los Angeles, California 90089-0781.
[6] Mu-Chun Su, Woung-Fei Jean, and Hsiao-Te Chang, "A Static Hand Gesture Recognition System Using a Composite Neural Network," 0-7803-3645-3/96, IEEE, 1996.
[7] Y. Nam and K. Wohn, "Recognition of Space-Time Hand Gestures Using Hidden Markov Model," ACM Symposium on Virtual Reality Software and Technology, pp. 51-58, Hong Kong, 1996.
[8] C. Keskin, A. Erkan, L. Akarun, "Real Time Hand Tracking and 3D Gesture Recognition for Interactive Interfaces Using HMM," Proceedings of the International Conference on Artificial Neural Networks, 2003.
[9] Lindsay I. Smith, "A Tutorial on Principal Components Analysis," and Syed Rizvi, P. Jonathon Phillips and Hyeonjoon Moon, "The FERET Verification Testing Protocol for Face Recognition Algorithms."
[10] Tin Hninn Maung, "Real-Time Hand Tracking and Gesture Recognition System Using Neural Networks," World Academy of Science, Engineering and Technology 50, 2009.
[11] G. R. S. Murthy, R. S. Jadon, "Hand Gesture Recognition Using Neural Networks," 978-1-4244-4791-6/10, IEEE, 2010.
[12] C. W. Ng and S. Ranganath, "Real-Time Gesture Recognition System and Application," Image and Vision Computing, Vol. 20, No. 13-14, pp. 993-1007, 2002.
[13] Md. Hasanuzzaman, V. Ampornaramveth, Tao Zhang, M. A. Bhuiyan, Y. Shirai and H. Ueno, "Real-Time Vision-Based Gesture Recognition for Human-Robot Interaction," Proceedings of the IEEE International Conference on Robotics and Biomimetics, Shenyang, China, 2004.
[14] Chris Joslin, Ayman El-Sawah, Qing Chen, "Dynamic Gesture Recognition," Proceedings of the Instrumentation and Measurement Technology Conference, pp. 1706-1710, 2005.
[15] Elena Sanchez-Nielsen, Luis Anton-Canalis and Mario Hernandez-Tejera, "Hand Gesture Recognition for Human-Machine Interaction," Journal of WSCG, Vol. 12, No. 1-3, Plzen, Czech Republic, 2003.
[16] V. S. Kulkarni, S. D. Lokhande, "Appearance Based Recognition of American Sign Language Using Gesture Segmentation," International Journal on Computer Science and Engineering (IJCSE), Vol. 2(3), pp. 560-565, 2010.
[17] Mahmoud E., Ayoub A., and Bernd M., "Hidden Markov Model-Based Isolated and Meaningful Hand Gesture Recognition," World Academy of Science, Engineering and Technology 41, 2008.
[18] Meide Zhao, Francis K. H. Quek, and Xindong Wu, "RIEVL: Recursive Induction Learning in Hand Gesture Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 11, November 1998.
[19] S. Waldherr, R. Romero, and S. Thrun, "A Gesture-Based Interface for Human-Robot Interaction," Vol. 9, No. 2, pp. 151-173, 2000.
[20] F. S. Chen, C. M. Fu and C. L. Huang, "Hand Gesture Recognition Using a Real-Time Tracking Method and Hidden Markov Models," Image and Vision Computing, pp. 745-758, 2003.
[21] Kouichi M., Hitomi T., "Gesture Recognition Using Recurrent Neural Networks," ACM Conference on Human Factors in Computing Systems: Reaching Through Technology (CHI '91), pp. 237-242, 1991. DOI: 10.1145/108844.108900.
[22] Manar Maraqa, Raed Abu-Zaiter, "Recognition of Arabic Sign Language (ArSL) Using Recurrent Neural Networks," IEEE First International Conference on the Applications of Digital Information and Web Technologies (ICADIWT), pp. 47-48, 2008. DOI: 10.1109/ICADIWT.2008.4664396.
[23] A. Malima, E. Ozgur, M. Cetin, "A Fast Algorithm for Vision-Based Hand Gesture Recognition for Robot Control," IEEE 14th Conference on Signal Processing and Communications Applications, pp. 1-4, 2006 (1659822).
[24] W. T. Freeman, C. D. Weissman, "Television Control by Hand Gestures," IEEE International Workshop on Automatic Face and Gesture Recognition, 1995.
[25] Joseph J. LaViola Jr., "A Survey of Hand Posture and Gesture Recognition Techniques and Technology," Master's Thesis, Science and Technology Center for Computer Graphics and Scientific Visualization, USA, 1999.
[26] Y. Guan, M. Zheng, "Real-Time 3D Pointing Gesture Recognition for Natural HCI," IEEE Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA), 4593304, 2008.
[27] S. Mitra and T. Acharya, "Gesture Recognition: A Survey," IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, Vol. 37(3), pp. 311-324, 2007.
[28] B. Min, H. Yoon, J. Soh, Y. Yang, and T. Ejima, "Hand Gesture Recognition Using HMM," IEEE International Conference on Computational Cybernetics and Simulation, Vol. 5, ICSMC-637364, 1997.
[29] A. Smola and B. Schölkopf, "Sparse Greedy Matrix Approximation for Machine Learning," in Proc. 17th Int. Conf. Mach. Learn., San Francisco, CA, pp. 911-918, 2000.
[30] M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
[31] Henrik Birk, Thomas B. Moeslund, and Claus B. Madsen, "Real-Time Recognition of Hand Alphabet Gestures Using Principal Component Analysis," Proceedings of the 10th Scandinavian Conference on Image Analysis, 1997.
[32] Jerome Martin and James L. Crowley, "An Appearance-Based Approach to Gesture Recognition," Proceedings of the Ninth International Conference on Image Analysis and Processing, pp. 340-347, 1997.
[33] Alan Wexelblat, "A Feature-Based Approach to Continuous-Gesture Analysis," Master's Thesis, Massachusetts Institute of Technology, 1994.