- Open Access
- Total Downloads : 603
- Authors : Umesh Kumar, Poonam Dabas
- Paper ID : IJERTV3IS061465
- Volume & Issue : Volume 03, Issue 06 (June 2014)
- Published (First Online): 30-06-2014
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Recognizing Numeric, Alphabets and Special Characters by Pattern Recognition using Neural Network
Mrs. Poonam Dabas
Asst. Professor, Department of Computer Engineering
Kurukshetra University

Umesh Kumar
Student, M.Tech, Department of Computer Engineering
U.I.E.T, Kurukshetra University
Abstract — Neural networks are found to be an effective tool for the process of pattern recognition, and different combinations of neural networks are used for this task. Earlier works are based only on the back propagation network, whereas the proposed work is based on a combination of the Hopfield network and the back propagation network. The success rate for recognizing known and unknown patterns is relatively very high with this combination of networks when compared to other techniques. As the fault tolerance of the Hopfield network is higher than that of back propagation, the percentage accuracy of the output of the new combined network should be higher than that of the back propagation network alone. The objective of a neural network is to transform inputs into meaningful outputs. Signals are transmitted by means of connection links. The approach is inspired by the biological nervous system, i.e. the way the brain processes information.
Keywords — Pattern Recognition; Hopfield Network; Back Propagation Network; Training Set; Fault Tolerance.
I. INTRODUCTION
A biological neural network consists of groups of chemically connected or functionally associated neurons. One neuron may be connected to many other neurons, and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites. Natural neurons receive signals through synapses located on the dendrites or on the membrane of the neuron. When the signals received are strong enough to exceed a certain threshold, the neuron is activated and emits a signal through its axon. This signal may be sent to another synapse and may activate other neurons. In most cases a NN is an adaptive system that changes its structure during a learning phase. NNs are used for modeling complex relationships between inputs and outputs or to find patterns in data.
Figure 1 Biological neuron
Basically, a biological neuron receives inputs from other sources, combines them in some way, performs a generally nonlinear operation on the result, and then outputs the final result. Figure 1 shows a simplified biological neuron and the relationship of its four parts. In the human brain a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand called an axon, which splits into thousands of branches. At the end of each branch a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons. When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon.
Different networks
Single layer feed forward network
The simplest kind of NN is a Single Layer Perceptron (SLP) network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated in each node. In this single-layer feed forward NN, the network's inputs are directly connected to the output layer perceptrons. Input nodes or units are connected to a node or multiple nodes in the next layer. A node in the next layer takes a weighted sum of all its inputs:

$\text{Summed input} = \sum_{i} w_i x_i$
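As a small illustration (ours, not the paper's), the weighted-sum rule above can be written out directly; the input values, weights, and threshold below are hypothetical.

```python
import numpy as np

def slp_output(x, W, threshold=0.0):
    """Single-layer perceptron: each output node takes a weighted
    sum of all inputs and applies a hard threshold."""
    net = W @ x                      # summed input for every output node
    return (net >= threshold).astype(int)

# Hypothetical example: 3 inputs, 2 output nodes
x = np.array([1.0, 0.0, 1.0])        # input pattern
W = np.array([[0.5, -0.2, 0.3],      # weights of output node 1
              [-0.4, 0.9, 0.1]])     # weights of output node 2
print(slp_output(x, W))              # -> [1 0]
```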
Multilayer network
A multilayer feed forward NN is an interconnection of perceptrons in which data and calculations flow in a single direction, from the input data to the outputs. A multilayer feed forward NN consists of a layer of input units, one or more layers of hidden units, and one output layer of units. A NN that has no hidden units is called a perceptron. The structure includes a layer of processing units, i.e. the hidden units, in addition to the output units. These networks are called feed forward because the output from one layer of neurons feeds forward into the next layer of neurons.
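To make the layered structure concrete, here is a minimal forward pass through one hidden layer, written as an illustrative sketch; the layer sizes, random weights, and sigmoid activation are our assumptions rather than anything specified in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W_hidden, W_output):
    """Forward pass: input layer -> hidden layer -> output layer.
    Data flows in a single direction; there is no feedback between layers."""
    h = sigmoid(W_hidden @ x)        # hidden unit activations
    y = sigmoid(W_output @ h)        # output unit activations
    return y

# Hypothetical sizes: 4 inputs, 3 hidden units, 2 outputs
rng = np.random.default_rng(0)
x = rng.random(4)
W_hidden = rng.normal(size=(3, 4))
W_output = rng.normal(size=(2, 3))
print(mlp_forward(x, W_hidden, W_output))
```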
Feed Forward neural network
A feed forward NN is a biologically inspired classification algorithm. It consists of a large number of simple neuron-like processing units organized in layers, and each unit in a layer is connected with all the units in the previous layer. In this network the information moves only in one direction, i.e. forward: from the input nodes, data goes through the hidden nodes and on to the output nodes. There are no cycles or loops in the network. These connections are not all equal; each connection may have a different strength or weight. The weights on these connections encode the knowledge of a network. Usually the units in a NN are called nodes. Data enters at the inputs and passes through the network layer by layer until it arrives at the outputs. During normal operation there is no feedback between layers, which is why they are called feed forward neural networks. Feed forward networks can be built from different kinds of units, e.g. binary McCulloch-Pitts neurons, the simplest example being the perceptron. Continuous neurons, frequently with sigmoidal activation, are used in the context of back propagation of error.
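Because the section mentions continuous neurons with sigmoidal activation in the context of back propagation of error, the following sketch shows a single gradient-descent weight update for one sigmoid output unit; the training example, learning rate, and squared-error loss are hypothetical choices for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical training example: 3 inputs, one sigmoid output unit
x = np.array([0.2, 0.7, 0.1])
target = 1.0
w = np.array([0.1, -0.3, 0.8])
lr = 0.5                                  # learning rate

y = sigmoid(w @ x)                        # forward pass
error = target - y
delta = error * y * (1.0 - y)             # uses the sigmoid derivative y(1 - y)
w += lr * delta * x                       # back propagation weight update
print(f"output={y:.3f}, updated weights={w}")
```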
Recurrent network
A Recurrent Neural Network (RNN) is a class of NN where connections between units form a directed cycle. This creates an internal state of the network that allows it to exhibit dynamic temporal behaviour. Unlike feed forward NNs, RNNs can use their internal memory to process arbitrary sequences of inputs. The human brain is an RNN, i.e. a network of neurons with feedback connections. It can learn many sequence-processing tasks, algorithms or programs that are not learnable by traditional machine learning methods. An RNN has cycles in its connections. Various characteristics of RNNs are listed below; a minimal recurrent-update sketch follows the list.
- All biological neural networks are recurrent; mathematically, they implement dynamical systems.
- Several varieties of training algorithms are available.
- Practical applications to date have been held back by theoretical and practical difficulties.
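As a rough illustration of the feedback idea (not taken from the paper), an Elman-style recurrent update keeps an internal state that depends on both the current input and the previous state; the weight shapes and the tanh nonlinearity below are our assumptions.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_in, W_rec):
    """One recurrent step: the new hidden state depends on the current
    input and, through feedback connections, on the previous state."""
    return np.tanh(W_in @ x_t + W_rec @ h_prev)

rng = np.random.default_rng(1)
W_in = rng.normal(size=(4, 3))    # input-to-hidden weights
W_rec = rng.normal(size=(4, 4))   # hidden-to-hidden (feedback) weights

h = np.zeros(4)                   # internal state (memory)
for x_t in rng.random((5, 3)):    # an arbitrary sequence of 5 inputs
    h = rnn_step(x_t, h, W_in, W_rec)
print(h)
```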
Pattern Recognition
The term pattern recognition encompasses a wide range of information-processing problems of great practical significance, from speech recognition and the classification of handwritten characters to fault detection in machinery and medical diagnosis. The act of recognition can be divided into two broad categories: recognizing concrete items and recognizing abstract items. The recognition of concrete items involves the recognition of spatial and temporal items. Pattern recognition can be done both in conventional computers and in neural networks. This technique gives its result as YES or NO, which is why it is a simple technique. The rapidly growing available computing power, while enabling faster processing of huge data sets, has also facilitated the use of elaborate and diverse methods for data analysis and classification [4]. At the same time, demands on automatic pattern recognition systems are rising enormously owing to the availability of large databases and rigorous performance requirements, i.e. speed, accuracy and cost.
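As a toy example of the YES/NO style of recognition described above (ours, not the authors'), a stored binary pattern can be compared with a candidate pattern and accepted only if the number of mismatching cells stays under a threshold; the 3x3 "L" pattern and the mismatch limit are made up.

```python
import numpy as np

def recognize(stored, candidate, max_mismatches=1):
    """Return YES if the candidate differs from the stored pattern in at
    most `max_mismatches` positions, otherwise NO."""
    mismatches = int(np.sum(stored != candidate))
    return "YES" if mismatches <= max_mismatches else "NO"

# Hypothetical 3x3 binary pattern of the letter 'L' and a noisy copy
stored = np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]])
noisy = stored.copy()
noisy[0, 2] = 1                    # flip one cell
print(recognize(stored, noisy))    # YES
```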
II. LITERATURE REVIEW
Husam Ahmed Al Hamad [1] suggested that using an efficient neural network for recognition and segmentation improves the performance and accuracy of the results, in addition to reducing effort and cost. The paper investigates and compares the results of four different artificial neural network models. The neural techniques calculate the confidence values for each Prospective Segmentation Point (PSP) using the proposed classifiers in order to identify the better model that improves the recognition results for handwritten scripts. Processing times and accuracies were also reported.

Kauleshwar Prasad et al. [2] noted that recognition of handwritten text has been one of the active and challenging areas of research in the fields of image processing and pattern recognition. It has numerous applications, which include reading aids for the blind, bank cheques and conversion of any handwritten material into structured text form. In this paper the authors focused on recognition of the English alphabet in a given scanned text document with the help of neural networks. The authors used character extraction and edge detection algorithms for training the neural network to classify and recognize the handwritten characters.

Yingqiao Shi et al. [3] introduced the application status of ANN technology in printed character recognition and then elaborated on the technology of the standard BPN. Through formula derivation the authors showed that the standard BPN has some defects in this application, so they added a momentum term to improve the network and increase the training speed. They then randomly selected 200 handwritten number characters and 50 handwritten letter characters as a sample for experiments with the improved BPN; the results showed that the recognition rate for number characters was higher than for alphabetic characters, and the convergence speed and recognition performance were better.

Binu P. Chacko et al. [4] dealt with the recognition of handwritten characters of a South Dravidian script using the wavelet energy feature (WEF) and an extreme learning machine. The wavelet energy (WE) is a new and robust parameter derived using the wavelet transform. It can reduce the influence of different kinds of noise at different levels and can reflect the WE distribution of characters in several directions at different scales. The conventional learning algorithms of various classifiers are far slower than required, so the authors used a very fast learning algorithm called ELM for single hidden layer feed forward networks (SLFN). This algorithm learns much faster than traditional learning algorithms for feed forward neural networks.

Fajri Kurniawan et al. [5] presented a robust algorithm to identify letter boundaries in images of unconstrained handwritten words. The proposed algorithm is based on vertical contour analysis and generates a pre-segmentation by analysing the vertical contours from right to left. The unwanted segmentation points are reduced using neural network validation to improve the accuracy of segmentation. The results showed that the proposed algorithm is capable of accurately locating letter boundaries in unconstrained handwritten words.

Jayanta Kumar Basu et al. [6] observed that, among the various traditional approaches to pattern recognition, the statistical approach has been most intensively studied and used in practice; more recently, artificial neural network techniques have been receiving significant attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. The objective of their review paper was to summarize and compare some of the well-known methods used in various stages of a pattern recognition system using ANNs, and to identify research topics and applications at the forefront of this exciting and challenging field.

Dawei Qi et al. [7] formulated the edge detection problem as an optimisation process that seeks the edge points that minimize an energy function. The dynamics of Hopfield neural networks were applied to solve the optimisation problem. An initial edge was first estimated by a traditional edge detection method. The grey value of an image pixel was represented as the neuron state of the Hopfield neural network, and the states were updated until the energy function reached its minimum value. The final states of the neurons formed the resulting edge-detected image. The novel energy function ensured that the network converged and reached a near-optimal solution. Taking advantage of the collective computational ability and energy convergence capability of the Hopfield network, noise was removed. The experimental results showed that the proposed method can obtain more vivid and more accurate edges than traditional edge detection methods.

Zaheer Ahmad et al. [8] pointed out that Urdu compound character recognition is a scarcely developed area and requires robust techniques, as Urdu, belonging to the family of Arabic scripts, is cursive and written right to left, and its characters change their shapes and sizes when placed at the beginning, middle or end of a word. The developed system consists of two main modules: segmentation and classification. In the segmentation phase, pixel strength is measured to find words in a sentence and the joints of characters in a compound/connected word. In the next phase these segmented characters are fed to a trained neural network for classification and recognition, where a feed forward neural network was trained on 56 different classes of characters, each having 100 samples. The main purpose of the system was to test the algorithm developed for segmentation of compound characters. A prototype of the system was developed in MATLAB and currently achieves 70% accuracy on average.

Manu Pratap Singh et al. [9] analysed the performance of the back propagation algorithm with changing training patterns and a second momentum term in feed forward neural networks. The analysis was conducted on 250 different words of three lowercase letters from the English alphabet. These words were presented to two vertical segmentation programs, designed in MATLAB and based on portions (1/2 and 2/3) of the average height of the words, for segmentation into characters. These characters were clubbed together after binarization to form training patterns for the neural network. The network was trained by adjusting the connection strengths on each iteration by introducing the second momentum term, which alters the connection strengths quickly and efficiently. The conjugate gradient descent of each presented training pattern was computed to identify the error minima for each training pattern. The network was trained to learn its behaviour by presenting each of the 5 samples (final input samples having 26 × 5 = 130 letters) 100 times, thus achieving 500 trials, which indicate the significant difference between the two momentum variants on the data sets presented to the neural network. The results indicate that segmentation based on the 2/3 portion of height yields better segmentation, and that the neural network converges better and is more accurate when learning with the newly introduced momentum term.

Kothari et al. [10] proposed that the rough set approach may be applied in pattern recognition at three different stages: the pre-processing stage, the training stage and the architecture. Their paper proposed the application of a rough-neuro hybrid approach in the pre-processing stage of pattern recognition. In this project a training algorithm was first developed based on the Kohonen network; this was used as a benchmark to compare the results of the pure neural approach with the rough-neuro hybrid approach and to show that the efficiency of the latter is higher. Structural and statistical features were extracted from the images for the training process. The number of attributes was reduced by computing the reduct and core from the original attribute set, which resulted in a reduction in convergence time. This removal of redundancy also increases the speed of the process, reduces hardware complexity and thereby enhances the efficiency of the pattern recognition algorithm.
III. IMPLEMENTATION
Figure 2 Proposed model
The proposed work is presented in 4 main stages:
- Training the dataset
- Input the distorted pattern
- Matching with the training dataset
- Compare the result of both networks
The training process is done only once, and after training a feature-analysis-based database is created. This dataset is used for matching the input distorted pattern in order to recognize the correct pattern as output. The actual match is performed on the basis of this combined-network-based trained dataset. The input pattern is fed to the HP (Hopfield network) for the recognition process, where it is matched with the trained dataset according to the network. The matching output shows how much error the input contains. The output of this network is then fed as input to the next network, and the total output is compared with the output of the BPN.
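The paper gives no code for this pipeline, so the following is only a rough sketch of the flow described above: Hebbian training stores bipolar reference patterns in a Hopfield weight matrix, a recall step cleans a distorted input, and the cleaned pattern would then be handed to a separately trained back-propagation classifier (left here as a commented placeholder, classify_bpn) whose result could be compared with the BP-only path. The pattern encoding, the training rule, and all names are our assumptions, not the authors' implementation.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian training: store bipolar (+1/-1) patterns in a weight matrix."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def hopfield_recall(W, x, steps=10):
    """Synchronous recall: repeatedly update the states until they settle."""
    s = x.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Hypothetical 1-D bipolar patterns standing in for character bitmaps
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [-1, -1, 1, 1, -1, -1]])
W = train_hopfield(patterns)

distorted = patterns[0].copy()
distorted[0] *= -1                       # add one bit of distortion
cleaned = hopfield_recall(W, distorted)  # Hopfield stage removes the distortion

# classify_bpn(...) is a placeholder for a separately trained
# back-propagation classifier; it is not defined here.
# result_hybrid = classify_bpn(cleaned)     # HP + BP path
# result_bp_only = classify_bpn(distorted)  # BP-only path, for comparison
print(np.array_equal(cleaned, patterns[0]))  # True if recall succeeded
```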
IV. RESULTS

Table 1 Dataset with distortion for Alphabet A

No. of bits of distortion | BP  | HP+BP
1                         | 89% | 100%
4                         | 82% | 100%
6                         | 46% | 48%

Figure 3 Graphical analysis between output of BP and HP+BP for Alphabet A with different distortion bits

The above figure shows the graphical analysis of our dataset when the input is distorted. The graph shows the relation between the output of BP and the hybrid network (HP+BP) for one alphabet, and it indicates that the efficiency of the hybrid network in detecting errors is much higher than that of BP, as is its capacity to produce efficient results. Here, while providing input to the network, the user adds distortion bits, which complicate the input and produce a major change in the results when compared with the results for inputs provided to the network without distortion bits.

Table 2 Dataset without distortion inputs

Pattern | BP  | HP+BP
A       | 84% | 100%
B       | 97% | 100%
C       | 96% | 100%
D       | 82% | 100%
E       | 94% | 100%
F       | 92% | 100%
G       | 82% | 100%
H       | 96% | 100%

Figure 4 Graphical analysis of dataset without distortion bits

Figure 4 shows the graphical analysis of the different alphabets using both types of networks. Here the input contains no distortion bits when it is passed to the network. Distortion bits complicate the results, and there is a major difference between the results obtained with and without distortion bits.
V. CONCLUSION AND FUTURE SCOPE
In the present work we have implemented the combined network, i.e. BPN and HP, for pattern recognition of input patterns. In this work we have taken a sample of alphabet patterns to perform the pattern recognition. As the initial step, an image dataset is maintained to represent different kinds of character patterns, and these images are trained using the HP. The number of hidden layers is not fixed and depends on the complexity of the input. As the fault tolerance of the HP is greater than that of the BPN, its error-detecting capability is also greater, so the newly defined network of HP and BPN is more suitable for recognizing the input pattern than the BPN alone. With distortion, the accuracy level of the output is higher in the newly defined network than with the BPN alone, and the output we get is similar to the trained dataset. The presented model is the best option among those considered for pattern recognition of characters. For better results it can further be extended to work with other networks: since in this model we have considered HP and BPN, in place of the HP we could use a BAM network, because the fault tolerance of the BAM is also very high, like that of the HP. We can also change the type of input pattern, for example to images, facial features or cartoon characters.
REFERENCES
[1] Husam Ahmed Al Hamad, "Use an Efficient Neural Network to Improve the Arabic Handwriting Recognition," International Conference on Systems, Control, Signal Processing and Informatics, pp. 269-274, 2013.
[2] Kauleshwar Prasad, Devvrat C. Nigam, Ashmika Lakhotiya and Dheeren Umre, "Character Recognition using Matlab's Neural Network Toolbox," International Journal of u- and e-Service, Science and Technology, Vol. 6, No. 1, February 2013.
[3] Yingqiao Shi, Wenbing Fan, Guodong Shi, "The Research of Printed Character Recognition Based on Neural Network," International Symposium on Parallel Architectures, Algorithms and Programming, pp. 119-122, 2011.
[4] Binu P. Chacko, Vimal Krishnan, G. Raju, "Handwritten character recognition using wavelet energy and extreme learning machine," International Journal of Machine Learning and Cybernetics, Vol. 3, Issue 2, pp. 149-161, June 2012.
[5] Fajri Kurniawan, Mohd. Shafry Mohd. Rahim, Nimatus Sholihah, Akmal Rakhmadi and Dzulkifli Mohamad, "Characters Segmentation of Cursive Handwritten Words based on Contour Analysis and Neural Network Validation," ITB J. ICT, Vol. 5, No. 1, 2011.
[6] Jayanta Kumar Basu, Debnath Bhattacharyya, Tai-hoon Kim, "Use of Artificial Neural Network in Pattern Recognition," International Journal of Software Engineering and Its Applications, Vol. 4, No. 2, April 2010.
[7] Dawei Qi, Peng Zhang, Xuejing Jin and Xuefei Zhang, "Study on Wood Image Edge Detection Based on Hopfield Neural Network," Proceedings of the International Conference on Information and Automation, pp. 1942-1946, IEEE, 2010.
[8] Zaheer Ahmad, Jehanzeb Khan Orakzai, Inam Shamsher, "Urdu Compound Character Recognition using Feed Forward Neural Networks," 2nd IEEE International Conference on Computer Science and Information Technology (ICCSIT), pp. 457-462, 2009.
[9] Manu Pratap Singh, V.S. Dhaka, "Handwritten Character Recognition Using Modified Gradient Descent Technique of Neural Networks and Representation of Conjugate Descent for Training Patterns," IJE Transactions A: Basics, Vol. 22, No. 2, June 2009.
[10] Kothari, A.G., Keskar, A.G., Gokhale, A., Deshpande, R., Deshmukh, P., "Rough Set Approach for Feature Reduction in Pattern Recognition through Unsupervised Artificial Neural Network," First International Conference on Emerging Trends in Engineering and Technology (ICETET '08), 16-18 July 2008, pp. 1196-1199, 2008.