- Open Access
- Authors : Mahadevaprasad Y N, Chethan H K, M. Rajashekara
- Paper ID : IJERTCONV10IS11127
- Volume & Issue : ICEI – 2022 (Volume 10 – Issue 11)
- Published (First Online): 30-08-2022
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Kannada Text Detection In Scene Images using Feed Forward Neural Network
Mahadevaprasad Y N
Information Science and Engineering MIT, Thandavapura
Mysuru, India
Chethan H K
Computer Science and Engineering MIT, Thandavapura
Mysuru, India
M. Rajashekara
Dept. of Computer Science, University of Mysore, Mysuru, India
Abstract: Text is an invention of humankind that transfers rich and accurate high-level semantics and conveys human thoughts and emotions. Text complements other visual cues such as contour, color, and texture. Two commonly used approaches to this problem are stepwise methods and integrated methods, while the overall task is further divided into text detection and localization, classification, segmentation, and text recognition. The important methods used in these phases, together with their advantages, disadvantages, and applications, are presented in this paper. Numerous text-related applications for imagery are also presented. This research performs a comparative analysis of the basic processes in text detection.
Keywords: text localization; text detection; Feed Forward Neural Network (F2N2).
Kannada is one of the official languages of India and the primary (native or mother tongue) language of Karnataka state. Approximately 50-60 million people speak Kannada around the world, and it ranks 27th among the most spoken languages in the world.
INTRODUCTION
Text reading in natural images is classically divided into two tasks: text detection and word recognition. Text detection consists of generating candidate bounding boxes that are likely to contain lines of text, while word recognition processes each candidate bounding box and attempts to recognize the text within it, or possibly rejects the bounding box as a false-positive detection.
Analysis of text in natural scene images raises the problem of identifying words that appear on, e.g., billboards and road signs. If such words can be reliably recognized, they can be used for an enormous range of applications: content-based image retrieval, sign translation, intelligent driving assistance, and navigation aids for the visually impaired and for robots. For all these reasons, scene text detection has attracted increasing interest from the community in recent years.
Scene text image analysis for Indian languages such as Kannada poses several challenges because of specific features of the script. The South Dravidian languages have a large set of characters consisting of vowels, consonants, and consonant conjuncts. The character set also includes many compound characters that are formed using the basic characters. One way to perform Kannada scene text recognition is to segment words and characters from the text and then perform recognition. However, since the Kannada language has a large character set, such an approach has many classes to recognize. Another methodology is to segment each character into its basic glyphs and then recognize those glyphs. The glyph corresponding to a Kannada character primarily has two parts: the consonant and the vowel modifiers. The main aim of this research is to facilitate automation and reduce human effort.
Fig 1. Evolution of Indian characters from the ancient Brahmi script.
PROPOSED METHODOLOGY
The proposed method uses zone-wise horizontal and vertical profile-based features to recognize Kannada characters in mobile-camera and digital-camera images. The proposed system contains several phases: pre-processing, feature extraction, construction of a knowledge base for the training model, training, and character recognition with a classifier. The general framework of the proposed model is given in Fig. 2. A detailed explanation of each stage is given in the following sections.
Fig. 2. General framework for the proposed methodology.
Pre-processing
The scene text images suffer from problems such as lighting effects, shadows, blur, color degradation, and varying size. The purpose of this stage is to bring the images to the expected size and eliminate complex backgrounds so that further processing becomes easier. The pre-processing method contains a number of steps, which are detailed below.
Binarization
The input text image is converted into a binary image, in which each pixel takes only one of two possible values, 0 or 1. The character/word image is resized to a fixed size based on its length.
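As a concrete illustration, the sketch below shows one way this step could be implemented with OpenCV. Otsu thresholding and the 30x30 target size are assumptions here; the paper only states that the image is converted to a 0/1 binary image and resized to a fixed size.

```python
# Illustrative sketch only: binarize a character crop and normalize its size.
import cv2
import numpy as np

def binarize(image_bgr, size=(30, 30)):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # THRESH_BINARY_INV maps dark text on a light background to foreground (255);
    # Otsu's method picks the threshold automatically (assumed choice).
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    binary = cv2.resize(binary, size, interpolation=cv2.INTER_NEAREST)
    return (binary > 0).astype(np.uint8)  # each pixel is 0 or 1
```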
Thinning
Thinning denotes the process of reducing the width of a line-like entity from several pixels wide to just a single pixel. This step can eliminate irregularities in letters and, in turn, allows a simpler recognition algorithm to operate on character strokes that are only one pixel wide.
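A minimal sketch of this step is given below using scikit-image's skeletonize; the specific thinning algorithm is not named in the paper, so this choice is an assumption.

```python
# Illustrative sketch: reduce strokes of a 0/1 binary character image to
# one-pixel width. skeletonize is an assumed choice of thinning algorithm.
import numpy as np
from skimage.morphology import skeletonize

def thin(binary_01):
    return skeletonize(binary_01.astype(bool)).astype(np.uint8)
```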
Bounding Box Generation
Before analyzing any character in the character image, it is essential to identify the pixel boundaries of that character. Thus, a bounding box is created around the character.
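A possible implementation of this step is sketched below: it locates the first and last foreground rows and columns and crops the image to that bounding box. The function name is illustrative.

```python
import numpy as np

def bounding_box_crop(binary_01):
    # Rows/columns that contain at least one foreground pixel.
    rows = np.any(binary_01, axis=1)
    cols = np.any(binary_01, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return binary_01[top:bottom + 1, left:right + 1]
```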
Feature extraction
Features are extracted from the pre-processed image. Each image is divided into 15 horizontal zones and 15 vertical zones, where the size of each horizontal zone is 2x30 and the size of each vertical zone is 30x2 (i.e., the normalized image is 30x30 pixels). The number of foreground pixels in each zone is then counted. Finally, we get 30 features, which are stored in a feature vector. The feature vector is shown in equation (1).
FV = {f_i}, where 1 <= i <= 30        (1)

where FV is the feature vector and f_i is the feature of the i-th zone.
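The zone-wise pixel counting of equation (1) can be sketched as follows, assuming a 30x30 normalized binary image (implied by the 2x30 and 30x2 zone sizes); the ordering of horizontal versus vertical zones within the vector is an assumption.

```python
import cv2
import numpy as np

def extract_features(binary_01):
    # Normalize to 30x30 so that 15 horizontal zones (2x30) and
    # 15 vertical zones (30x2) tile the image exactly.
    img = cv2.resize(binary_01, (30, 30), interpolation=cv2.INTER_NEAREST)
    horizontal = [int(img[r:r + 2, :].sum()) for r in range(0, 30, 2)]  # 15 features
    vertical   = [int(img[:, c:c + 2].sum()) for c in range(0, 30, 2)]  # 15 features
    return np.array(horizontal + vertical, dtype=np.float32)            # FV = f_1..f_30
```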
Neural Network Training
A Feed Forward Neural Network (F2N2) is used for training. The features created from the training database are used to train the model. Input vectors and the corresponding target vectors are used to train the network until it can approximate a function, associate input vectors with specific output vectors, or correctly classify input vectors according to the database. Each input is weighted with a suitable weight matrix. The sum of the weighted inputs and the bias forms the input to the transfer function. The neural network used for training has 30 input neurons, as there are 30 input features, 38 hidden neurons, and 6 output neurons. Figure 3 shows an overview of the neural network.
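A minimal training sketch with the stated 30-38-6 architecture is shown below, using scikit-learn's MLPClassifier. The transfer function, optimizer, and number of iterations are not given in the paper and are assumptions here.

```python
from sklearn.neural_network import MLPClassifier

def train_f2n2(train_features, train_labels):
    # train_features: (N, 30) zone-wise feature vectors
    # train_labels:   N class labels drawn from 6 character classes
    model = MLPClassifier(hidden_layer_sizes=(38,),  # 38 hidden neurons
                          activation='logistic',     # assumed sigmoid transfer function
                          solver='adam',             # assumed optimizer
                          max_iter=1000)
    model.fit(train_features, train_labels)
    return model
```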
Character Recognition Model
The test image is processed to obtain zone-wise horizontal and vertical profile-based features, which are fed to the neural network for recognition.
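Putting the stages together, an end-to-end recognition sketch might look like the following, reusing the helper functions sketched in the earlier sections (binarize, thin, bounding_box_crop, extract_features); the wiring and names are illustrative, not the authors' implementation.

```python
import cv2

def recognize_character(image_path, model):
    image = cv2.imread(image_path)
    binary = binarize(image)            # pre-processing: binarization + resizing
    skeleton = thin(binary)             # thinning to one-pixel-wide strokes
    crop = bounding_box_crop(skeleton)  # bounding box generation
    features = extract_features(crop)   # 30 zone-wise profile features
    return model.predict(features.reshape(1, -1))[0]
```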
Fig. 3. Neural Network Model.
EXPERIMENTAL RESULTS AND DISCUSSIONS
The dataset is collected from government office display boards, traffic boards, and boards on various buildings in Karnataka. The dataset consists of more than 400 images of basic Kannada characters. The proposed character recognition methodology has been assessed on various samples exhibiting different issues across different images. The method achieves a recognition accuracy of 94%. The proposed method is efficient and insensitive to variations in size, font, noise, blur, and other degradations.
The experimental results of testing various character images with varying font styles, sizes, and backgrounds are given below.
Fig. 4. Kannada character in unusual font style.
The selected text image from the database is given in Figure 4. The image has plenty of challenges, such as unusual fonts and sizes, blur, etc. The pre-processing phase brings the images to a standard size, removes complex backgrounds, and makes them easier to process for word recognition. In the pre-processing stage, the color image is converted into a grayscale image and then into a binary image. The binary image is resized, after which thinning and bounding box generation are applied. The features are then extracted from each word, and in the next step testing is performed.
The proposed methodology is based on zone-wise horizontal and vertical profile-based features with a neural network as the classifier for Kannada character recognition. The system works in a training phase and a testing phase. Thorough experiments are performed to analyze the zone-wise horizontal and vertical profile-based features. The system effectively and efficiently processes camera-based images with challenges such as variable lighting environments, noise, blur, unusual fonts, etc. The methodology is tested with more than 400 trials and gives a recognition accuracy of 94%. The proposed method can be extended to word recognition by considering a novel set of features and classification algorithms.
Table 1. Output pattern for character images

Character Image | Corresponding output pattern | Recognized character from the input image
[character image] | ([[1, 9], [117, 9], [117, 121], [1, 121]], '') | [Kannada character]
[character image] | ([[16, 40], [146, 40], [146, 68], [16, 68]], 0.38741455790827334) | [Kannada character]
CONCLUSION
Indian languages have a huge collection of characters, which makes recognition difficult. The Kannada language has many characters shaped from several basic characters. From the literature survey, we learn that only a few works have been done on printed Kannada text recognition in scene images, identification, and classification using computer vision and machine learning techniques. So we have planned to take up this as the research work and will address the challenges mentioned.
This work attempts a novel methodology that aids the pre-processing and recognition of Kannada characters from camera-based images. The methodology is tested with 490 samples and provides a recognition accuracy of 94%. The method can be enhanced for word recognition.
ACKNOWLEDGMENT
Mahadeva Prasad Y N holds a master's degree (2010) in Computer Network Engineering from Visvesvaraya Technological University, Belgaum. Currently he is working as an Assistant Professor in the Dept. of IS&E, MIT, Thandavapura, Mysuru. He is pursuing his Ph.D. in Image Processing at MIT, Mysore (Research Centre) under the University of Mysore. His research interests include Image Processing, Data Mining, Artificial Intelligence, Machine Learning, and Wireless Sensor Networks.
Dr. Chethan H K is a Professor at MIT, Thandavapura, Mysuru. He received his doctorate from the University of Mysore and has published many papers in image processing and other areas. His research interests include Digital Image Processing, Artificial Intelligence, Machine Learning, etc.
M. Rajashekara holds a master's degree (2014) in Computer Science from the University of Mysore, Mysore. Currently he is working as a Lecturer in the Dept. of Studies in Computer Science, University of Mysore, Mysuru.