SENSOVISION – An Aid for the Visually Impaired

DOI: 10.17577/IJERTCONV4IS22068

S Navya Sri

UG Scholar

KSIT, Dept. of ECE, Bengaluru, India

Pooja R

UG Scholar

KSIT, Dept. of ECE, Bengaluru, India

Sumithra B

UG Scholar

KSIT, Dept. of ECE, Bengaluru, India

Sushma S N

UG Scholar

KSIT, Dept. of ECE, Bengaluru, India

Jayasudha B

Asst. Professor

KSIT, Dept. of ECE, Bengaluru, India

Abstract – Over the last decades, many wearable and portable devices have been developed to aid the visually impaired. This paper presents the implementation of one such device, which enables visually impaired people to read printed text as well as Braille. The crux of the design is the conversion of Braille and normal English text to speech using the FPGA – Spartan 3E. The user can switch easily between the two modes: Braille to speech and English text to speech. In this work, an approach has been attempted to extract and recognize text from images and convert the recognized text into speech. This capability can be an empowering force in a visually challenged person's life.

Keywords – Visually impaired, Braille, FPGA – Spartan 3E.

  1. INTRODUCTION

    Millions of people worldwide have visual impairments, including an estimated 39 million who are blind, and these impairments affect their ability to perform activities of daily life. People with visual impairment find it difficult to access text documents in many situations.

    Blindness can be of two types: acquired and congenital (blind from birth). In either case, accessing text is difficult, and hence the Braille system was introduced.

    Over time, the Braille system has been used by the visually impaired for communication and contact with the outside world. Braille is a system in which each letter is represented by a different combination of dots in a 3×2 cell. The dots of a Braille cell are embossed and can easily be felt in the absence of eyesight.

    Fig 1: Braille system
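
    In software, each Braille cell can be represented as a 3×2 binary pattern and matched against a small lookup table. The MATLAB sketch below is only an illustration (MATLAB is the environment used later in this work): the patterns shown are the standard Grade-1 Braille codes for the letters 'a', 'b', and 'c', while the function and variable names are ours and do not come from the implementation described here.

    % Minimal sketch: decode one Braille cell given as a 3x2 binary matrix
    % (1 = raised dot, 0 = flat). Only three letters are listed for brevity.
    function letter = decodeBrailleCell(cell3x2)
        patterns = {
            [1 0; 0 0; 0 0], 'a';   % dot 1
            [1 0; 1 0; 0 0], 'b';   % dots 1, 2
            [1 1; 0 0; 0 0], 'c'    % dots 1, 4
        };
        letter = '?';                           % returned when no pattern matches
        for k = 1:size(patterns, 1)
            if isequal(cell3x2, patterns{k, 1})
                letter = patterns{k, 2};
                return;
            end
        end
    end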

    As every individual is an integral part of society, the betterment of their lives becomes our responsibility. Due to their disability, visually impaired people find it difficult to access written text, documents, the internet, and much more.

    Consequently, this limits their participation in economic, social, commercial, and educational activities. Although text is available in various forms, this paper focuses on the challenge of reading printed text. Hence, to help this section of society, a system has been developed that helps them read both printed English text and Braille.

    Fig 2: Blind school

  2. LITERATURE SURVEY

    Many wearable and portable devices to aid the visually impaired have been developed in recent years. A number of portable reading-assistant systems have been designed particularly for visually challenged persons.

    Zaghloul et al. proposed a system for Arabic Braille with a large database of documents digitized at multiple sizes and resolutions [1]. Their system comprises preprocessing, cell detection, and interpretation stages. Wong et al. used a probabilistic neural network with simple image processing to recognize the letters [2]. Antonacopoulos et al. used an inexpensive flatbed scanner with little user interaction [3]. They performed segmentation with an efficient two-point thresholding method to obtain background, light, and dark regions, and constructed a resilient grid of potential dot positions. Braille cells were recognized and converted into normal printed text, with dictionary-based error detection as a final step. Falcón et al. presented the development of BrailLector, which speaks the recognized letters aloud [4]. Mihara et al. utilized a portable camera to design a Braille recognizer [5]; however, their system targets small-scale labels such as those found on an elevator or an apartment number. Similarly, Murray et al. designed another portable device to convert embossed Braille into normal text using a CCD camera [6].

    Al-Salman et al. proposed another Braille character recognition system using a flatbed scanner [7]. They used a cropped grayscale image after eliminating the black and white frames. For segmentation, they used a two-point threshold method to obtain background, dark, and light regions. They then applied de-skewing to the digitized document using a Binary Search Algorithm (BSA), after which the system performs a preliminary detection of Braille dots before the final Braille character recognition.

    Tai et al. proposed a highly efficient cell-detection approach for Braille documents that estimates indentations, skewness, and dot spacing in both the vertical and horizontal directions [8]. They estimated the obliqueness of the images using the Radon transform. Abdelmonem et al. used a flatbed scanner for the acquisition of Arabic Braille documents [9]; their detection handles both full and partial dots.

    Nobuo Ezaki, Marius Bulacu, and Lambert Schomaker implemented a system that reads text encountered in natural scenes with the aim of supporting visually impaired persons. Their paper describes a novel text-detection method geared towards small text characters, which uses Fisher's discriminant rate (FDR) to decide whether an image area should be binarized using local or global thresholds [10].

    Shehzad Muhammad Hanif and Lionel Prevost implemented a texture-based technique to detect text in grey-level natural scene images. It is part of a wearable system to enable navigation and to assist blind and visually impaired persons in the real world. The system has three parts: a stereovision bank, a processing unit for visual perception, and a handheld tactile surface. The textual/symbolic information interpretation module added to the vision system of the Intelligent Glasses recognizes text from the captured scene, and the textual and/or symbolic information is displayed on the handheld tactile surface [11].

    Kumar J.A.V., Visu A., Raj M.S., and Prabhu M.T. implemented an automated text-to-audio converting pen. If a person would like to read or understand any portion of text, that particular text can be converted to an audio signal, which is then transmitted to the person's ears through a wireless technology such as ZigBee [12].

    Oi-Mean Foong and Nurul Safwanah Bt Mohd Razali presented a signage recognition framework for Malaysian visually impaired people. Their proposed framework captures an image of public signage and transforms it into a text file using Otsu's OCR method. The text file is then read by a speech synthesizer that tells the visually impaired person what the image is. This framework does not need a huge database of signage, only a character database [13]. Krishnan K.G., Porkodi C.M., and Kanimozhi K. presented a method by which a blind person can get information about the shape of an image through a speech signal. The novelty of their work is the conversion of the image to sound using edge detection [14].

    Hangrong Pan, Chucai Yi, and Yingli Tian designed a computer-vision-based system to detect and recognize bus information from images captured by a camera at a bus stop. The system notifies visually impaired people, through speech, of information about the approaching bus, detecting the route number and other related information depicted as text. For bus detection, a histogram of oriented gradients (HOG) descriptor is used to extract image-based features of the bus facade, and a cascade SVM model is applied to train a bus classifier that recognizes the presence of a bus in sliding windows. For bus route number recognition, they design a text-detection algorithm based on layout analysis and text learning, and then recognize the text codes from the detected text regions for audio announcement [15].

    Michael R.T.F., RajaKumar B., Swaminathan S., and Ramkumar M. proposed a system helpful to visually challenged people. The model gives a visually challenged person the opportunity to operate mobile devices without using the keypad [16].

  3. METHODOLOGY OF PROPOSED SYSTEM

    The image is first captured and then preprocessed to remove noise and sharpen the region of interest. Each word and sentence is then segmented; segmentation is an essential step in image analysis, object representation, visualization, and many other image-processing tasks.
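
    A minimal MATLAB sketch of such a preprocessing and segmentation stage is shown below. It assumes the Image Processing Toolbox; the file name and variable names are illustrative and are not taken from the actual implementation.

    % Sketch: preprocess a captured page image and segment it into glyphs.
    img  = imread('captured_page.jpg');        % hypothetical file name
    gray = rgb2gray(img);                      % drop colour information
    gray = medfilt2(gray, [3 3]);              % median filter to suppress noise
    bw   = ~imbinarize(gray);                  % binarize; text/dots become foreground (1)
    bw   = bwareaopen(bw, 20);                 % discard tiny specks

    % Label connected components; each blob is a candidate character or Braille dot.
    [cc, num] = bwlabel(bw);
    stats = regionprops(cc, 'BoundingBox');

    glyphs = cell(1, num);
    for k = 1:num
        glyphs{k} = imcrop(bw, stats(k).BoundingBox);   % one segmented region
    end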

    The segmented images are then compared with predefined templates, and based on this mapping each character is recognized. In the same way, entire sentences can be recognized from the acquired images. The MATLAB design corresponding to this process is downloaded to the FPGA – Spartan 3E, and its output is fed to the speech synthesizer to produce the audio output, thus assisting the visually impaired.
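
    The template comparison can be sketched as a simple 2-D correlation against the stored templates, as below. This is a hypothetical fragment: templates is assumed to be a pre-built cell array of binary template images and labels a parallel character array of the corresponding letters.

    % Sketch: recognize one segmented glyph by correlation with stored templates.
    function ch = matchTemplate(glyph, templates, labels)
        glyph = imresize(glyph, size(templates{1}));    % bring glyph to template size
        best  = -Inf;
        ch    = '?';
        for k = 1:numel(templates)
            score = corr2(double(glyph), double(templates{k}));  % 2-D correlation coefficient
            if score > best
                best = score;
                ch   = labels(k);               % best-matching character so far
            end
        end
    end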

    Fig 3: Block diagram of proposed system

    The image is captured using an external camera and then read. The system then enters a selection mode to switch between printed English text recognition and Braille recognition: Braille conversion is performed when the selected mode is 1, and English conversion when the mode is 0. Segmentation is performed in each path. Text is recognized based on its pattern, while Braille is recognized based on the position and location of the dots. The result is then given to the speech synthesizer to produce the audio.
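
    As a rough illustration of this flow, the mode switch and the two recognition paths could be organised as in the sketch below. The names segmentBrailleCells and segmentCharacters are placeholders for the corresponding segmentation blocks, decodeBrailleCell and matchTemplate refer to the earlier sketches, and templates/labels are assumed to have been loaded beforehand; none of these names come from the actual code.

    % Sketch of the top-level flow: mode 1 = Braille, mode 0 = printed English text.
    img  = imread('captured_page.jpg');            % image from the external camera (hypothetical file)
    mode = 1;                                      % state of the software mode switch

    if mode == 1
        cells = segmentBrailleCells(img);          % placeholder: locate 3x2 dot groups
        text  = blanks(numel(cells));              % preallocate the output string
        for k = 1:numel(cells)
            text(k) = decodeBrailleCell(cells{k}); % dot pattern -> letter
        end
    else
        glyphs = segmentCharacters(img);           % placeholder: printed-text segmentation
        text   = blanks(numel(glyphs));
        for k = 1:numel(glyphs)
            text(k) = matchTemplate(glyphs{k}, templates, labels);  % template comparison
        end
    end

    disp(text);                                    % this string is handed to the speech synthesizer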

    Fig 4: Flow chart of the proposed system

  4. TESTS AND RESULTS

    1. SOFTWARE IMPLEMENTATION

      The design and programming of the model are done in the MATLAB environment. Simulink is used to create the graphical model, and finally a bit file is generated using the Xilinx tools and downloaded to the FPGA. Various parameters, such as camera height and brightness, are fixed while capturing the image. The various simulation windows are shown below.
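
      For instance, the brightness can be fixed programmatically when acquiring the frame. The fragment below is a sketch that assumes the Image Acquisition Toolbox with a 'winvideo' webcam adaptor; the exact adaptor name, video format, and Brightness property depend on the camera driver and are not taken from the actual setup.

      % Sketch: grab one frame from the webcam with a fixed brightness setting.
      vid = videoinput('winvideo', 1, 'RGB24_640x480');   % adaptor and format are assumptions
      src = getselectedsource(vid);
      src.Brightness = 128;        % fixed brightness (only if the driver exposes this property)
      img = getsnapshot(vid);      % single frame used for recognition
      delete(vid);                 % release the device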

      Fig 5: Input images

      Fig 6: Snapshot of Braille image to text conversion

      Fig 7: Snapshot of English image to text conversion

      Fig 8: Snapshot of sentence converted to text

      Fig 9: Snapshot of audio output

    2. HARDWARE IMPLEMENTATION

    The FPGA – Spartan 3E is the world's lowest-cost, logic-optimized, full-featured platform: a family of five devices with system gates ranging from 100K to 1.6M and I/O counts ranging from 66 to 366, with density migration. It increases system reliability by eliminating external components and supports projects ranging from simple logic circuits to complex controllers. It is fully compatible with the Xilinx ISE CAD tools. The heart of this project is the FPGA – Spartan 3E.

    Fig 10: FPGA – Spartan 3E

    The objective of this project is to convert both Braille and English text present in an image to audio, and hardware such as the FPGA – Spartan 3E is a good option for an individual interested in a low-cost implementation. It has many inbuilt features that let us experience the power of using it. The image captured with the help of a webcam is processed on the board to recognize the text, which is finally converted to an audio output, thus helping the visually impaired access information. The software switch designed in this prototype is used to select one of the two modes: (i) Braille to audio and (ii) English text to audio.

    Fig 11: Hardware Set-Up

  5. CONCLUSION

The system designed will help the visually impaired read both Braille and English text. The proposed model lets them switch easily between the Braille and English modes. The processing time is reduced because the delay in comparing with templates is reduced, and the audio output can be obtained in different accents.

  6. FURTHER ENHANCEMENTS

The proposed model can be further improved to make the system read text in different languages. A finger-mounted device can be developed using this concept. Video processing can be implemented instead of image processing for more accurate results.

  REFERENCES

  1. Rawan Ismail Zaghloul and Tomader Jameel Bani-Ata (2011), "Braille Recognition System With a Case Study Arabic Braille Documents", European Journal of Scientific Research, Vol. 62, pp. 116-122.

  2. Lisa Wong, Waleed Abdulla, and Stephan Hussmann (2004), "A Software Algorithm Prototype for Optical Recognition of Embossed Braille", Proceedings of the International Conference on Pattern Recognition (ICPR), Cambridge, UK, Vol. 2, pp. 586-589.

  3. A. Antonacopoulos and D. Bridson (2004), "A Robust Braille Recognition System", Lecture Notes in Computer Science, Vol. 3163, pp. 533-545.

  4. Néstor Falcón, Carlos M. Travieso, Jesús B. Alonso, and Miguel A. Ferrer (2005), "Image Processing Techniques for Braille Writing Recognition", Lecture Notes in Computer Science, Vol. 3643, pp. 379-385.

  5. Lisa Wong, Waleed Abdulla, and Stephan Hussmann (2004), "A Software Algorithm Prototype for Optical Recognition of Embossed Braille", Proceedings of the International Conference on Pattern Recognition (ICPR), Cambridge, UK, Vol. 2, pp. 586-589.

  6. Iain Murray and Andrew Pasquale (2006), "A Portable Device for the Translation of Braille to Literary Text", Proceedings of the ACM Conference on Assistive Technologies (ASSETS), pp. 231-232.

  7. AbdulMalik Al-Salman, Yosef AlOhali, Mohammed AlKanhal, and Abdullah AlRajih (2007), "An Arabic Optical Braille Recognition System", Proceedings of the International Conference on Information and Communication Technology & Accessibility (ICTA07), Tunisia, pp. 81-86.

  8. Zhenfei Tai, Samuel Cheng, and Pramode Verma (2008), "Braille Document Parameters Estimation for Optical Character Recognition", Lecture Notes in Computer Science, Vol. 5359, pp. 905-914.

  9. Marwa Abdelmonem, M. El-Hoseiny, Asmaa Ali, Karim Emara, Habiba Abdel Hafez, and Asmaa Gamal (2009), "Dynamic Optical Braille Recognition (OBR) System", Proceedings of the International Conference on Image Processing, Computer Vision, & Pattern Recognition (IPCV), Las Vegas, Nevada, USA, pp. 779-786.

  10. Nobuo Ezaki, Marius Bulacu, and Lambert Schomaker (2005), "Improved Text-Detection Methods for a Camera-Based Text Reading System for Blind Persons", Proceedings of the Eighth International Conference on Document Analysis and Recognition (ICDAR), Vol. 1, pp. 257-261, ISSN: 1520-5263.

  11. Shehzad Muhammad Hanif and Lionel Prevost (2007), "Texture Based Text Detection in Natural Scene Images: A Help to Blind and Visually Impaired Persons", Conference & Workshop on Assistive Technologies for People with Vision & Hearing Impairments (CVHI).

  12. Kumar J.A.V., Visu A., Raj M.S., Prabhu M.T., and Kalaiselvi V.K.G. (2011), "A Pragmatic Approach to Aid Visually Impaired People in Reading, Visualizing and Understanding Textual Contents with an Automatic Electronic Pen", IEEE International Conference on Computer Science and Automation Engineering (CSA), Vol. 4, pp. 623-626.

  13. Oi-Mean Foong and Nurul Safwanah Bt Mohd Razali (2011), "Signage Recognition Framework for Visually Impaired People", International Conference on Computer Communication and Management, Proc. of CSIT, Vol. 5, IACSIT Press, Singapore.
