- Open Access
- Authors : Sandeep Babu , Suryakeerthi V , Sidhu M Raju , Praveen Mathew , Ria Maria George
- Paper ID : IJERTCONV3IS05035
- Volume & Issue : NCETET – 2015 (Volume 3 – Issue 05)
- Published (First Online): 24-04-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Wearable Device for Blind
Sandeep Babu1, Suryakeerthi V1, Sidhu M Raju1, Praveen Mathew1 and Ria Maria George2
Department of Electronics and Communication, Amal Jyothi College of Engineering,
M.G. University, Kottayam, India
1 UG Student; 2 Assistant Professor (Guide)
Abstract: Wearable Assistive Device for the Blind is an embedded device dedicated to blind or visually impaired people. Assistive devices are potential aids for people with physical and sensory disabilities and can lead to improvements in quality of life. For these users, much of the information present in daily life, such as newspapers, banknotes, train schedules, books and postal letters, is not easily accessible. Our aim is to build an automatic text reading assistant that combines small size, mobility and low cost, using existing hardware together with innovative algorithms. The project incorporates recent advances in image processing and makes use of QR codes in this process.
Keywords: Optical Character Recognition, QR Code, Facial Recognition, Image Processing
INTRODUCTION
Visual information is inaccessible to individuals who are blind or visually impaired. Given that many everyday tasks rely on visual data, including coordinating clothing and social interactions, this inaccessibility has an adverse effect on daily life. We propose an interactive, wearable assistive device that can recognize and convey meaningful data and guidance. Because computer vision is challenging in real-world environments due to, e.g., illumination or pose changes, computer vision algorithms can be augmented with subsystems that provide information on the working environment of a recognition algorithm and on how it affects recognition accuracy [1].
Current products made to help the blind navigate rely heavily on GPS, which isn't always detailed or accurate enough to distinguish between, say, a sidewalk and a street. Moreover, GPS isn't always available in places like parking garages, underground transit stations and sports venues, and it doesn't pick up on obstacles like crowds and cars.
QR codes (ISO/IEC 18004) are the type of 2D barcode whose use has grown most sharply in recent years. Fig. 1 shows some examples of QR code symbols. The most common use of QR codes is as physical hyperlinks connecting places and objects to websites. QR codes have been designed to be easily found, and to have their size and orientation determined, even under bad imaging conditions [2].
Fig. 1. Examples of QR code symbols.
Optical character recognition (OCR) is the mechanical or electronic conversion of images of typewritten or printed text into machine-encoded text.
A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations, like phonetic transcriptions, into speech. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database [3].
The aim of this work is to develop a method for recognizing printed text, detecting QR codes and recognizing faces in arbitrarily acquired images. Once the presence of such data is detected, the camera holder can be guided by some appropriate set of commands to correctly frame the data.
ASSISTIVE TECHNOLOGY
From the point of view of visually impaired people, perception of the surrounding environment is very important, even essential, in order to facilitate their mobility.
Assistive technologies for environmental perception and for navigation in the surrounding environment are advancing day by day. In the last decade, a variety of portable navigation systems have been designed to assist people with visual disabilities during navigation in indoor/outdoor, known/unknown environments (an electronic cane for indoor navigation, AudioMUD, SMART Vision, VONAVS, E-Glass) [6].
Another important aspect concerning visually impaired people is the need for everyday information and its fulfilment through modern assistive technologies: audio transcription of printed information, access to documents and books, music software, communication and information access, computing, telecommunications, tactile access to information, and speech, text and Braille conversion technology [7].
SYSTEM FRAMEWORK
The framework of the system is developed from the architecture of the system.
SYSTEM DESIGN
ARCHITECTURE OF THE SYSTEM
This system has a three-layer architecture:
- Input Interface Layer: takes input from the real world and passes it to the processing layer, where it is processed.
- Processing Layer: all processing is done in this layer, which is responsible for the control and management of the whole system.
- Output Interface Layer: provides the output to the user corresponding to the data obtained in the processing layer.
Fig.2. System Architecture
Fig.3. System framework
Input Interface Layer
The input devices are connected in this layer.
Camera
It is used to acquire visual information from the real world into the system. The camera takes photos/videos at random intervals and sends them to the processor. Here we prefer a small on-chip camera with high clarity and resolution and built-in noise filtering.
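As a software-side illustration of this camera input stage (not the authors' firmware), a minimal capture sketch using OpenCV might look as follows; the camera index, resolution and output file name are assumptions.

```python
# Minimal camera-capture sketch (illustrative; camera index and resolution
# are assumptions, not taken from the paper).
import cv2

cap = cv2.VideoCapture(0)                      # first attached camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok, frame = cap.read()                         # grab one frame
if ok:
    cv2.imwrite("capture.jpg", frame)          # hand the image to the processing layer
cap.release()
```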
Control switch
It is used by the user to control the status of the system, so that the blind user can access the information he or she requires from the data collected by the system.
Processing Layer
Control and processing of data is done in this layer.
PROCESSOR
The processors selected for the development of this project are the SAM3X and an ARM Cortex processor (i.MX6), embedded together on a project board (UDOO).
Between the two processors there is a direct UART serial connection which is always on. Through this serial connection, the two processors communicate directly with each other. For example, the system uploads sketches to the SAM3X from the i.MX6 running Linux, within the Arduino IDE. As on other Arduino boards, the serial data are also available at pin 0 and pin 1 (RX0/TX0).
The SAM3X operates mainly in the input interface layer and the output interface layer, whereas the ARM Cortex (i.MX6) operates in the processing layer, performing image processing and data processing. Various I/O controls are equally accessible to both processors.
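As an illustrative sketch only, the Linux side of this UART link could be exercised with pyserial as shown below; the device path and baud rate are assumptions and should be checked against the board documentation.

```python
# Sketch of the Linux-side (i.MX6) end of the internal UART link to the SAM3X.
# The device path, baud rate and command string are assumptions.
import serial

link = serial.Serial("/dev/ttymxc3", 115200, timeout=1)

link.write(b"READ_SWITCH\n")   # ask the SAM3X (I/O side) for the control-switch state
reply = link.readline()        # the SAM3X answers over the same UART
print(reply.decode(errors="replace"))
link.close()
```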
ARM is a family of instruction set architectures for computer processors based on a reduced instruction set computing (RISC) architecture developed by the British company ARM Holdings. Atmel provides the ATSAM3U line of flash-based microcontrollers based on the ARM Cortex-M3 processor, as a higher-end evolution of the SAM7 microcontroller products. They have a top clock speed in the range of 100 MHz, and come in a variety of flash sizes.
Fig.4. Processor Architecture
GSM MODULE
It is used to access data from the internet using GPRS technology.
TTS MODULE
It is used to convert the text output given by the processor into speech; its output is fed to the speaker.
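The paper uses a hardware TTS module; purely as a software illustration of the same text-to-speech step, a sketch with the pyttsx3 library (an assumption, not the module used) could look like this:

```python
# Offline text-to-speech sketch using pyttsx3 (illustrative substitute for the
# hardware TTS module described in the paper).
import pyttsx3

engine = pyttsx3.init()
engine.say("Obstacle ahead, please stop.")  # text produced by the processing layer
engine.runAndWait()                         # audio is played through the speaker/headset
```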
Output Interface Layer
The output devices are interfaced in this layer.
SPEAKER
It gives speech output to the user; a headset-type speaker is preferred.
SMART STICK
Blind mobility is one of the main challenges that scientists are still facing in different parts of the world. According to the World Health Organization, approximately 0.4% of the population is blind in industrialized countries, while the percentage rises to 1% in developing countries [5]. Currently, blind people use a traditional cane as a tool for guiding them when they move from one place to another. Although the traditional cane is the most widespread aid used today by visually impaired people, it cannot help them detect dangers from all levels of obstacles [4].
In this context, we propose a new intelligent smart stick system for guiding individuals who are blind or partially sighted. The system enables blind people to move about with the help of an ultrasonic sensor and a vibrator module. Ultrasonic distance-measuring sensors provide information on the absolute position of a target or moving object. The vibrator keeps the blind user informed about an obstacle in front of them whenever the module detects one. The device sends out ultrasonic waves as the user walks along the road. If an object is present in front, the waves are reflected back to the sensor. The reflected waves are captured by the ultrasonic sensor, which in turn activates the vibrator module. Thus the user is notified about the obstacle in front of him or her.
Sonar, like radar, uses the principle of echo location. For echo location, a short pulse is sent in a specific direction. When the pulse hits an object, which does not absorb the pulse, it bounces back, after which the echo can be picked up by a detector circuit.
By measuring the time between sending the pulse and detecting the echo, the distance to the object can be determined. Multiplying the time between pulse and echo by 343 (the speed of sound in air, 343 m/s) gives twice the distance to the object in meters, since the sound travels the distance twice on its way to the object and back [9]:

2d = Vs (Te - Tp)

where
Vs = speed of sound in air
Tp = time (in seconds) of pulse transmission
Te = time (in seconds) of echo detection
d = distance to the object from which the pulse bounces back
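A minimal sketch of this calculation, assuming timestamps for pulse transmission and echo detection are available from the sensor driver (the 0.5 m vibration threshold is an assumption for illustration):

```python
# Distance-from-echo sketch based on the relation 2d = Vs * (Te - Tp).
SPEED_OF_SOUND = 343.0  # m/s in air

def distance_m(t_pulse: float, t_echo: float) -> float:
    """Distance to the obstacle given pulse-send and echo-detect times (seconds)."""
    return SPEED_OF_SOUND * (t_echo - t_pulse) / 2.0

def should_vibrate(t_pulse: float, t_echo: float, limit_m: float = 0.5) -> bool:
    """Activate the vibrator when the obstacle is closer than limit_m meters."""
    return distance_m(t_pulse, t_echo) < limit_m

# Example: an echo received 3 ms after the pulse corresponds to roughly 0.51 m.
print(distance_m(0.0, 0.003))
```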
Fig.5. Working of Ultrasonic Sensor.
QR CODE
Each QR Code symbol consists of an encoding region and function patterns, as shown in Fig 6.
Fig.6. The structure of QR Code
DECODING QR CODE
Gray conversion
The QR Code symbol is captured by the embedded system with a CCD or CMOS sensor, producing a colour image. A QR Code symbol, however, is simply a set of dark and light modules: colour information is not needed, and a grayscale image can be processed quickly with little memory, so gray conversion is performed first.
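A minimal gray-conversion sketch using OpenCV (the input file name is a placeholder):

```python
# Gray conversion: drop colour information before QR decoding.
# OpenCV's BGR-to-gray conversion uses the standard luminance weighting
# (roughly 0.299 R + 0.587 G + 0.114 B).
import cv2

frame = cv2.imread("qr_capture.jpg")            # colour image from the camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # single-channel grayscale image
```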
Binarization
Binarization of grayscale images is the first and an important step to be carried out in the pre-processing system. Selection of a proper binarization method is critical to the performance of a barcode recognition system. In binarizing an image, a simple and popular method is thresholding. There are two types of thresholding methods: global and local thresholding. In the international standard for QR Code, a global threshold, taken as the middle value between the maximum reflectance and the minimum reflectance in the image, is used.
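A sketch of this global midpoint threshold, assuming an 8-bit grayscale image as input:

```python
# Global-threshold binarization: threshold = midpoint between the minimum
# and maximum reflectance (pixel value) in the grayscale image.
import numpy as np

def binarize(gray: np.ndarray) -> np.ndarray:
    threshold = (int(gray.min()) + int(gray.max())) / 2.0
    return np.where(gray > threshold, 255, 0).astype(np.uint8)  # light modules 255, dark 0
```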
OPTICAL CHARACTER RECOGNITION
Character recognition technology enables automatic digitization of printed characters into computer codes. OCR systems take a scanned character image file as input and automatically generate a text file. An example of OCR is shown in Fig. 7. First, an image is acquired through any of the standard image acquisition techniques. The input image is assumed to be in the Y, Cb, Cr colour format.
The obtained binary image is then passed on to a specific edge detection process. The edge detection algorithm is performed such that only the right-sided edges of each alphabet are obtained and the other edges are eliminated. After edge detection, the image is segmented and feature extraction is performed. In this step, different details of the segments, which are required for further processing, are stored.
The next step is to profile stored line segments. Profiling of segments is the process of categorizing them into different types of segments such as short, long, line or curve, etc.
Fig.7. OCR
Each of the alphabets is given a feature vector containing a set of flags (bits). Each flag corresponds to one of the segment profiles of which the alphabet is made. The segments extracted and profiled are used to update these feature vectors for each alphabet.
When the complete alphabet has been processed, the feature vector in which all bits are set identifies the recognized alphabet. Finally, the ASCII equivalent of the recognized alphabet is given as output [11].
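The segment-profiling algorithm above follows [11]; purely as a generic illustration of the OCR step (printed-text image in, machine-encoded text out), a sketch using the Tesseract engine through pytesseract (an assumption, not the authors' implementation) could look like this:

```python
# Generic OCR sketch: convert a captured page image to machine-encoded text.
import cv2
import pytesseract

image = cv2.imread("page.jpg")                  # captured document image (placeholder name)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # simplify before recognition
text = pytesseract.image_to_string(gray)        # recognized text
print(text)                                     # would be passed on to the TTS module
```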
FACE RECOGNITION
Person identification is one of the most crucial building blocks for smart interactions.
The algorithm does not rely on detection of any salient facial features, such as the eyes. It simply partitions an aligned face image into non-overlapping blocks of 8×8 pixels. The discrete cosine transform (DCT) is used to represent the local regions. The DCT closely approximates the compact representation ability of the Karhunen-Loeve transform (KLT), which makes it very useful for representation both in terms of information packing and in terms of computational complexity [8].
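A minimal sketch of this block-wise DCT feature extraction, where the number of low-frequency coefficients kept per block is an assumption:

```python
# Block-DCT features: partition an aligned grayscale face image into
# non-overlapping 8x8 blocks and describe each block by a few low-frequency
# DCT coefficients.
import cv2
import numpy as np

def dct_features(face_gray: np.ndarray, block: int = 8, keep: int = 3) -> np.ndarray:
    h = face_gray.shape[0] - face_gray.shape[0] % block
    w = face_gray.shape[1] - face_gray.shape[1] % block
    img = np.float32(face_gray[:h, :w])
    feats = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            coeffs = cv2.dct(img[y:y + block, x:x + block])  # 2-D DCT of one block
            feats.append(coeffs[:keep, :keep].ravel())       # low-frequency corner
    return np.concatenate(feats)
```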
IMPLEMENTATION OF FACE RECOGNITION TECHNOLOGY
The implementation of face recognition technology includes the following stages:
Data acquisition:
The input can be recorded video of the speaker or a still image. A sample of 1 sec duration consists of a 25 frame video sequence. More than one camera can be used to produce a 3D representation of the face and to protect against the usage of photographs to gain unauthorized access.
Input processing:
A pre-processing module locates the eye positions and compensates for the surrounding lighting conditions and colour variance. First, the presence of a face or faces in a scene must be detected. Once a face is detected, it must be localized, and a normalization process may be required to bring the dimensions of the live facial sample into alignment with the one on the template.
Face image classification and decision making:
In the training phase, the system creates a prototype called a faceprint for each person. A newly recorded pattern is pre-processed and compared with each faceprint stored in the database. As comparisons are made, the system assigns a value to each comparison on a scale of one to ten. If a score is above a predetermined threshold, a match is declared. From the image of the face, a particular trait is extracted and stored in the database.
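A minimal decision sketch along these lines, where the cosine-similarity measure and the threshold value are assumptions rather than the system's actual scoring rule:

```python
# Compare a query feature vector against stored faceprints and declare a
# match only when the best similarity clears a preset threshold.
import numpy as np

def best_match(query: np.ndarray, faceprints: dict, threshold: float = 0.9):
    best_name, best_score = None, -1.0
    for name, stored in faceprints.items():
        score = float(np.dot(query, stored) /
                      (np.linalg.norm(query) * np.linalg.norm(stored) + 1e-9))
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)
```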
Fig.8.Face Recognition Block Diagram.
CONCLUSION
In this paper, we incorporate different engineering domains such as TTS, OCR, QR technology and face recognition to develop an integrated system that is capable of assisting the blind and improving their quality of life.
Here, we bring together recent advances in technology across various domains to develop a system which has the potential to become a virtual guide for the blind. Quick Response (QR) codes are successfully decoded from arbitrarily captured images and the data is successfully processed.
The OCR program successfully converts text data in an image to digital text format. The face recognition program is capable of detecting and recognizing the presence of friends in the blind user's vicinity. The friends' faces can be stored in a database for comparison and recognition. The Wi-Fi facility and GSM compatibility of the embedded board allow the system to access data from the internet to guide the blind user on demand.
The programs driving the various functions and processes are integrated into a single master program that runs the embedded system.
REFERENCES
[1] T. McDaniel, K. Kahol, and S. Panchanathan, "An interactive wearable assistive device for individuals who are blind for color perception," in HCI International 2007, Beijing, P.R. China, pp. 751-760.
[2] Luiz F. F. Belussi and Nina S. T. Hirata, "Fast QR Code Detection in Arbitrarily Acquired Images," in 2011 24th SIBGRAPI Conference on Graphics, Patterns and Images.
[3] Allen, Jonathan; Hunnicutt, M. Sharon; Klatt, Dennis, From Text to Speech: The MITalk System. Cambridge University Press, ISBN 0-521-30641-8, 1987.
[4] Abdel Ilah Nour Alshbatat, "Automated Mobility and Orientation System for Blind or Partially Sighted People," Department of Electrical Engineering, Tafila Technical University, Tafila 66110, Jordan.
[5] Chaudhry M., Kamran M., Afzal S., "Speaking monuments: design and implementation of an RFID based blind friendly environment," in Electrical Engineering, 2008 (ICEE 2008), Second International Conference on, 25-26 March 2008, pp. 1-6.
[6] Sándor Tihamér Brassai, László Bakó, Lajos Losonczi, "Assistive Technologies for Visually Impaired People," Acta Universitatis Sapientiae, Electrical and Mechanical Engineering, 3 (2011), pp. 39-50.
[7] Keating, D., Parks, S., et al., Assistive Technology for Visually Impaired and Blind People, Springer, 2008.
[8] Research, Computer Science Department, Universität Karlsruhe (TH), 76131 Karlsruhe, Germany, e-mail: ekenel@ira.uka.de, web: http://isl.ira.uka.de
[9] Nandhini N., Vinothchakkaravarthy G., G. Deepa Priya, "Talking Assistance about Location Finding both Indoor and Outdoor for Blind People," in International Journal of Innovative Research in Science, Engineering and Technology.
[10] Yue Liu and Mingjun Liu, "Automatic Recognition Algorithm of Quick Response Code Based on Embedded System," in Proceedings of the Sixth International Conference on Intelligent Systems Design and Applications (ISDA'06).
[11] Sushruth Shastry, Gunasheela G, Thejus Dutt, Vinay D S and Sudhir Rao Rupanagudi, "'i' - A novel algorithm for Optical Character Recognition (OCR)," in IEEE, 10.1109/iMac4s.2013.652644.