Navigating By Means Of Electronic Perceptible Assistance System

DOI : 10.17577/IJERTV1IS5350


Akella.S.Narasimha Raju1, S.M.K.Chaitanya2 and Vundavalli Ravindra3

1,2,3 Department of Electronics & Communication Engineering

1,3 V.S.M. College of Engineering, AU, Ramachandrapuram, AP, India

2Gayathri Vidya Parishad Engineering College, Visakhapatnam, AP, India

Abstract

This paper describes an Electronic Blind Assistance System developed for the visually impaired, with the help of which they can move around the world without assistance from others. Using this system, blind persons can navigate autonomously through indoor and outdoor environments, since it combines wireless networks such as ZigBee and Bluetooth with GPS technology. The user wears a vision sensor on the head and holds the GuideCane in front of him/her while walking. The system provides information about the environment that lies ahead and detects obstacles at ground level, so that obstacles from head height down to foot level are covered. It also reports the position and location of the user. All information about the environment, obstacle detection, position and location is delivered in the form of audio signals, and a remote server performs the optimal path planning.

Keywords: Blind, navigation, ZigBee, Bluetooth, GPS, image processing

  1. Introduction

Much of the information that humans get from the outside world is obtained through sight. Without this faculty, visually impaired people suffer inconveniences in their daily and social life. A total loss of eyesight is one of the most serious misfortunes that can happen. Currently there are about 55 million blind people in the world, and this population is anticipated to reach 75 million by 2020.

    Guidance by other humans with good vision, or by specially trained dogs, is an obvious solution for helping blind persons navigate both inside and outside the house. However, dependence on other humans is demanding and constraining in many ways. Trained dogs are very helpful, but they have limitations, including an inability to interpret what the blind person really wants and to identify objects, in addition to the continuous cost of caring for the dog.

Since the 1960s, evolving technology has helped many researchers build electronic devices for navigation. These devices are classified as follows:

    1) Vision enhancement, 2) Vision replacement, and 3) Vision substitution.

Vision enhancement involves taking input from a camera, processing the information, and presenting the output on a visual display.

    Vision replacement involves displaying the information directly to the visual cortex of the human brain or via the optic nerve.

    Vision substitution is similar to vision enhancement but with the output being non-visual, typically tactual or auditory or some combination of the two.

The category on which we focus in this work is vision substitution. Electronic travel aids (ETAs) are devices that transform information about the environment that would normally be relayed through vision into a form that can be conveyed through another sensory modality. Blind people's navigation is restricted because they do not receive enough information about the objects or obstacles in their environment. ETAs are electronic devices designed to improve the autonomous navigation of blind people; their designs vary in size, the type of sensor used, the method of conveying information, and the method of usage.

The National Research Council's guidelines for ETAs are listed below:

1. Detection of obstacles in the travel path from ground level to head height for the full body width.

    2. Travel surface information including textures and discontinuities.

    3. Detection of objects bordering the travel path for shore lining and projection.

    4. Distant object and cardinal direction information for projection of a straight line.

    5. Landmark location and identification information.

    6. Information enabling self-familiarization and mental mapping of an environment.

7. In addition, the device should be ergonomic, operate with minimal interference with natural sensory channels, be a single unit, reliable, durable, easily repairable, robust and low power, offer a user choice of auditory or tactile modalities, and be cosmetically acceptable.

  2. Existing System

The IEEE survey paper [1] on assistance for blind people describes a number of different Electronic Travel Aids, two of which are summarized below:

2.1. Navigation Assistance for Visually Impaired (NAVI) [2]

Sainarayanan et al. from University Malaysia Sabah developed a sound-based ETA to assist blind people with obstacle identification during navigation, by identifying objects that are in front of them. The prototype Navigation Assistance for Visually Impaired (NAVI) [2] consists of a digital video camera, headgear that holds the camera, stereo headphones, a single-board processing system (SBPS), rechargeable batteries, and a vest that holds the SBPS and batteries. The idea is that humans focus on objects that are in front of the centre of vision, so it is important to distinguish between background and obstacles. The video camera captures grayscale video, which is resampled to 32 x 32 resolution. Using a fuzzy learning vector quantization (LVQ) neural network, the pixels are classified as either background or objects using different gray-level features. The object pixels are then enhanced and the background suppressed. The final stage cuts the processed image into left and right parts and transforms them to stereo sound that is sent to the user through the headphones.

2.2. GuideCane [3]

The GuideCane [3] project by Borenstein is a device that the user holds like a white cane and that guides the user by changing its direction when an obstacle is detected. A handle (cane) is connected to the main device, which has wheels, a steering mechanism, ultrasonic sensors, and a computer. The operation is simple: the user pushes the GuideCane forward, and when an obstacle is detected the obstacle avoidance algorithm chooses an alternate direction until the obstacle is cleared, after which the route is resumed (either parallel to the initial direction or along the same line). There is also a thumb-operated joystick on the handle so that the user can change the direction of the cane (left or right). The sensors can detect small obstacles on the ground and sideways obstacles such as walls. The computer automatically analyzes the situation and guides the user without requiring him/her to manually scan the area, so there is no need for extensive training.

3. Proposed System Concept

The proposed system is designed as follows. The existing system comprises the two Electronic Travel Aids discussed above, each of which serves a special purpose for blind people. NAVI gives information about the environment in the form of audio signals by means of image processing and fuzzy-logic methodology. The GuideCane detects obstacles at ground level and provides optimal path planning.

In the proposed system, we combine these two Electronic Travel Aids into a single ETA named the Electronic Perceptible Assistance System. The system could be implemented naively by making the user carry both the NAVI and the GuideCane at the same time, but the two systems would then have to be connected with wires to the main processing system. The main aim of our proposed system is therefore to create a wireless system that is easy to carry and helps blind people navigate effortlessly on their own.

We therefore propose to connect both ETAs over wireless links such as ZigBee and Bluetooth. We also include a Global Positioning System (GPS) receiver to obtain the position and location of the blind person, together with headphones and a microphone to deliver and receive messages in the form of audio signals.

4. ZigBee

The past several years have witnessed a rapid growth of wireless networking. However, up to now wireless networking has mainly focused on high-speed, relatively long-range applications such as the IEEE 802.11 Wireless Local Area Network (WLAN) standards. The first well-known standard focusing on Low-Rate Wireless Personal Area Networks (LR-WPAN) was Bluetooth, but it has limited capacity for networking many nodes. Many wireless monitoring and control applications in industrial and home environments require longer battery life, lower data rates and less complexity than existing standards provide. For such wireless applications, IEEE has developed a new standard called IEEE 802.15.4. The new standard is also called ZigBee when the additional stack layers designed by the ZigBee Alliance are used.

The name ZigBee is said to come from the domestic honeybee, which uses a zigzag type of dance to communicate important information to other hive members. This communication dance (the "ZigBee principle") is what engineers are trying to emulate with this protocol: a collection of separate and simple organisms that join together to tackle complex tasks. The goal IEEE had when specifying the IEEE 802.15.4 standard was to provide ultra-low complexity, ultra-low cost, ultra-low power consumption and low data rate wireless connectivity among inexpensive devices. The raw data rate, a maximum of 250 kb/s, is high enough for applications such as sensors, alarms and toys.

    IEEE 802.15.4 networks use three types of devices:

    • The network Coordinator maintains overall network knowledge. It is the most sophisticated one of the three types, and requires the most memory and computing power.

• The Full Function Device (FFD) supports all IEEE 802.15.4 functions and features specified by the standard. It can function as a network coordinator. Additional memory and computing power make it ideal for network router functions, or it could be used in network-edge devices (where the network touches the real world).

    • The Reduced Function Device (RFD) carries limited (as specified by the standard) functionality to lower cost and complexity. It is generally found in network-edge devices. The RFD can be used where extremely low power consumption is a necessity.

4.1. Architecture

Fig 1 shows the high-level software architecture of ZigBee. As illustrated in the figure, the software stack comprises three basic levels:

      • Application level
      • ZigBee Stack level
      • Physical/Data Link level

Fig 1: Architecture of ZigBee

      These levels are described below.

      4.1.1 Application Level

      The Application level contains the applications that run on the network node. These give the device its functionality – essentially an application converts input into digital data, and/or converts digital data into output. A single node may run several applications – for example, an environmental sensor may contain separate applications to measure temperature, humidity and atmospheric pressure.

      4.1.2. Zigbee Stack Level

      The Zigbee Stack level provides the Zigbee functionality, and provides the glue between the applications and the Physical/Data Link level. It consists of stack layers concerned with network structure, routing and security (encryption, key management and authentication).

4.1.3. Physical/Data Link Level

      The Physical/Data Link level is concerned with low-level network operation such as addressing and message transmission/reception. It is based on the IEEE 802.15.4 standard and comprises the following two layers:

• MAC (Media Access Control) sub-layer
      • PHY (Physical) layer

5. Global Positioning System (GPS)

    The Global Positioning System (GPS) is a space-based satellite navigation system that provides location and time information in all weather, anywhere on or near the Earth, where there is an unobstructed line of sight to four or more GPS satellites. It is maintained by the United States government and is freely accessible to anyone with a GPS receiver.

    The GPS program provides critical capabilities to military, civil and commercial users around the world. In addition, GPS is the backbone for modernizing the global air traffic system.

    The GPS project was developed in 1973 to overcome the limitations of previous navigation systems, integrating ideas from several predecessors, including a number of classified engineering design studies from the 1960s. GPS was created and realized by the U.S. Department of Defense (DoD) and was originally run with 24 satellites. It became fully operational in 1994.

    Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS system and implement the next generation of GPS III satellites and Next Generation Operational Control System (OCX).

5.1. Basic Concept

A GPS receiver calculates its position by precisely timing the signals sent by GPS satellites high above the Earth. Each satellite continually transmits messages that include:

        • the time the message was transmitted

        • satellite position at time of message transmission

        The receiver uses the messages it receives to determine the transit time of each message and computes the distance to each satellite using the speed of light. These distances along with the satellites' locations are used with the possible aid of trilateration, depending on which algorithm is used, to compute the position of the receiver. This position is then displayed, perhaps with a moving map display or latitude and longitude; elevation information may be included. Many GPS units show derived information such as direction and speed, calculated from position changes.

Three satellites might seem enough to solve for position, since space has three dimensions and a position near the Earth's surface can be assumed. However, even a very small clock error, multiplied by the very large speed of light (the speed at which satellite signals propagate), results in a large positional error. Therefore, receivers use four or more satellites to solve for both the receiver's location and the time. The very accurately computed time is effectively hidden by most GPS applications, which use only the location. A few specialized GPS applications do, however, use the time; these include time transfer, traffic signal timing, and synchronization of cell phone base stations.

        Although four satellites are required for normal operation, fewer apply in special cases. If one variable is already known, a receiver can determine its position using only three satellites. For example, a ship or aircraft may have known elevation. Some GPS receivers may use additional clues or assumptions such as reusing the last known altitude, dead reckoning, inertial navigation, or including information from the vehicle computer, to give a less degraded position when fewer than four satellites are visible.

5.2. Position Calculation Introduction

    Fig 2: Two sphere surfaces intersecting in a circle

Fig 3: Surface of sphere intersecting a circle (not a solid disk) at two points

To provide an introductory description of how a GPS receiver works, error effects are deferred to a later section. Using messages received from a minimum of four visible satellites, a GPS receiver is able to determine the times sent and then the satellite positions corresponding to these times. The x, y, and z components of satellite position and the time sent are designated as [x_i, y_i, z_i, t_i], where the subscript i has the value 1, 2, 3, or 4. Knowing the indicated time t_i^r at which the message was received, the GPS receiver computes the transit time of the message as (t_i^r - t_i). A pseudorange, p_i = (t_i^r - t_i) c, where c is the speed of light, is computed as an approximation of the distance from the satellite to the GPS receiver.

    A satellite's position and pseudorange define a sphere, centered on the satellite, with radius equal to the pseudorange. The position of the receiver is somewhere on the surface of this sphere. Thus with four satellites, the indicated position of the GPS receiver is at or near the intersection of the surfaces of four spheres. In the ideal case of no errors, the GPS receiver would be at a precise intersection of the four surfaces.

If the surfaces of two spheres intersect at more than one point, they intersect in a circle; this can be shown mathematically using trilateration, and is illustrated in Fig 2, Two Sphere Surfaces Intersecting in a Circle. Two points where the surfaces of the spheres intersect are clearly shown in that figure, and the distance between these two points is the diameter of the circle of intersection. The intersection of a third spherical surface with the first two is its intersection with that circle; in most cases of practical interest, this means they intersect at two points. Fig 3, Surface of Sphere Intersecting a Circle (not a solid disk) at Two Points, illustrates this intersection, with the two intersection points marked with dots.

    For automobiles and other near-earth vehicles, the correct position of the GPS receiver is the intersection closest to the Earth's surface. For space vehicles, the intersection farthest from Earth may be the correct one.

    The correct position for the GPS receiver is also on the intersection with the surface of the sphere corresponding to the fourth satellite.
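
As a rough illustration of this computation, the sketch below (Python, with fabricated satellite coordinates and pseudoranges, ignoring all error sources) solves for the receiver position and clock bias from four or more pseudoranges by Gauss-Newton iteration. It is a toy example under these assumptions, not the algorithm of any particular receiver.

```python
# Minimal pseudorange-positioning sketch (illustrative only).
# The satellite positions and clock bias below are made-up numbers.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iterations=10):
    """Solve for receiver position (x, y, z) and clock bias b (metres)
    from four or more satellite positions and pseudoranges (Gauss-Newton)."""
    x = np.zeros(4)  # initial guess: Earth's centre, zero clock bias
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        residuals = pseudoranges - (ranges + x[3])
        # Jacobian of the modelled pseudorange w.r.t. (x, y, z, b)
        J = np.hstack([-(sat_pos - x[:3]) / ranges[:, None],
                       np.ones((len(ranges), 1))])
        dx, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x += dx
    return x[:3], x[3] / C  # position in metres, clock bias in seconds

# Fabricated example: four satellites, a receiver near the surface, 2 ms clock bias.
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
true_pos = np.array([-40e3, 10e3, 6370e3])
rho = np.linalg.norm(sats - true_pos, axis=1) + 2.0e-3 * C
pos, bias_s = solve_position(sats, rho)
```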

  6. Bluetooth

Bluetooth is a proprietary open wireless technology standard for exchanging data over short distances (using short-wavelength radio transmissions in the ISM band from 2400 to 2480 MHz) between fixed and mobile devices, creating personal area networks (PANs) with high levels of security. Created by the telecoms vendor Ericsson in 1994, it was originally conceived as a wireless alternative to RS-232 data cables. It can connect several devices, overcoming problems of synchronization.

Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 16,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The SIG oversees the development of the specification, manages the qualification program, and protects the trademarks. To be marketed as a Bluetooth device, a product must be qualified to standards defined by the SIG. A network of patents is required to implement the technology, and these are licensed only to qualifying devices; thus the protocol, whilst open, may be regarded as proprietary.

    6.1. Implementation

Bluetooth uses a radio technology called frequency-hopping spread spectrum, which chops up the data being sent and transmits chunks of it on up to 79 bands (1 MHz each, centered from 2402 to 2480 MHz) in the range 2,400 to 2,483.5 MHz (allowing for guard bands). This range is in the globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band. Bluetooth usually performs 800 hops per second, with adaptive frequency hopping (AFH) enabled.

7. Image processing methodology for blind navigation

    The image processing involves the following stages: preprocessing, edge detection, edge linking, object enhancement and noise elimination, disparity calculation, and object preference. The image captured from the vision sensor is used as the input image.

    The fundamental purpose of blind navigation is to assist blind people to navigate freely among obstacles by providing them with the position, size and distance of the obstacles. The obstacles in the captured image should be given more importance than the background. Thus, it is essential to develop a navigation kit such that the blind user understands the environment in front of him with minimum effort.

The fundamental task of the image processing in this ETA is to identify and highlight objects in the scene in front of the blind user and deliver this information in real time through auditory cues. The term real time refers to sampling the scene in front of the user at a rate of one (or two) image frames per second. In this ETA, the duration of the sound produced from each sampled image is one second, so the computational time for image processing has to be less than one second in order that the processing and sonification of a new image can be carried out while the previous image's sound is being transmitted. Fig 4 illustrates the block diagram of the proposed image processing stages for this ETA.

    Fig 4: Proposed Image Processing Methodology for Blind Assistant System

7.1. Preprocessing

        The input image is resized to 32 x 32 pixels and a contrast-stretching technique is applied to each colour component of the resized image. The image is resized to enable faster processing.
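
As an illustration of this step, the following sketch (assuming OpenCV and NumPy are available; the paper does not specify an implementation) resizes a captured frame to 32 x 32 and applies a simple min-max contrast stretch to each colour channel.

```python
# Preprocessing sketch: resize to 32 x 32 and contrast-stretch each channel.
import cv2
import numpy as np

def preprocess(frame_bgr):
    small = cv2.resize(frame_bgr, (32, 32), interpolation=cv2.INTER_AREA)
    out = np.empty_like(small)
    for c in range(3):
        ch = small[:, :, c].astype(np.float32)
        lo, hi = ch.min(), ch.max()
        if hi > lo:                       # avoid division by zero on flat channels
            ch = (ch - lo) * 255.0 / (hi - lo)
        out[:, :, c] = ch.astype(np.uint8)
    return out
```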

7.2. Edge Detection

    Edges are extracted from the enhanced image. The goal of edge detection is to provide structural information about the object boundary; in this work, the region inside a closed boundary is considered an object, so extracting edges in the image yields the object features. The preprocessed colour image is separated into its three components (R, G and B) and a Canny edge detector is applied to each colour component. The resulting edge maps are then combined using a logical OR operator.
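
A minimal sketch of this step, again assuming OpenCV; the Canny thresholds of 50/150 are illustrative values, not taken from the paper.

```python
# Edge-detection sketch: Canny per colour channel, combined with a logical OR.
import cv2
import numpy as np

def detect_edges(image_32x32):
    channels = cv2.split(image_32x32)            # B, G, R for an OpenCV image
    edge_maps = [cv2.Canny(ch, 50, 150) for ch in channels]
    combined = np.zeros_like(edge_maps[0])
    for e in edge_maps:
        combined = cv2.bitwise_or(combined, e)
    return combined
```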

7.3. Edge Linking

    This is a method of assembling edges in the image to form a closed boundary. Discontinuities in edges arise for various reasons, such as insufficient lighting, geometric distortion and the image-resizing effect. To connect the edge fragments in the image, edges are scanned in the vertical and horizontal directions in a small neighbourhood (3 x 5 pixels for the horizontal direction and 5 x 3 pixels for the vertical direction). If an edge is present at the specified location and no edges exist between the located edge pixels, the intervening non-edge pixel is identified as a candidate edge. The two edge pixels are then linked by the candidate edge pixel, forming an edge link. It is also noted that the edge detector fails to detect edges at the image border; to connect edges at the image border, the edges are labelled so that only edges with the same label are connected. By undertaking this edge-linking procedure, the closed boundaries of objects can be extracted.
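
The following simplified sketch shows the gap-bridging idea behind the edge-linking scan. It uses a purely one-dimensional look-ahead along rows and then columns instead of the paper's 3 x 5 / 5 x 3 neighbourhoods, and the maximum gap of three pixels is an assumption.

```python
# Simplified edge-linking sketch: bridge small gaps along rows, then columns.
import numpy as np

def link_edges(edge_map, max_gap=3):
    linked = edge_map.copy()
    for view in (linked, linked.T):              # horizontal pass, then vertical pass
        rows, cols = view.shape
        for y in range(rows):
            for x in range(cols):
                if not view[y, x]:
                    continue
                # Look ahead for another edge pixel with only non-edge pixels between.
                for g in range(2, max_gap + 2):
                    if x + g < cols and view[y, x + g] and not view[y, x + 1:x + g].any():
                        view[y, x + 1:x + g] = 255   # candidate edge pixels fill the gap
                        break
    return linked
```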

7.4. Object Enhancement and Elimination of Noise

    The object region pixels have to be enhanced to high intensity and the background pixels suppressed to low intensity. This stage is necessary prior to the sonification procedure: during sonification, pixels of high intensity produce sound of higher amplitude and pixels of low intensity produce sound of lower amplitude. In the stereo sound patterns, a set of high-intensity pixels against a dark background is easier to identify than low-intensity pixels against a bright background, and if the image were transferred to sound without any enhancement, the sound would be very hard to understand. With this consideration, the object region is enhanced to high intensity using a flood-fill operation. Through experimentation, it is observed that not all edges in the image are object boundaries; these extra edges are considered noise, which has to be eliminated so that the exact properties of objects can be measured. The morphological operations erosion and dilation are employed to eliminate noise in the image.
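
One way to realize this stage, assuming OpenCV and assuming the top-left image corner belongs to the background, is to flood-fill from the corner to find the region outside the closed boundaries, brighten everything else, and then clean small fragments with a morphological opening (erosion followed by dilation).

```python
# Object-enhancement sketch: fill closed boundaries, then remove edge noise.
import cv2
import numpy as np

def enhance_objects(linked_edges):
    h, w = linked_edges.shape
    mask = np.zeros((h + 2, w + 2), np.uint8)
    filled = linked_edges.copy()
    cv2.floodFill(filled, mask, (0, 0), 255)          # background reachable from corner
    # Object interiors are the pixels NOT reached by the fill; add the boundaries back.
    objects = cv2.bitwise_or(linked_edges, cv2.bitwise_not(filled))
    # Erosion followed by dilation (opening) removes isolated edge fragments (noise).
    kernel = np.ones((2, 2), np.uint8)
    return cv2.morphologyEx(objects, cv2.MORPH_OPEN, kernel)
```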

7.5. Disparity Calculation

    Disparity forms the basic criterion for stereo vision. Once the disparity is determined, the depth can be calculated using equation 1. In order to calculate the disparity, correspondence or matching between the two images has to be established; area-based stereo matching is performed over the stereo image pair.
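
Since equation 1 is not reproduced here, the sketch below assumes the usual pinhole stereo relation Z = f * B / d (focal length times baseline over disparity) and computes a dense disparity map with a brute-force sum-of-absolute-differences window search; the window size and disparity range are illustrative choices.

```python
# Area-based stereo matching sketch (SAD over a small window) and depth recovery.
import numpy as np

def disparity_sad(left, right, window=3, max_disp=8):
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    # Assumed relation Z = f * B / d; zero disparity maps to "infinitely far".
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, np.inf)
```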

7.6. Object Preference

    The main objective of object preference assignment in this work is to rank objects in the image in accordance with human visual preference. In the human vision system, visual attention is given to the object of interest and other areas receive less consideration; if equal preference were assigned to all objects, the blind user would be confused when interpreting the object features. In this work, the central 8 x 8 area of image pixels is called the iris area. In the human vision system, the object of interest is usually at the centre of sight (denoted in this work as the iris area), and if an object is not within the centre of sight it can be brought there by turning the head. Preference is therefore assigned according to occurrence in the iris area and object size, since the object in the centre is the most important for collision-free navigation. In an industrial vision system, the object of interest is found from known features of the object in the scene, but in NAVI the object is undefined, uncertain and time varying, due to the constant shifting of the headgear-mounted camera's orientation by the blind user. To resolve this uncertainty, fuzzy logic is applied.
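
The paper's preference assignment is fuzzy; the crisp sketch below only illustrates the two inputs it mentions (occurrence in the 8 x 8 iris area and object size) over a labelled object image, with arbitrary weights that are assumptions rather than values from the paper.

```python
# Simplified (crisp) object-preference sketch over a labelled 32 x 32 object image.
import numpy as np

def object_preference(labels, num_objects, weight_iris=0.7, weight_size=0.3):
    h, w = labels.shape
    iris = np.zeros((h, w), dtype=bool)
    iris[h // 2 - 4:h // 2 + 4, w // 2 - 4:w // 2 + 4] = True   # central 8 x 8 region
    scores = {}
    for obj in range(1, num_objects + 1):
        region = labels == obj
        in_iris = np.logical_and(region, iris).sum() / 64.0      # fraction of iris covered
        size = region.sum() / float(h * w)                       # relative object size
        scores[obj] = weight_iris * in_iris + weight_size * size
    return scores            # higher score = higher preference in the sound output
```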

Fig 5: a. Original captured image. b. Preprocessed grey-scale image. c. Edge-detected image. d. Edge-linked image. e. Object enhancement and noise elimination. f. Disparity calculation. g. Object preference

  8. GuideCane

    The GuideCane is considerably heavier than the white cane, but it rolls on passive wheels that support its weight during regular operation. Both wheels are equipped with encoders to determine the relative motion. A servomotor, controlled by the built-in computer, can steer the wheels left and right relative to the cane.

    To detect obstacles, the GuideCane is equipped with ten ultrasonic sensors.

During operation, the user pushes the GuideCane forward. While traveling, the ultrasonic and proximity sensors detect obstacles in a 120° wide sector ahead of the user. Based on the sonar and encoder data, the embedded computer instantaneously determines an appropriate direction of travel. If an obstacle blocks the desired travel direction, the obstacle avoidance algorithm prescribes an alternative direction that clears the obstacle and then resumes the original direction.

    Once the wheels begin to steer sideways to avoid the obstacle, the user feels the resulting horizontal rotation of the cane. Once the obstacle is cleared, the wheels steer back to the original direction of travel. The new line of travel will be offset from the original line of travel. Depending on the circumstances, the user may wish to continue walking along this new line of travel, or the system can be programmed to return to the original line of travel. The user can prescribe a desired direction of motion with the thumb-operated keypad.
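
The GuideCane's obstacle avoidance is based on the VFH family of algorithms [4][5]; the toy sketch below only illustrates the basic idea of steering toward the clear sonar sector closest to the user's desired direction.

```python
# Toy direction-selection sketch over sonar readings in a 120-degree sector.
def choose_direction(sonar_cm, desired_deg, clearance_cm=100.0):
    """sonar_cm: dict mapping sector angle in degrees (-60..60) to range in cm."""
    clear = [a for a, r in sonar_cm.items() if r >= clearance_cm]
    if not clear:
        return None                       # nothing is clear: stop / brake
    # Pick the clear sector closest to the direction the user wants to travel.
    return min(clear, key=lambda a: abs(a - desired_deg))

# Example: obstacle straight ahead, user wants 0 degrees -> steer to a nearby clear sector.
readings = {-60: 300, -30: 250, 0: 60, 30: 220, 60: 300}
new_heading = choose_direction(readings, desired_deg=0)   # -> -30 (or 30)
```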

  9. System Architecture

Fig 6: Block Diagram of Electronic Perceptible Assistance System (EPAS)

The system design consists of two main subsystems: the Electronic Perceptible Assistance System (EPAS) and the server system, shown in Fig 6 and Fig 7 respectively.

EPAS consists of several blocks, each having a different functionality, as follows:

1. The blind person wears the vision sensor on the headgear, and it captures the images in front of him/her. These images are sent to the micro processing unit (MPU) through the wireless ZigBee transceiver.

2. Before being sent to the MPU, the images have to be stored in the memory unit. The vision sensor continuously captures 10 images per second; these images are first stored in the memory unit and then sent to the MPU for image processing.

  3. The MPU processes the images with the image processing concepts such as preprocessing, Edge detection, Edge linking, Object enhancement, noise elimination, disparity calculation and object preference.

4. With these image processing steps, the background objects of the image are suppressed and the foreground objects are retained. The foreground and background objects are assigned different frequencies, low and high respectively, and the foreground objects with low frequency indicate objects that are close to the blind person.

5. For this reason, the image of close objects is converted into sound signals, which are stored on the server (a simple sonification sketch is given after this list).

6. At ground level, the blind person holds the GuideCane, which incorporates the keypad, proximity sensors, ultrasonic sensors and servomotor. These components are connected to the MPU through the ZigBee transceiver.

  7. This Guidecane can detect the obstacles in front of the Blind person at the ground level and measures the distance between obstacles and user using Proximity sensors.

8. When an obstacle is detected, the GuideCane changes the user's path by instructing the inbuilt servomotor to steer left or right, using the obstacle avoidance [4][5] and optimal path planning [6] algorithms.

9. When the user wants to navigate using the GuideCane, the keypad acts as the controlling element. It has four buttons (left, right, top and bottom): the left button steers left, the right button steers right, the top button selects straight ahead, and the bottom button acts as a brake.

10. The information received from detected obstacles is processed by the MPU, converted into sound signals, and then stored on the server through the ZigBee transceiver.

11. The image sound signals and obstacle sound signals stored on the server are converted to audio signals in the audio converter and sent to the Bluetooth headphones.

  12. The Global Positioning System (GPS) gives the information about the location and positions of the total system. This information is stored in the server through the MPU.

  13. The user can enquire about his current position through the mike provided in the headphone.
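
To make blocks 4, 5 and 11 concrete, the sketch below shows one possible image-to-stereo-sound mapping in the spirit of NAVI: the left half of the processed image drives the left channel and the right half the right channel, with pixel intensity mapped to amplitude and one tone per row. The sample rate, tone range and scan order are assumptions, not specifications from the paper.

```python
# Illustrative sonification sketch: 32 x 32 processed image -> one second of stereo audio.
import numpy as np

SAMPLE_RATE = 8000
DURATION_S = 1.0          # matches the one second of sound per image stated earlier

def sonify(image_32x32):
    t = np.linspace(0, DURATION_S, int(SAMPLE_RATE * DURATION_S), endpoint=False)
    freqs = np.linspace(300, 3000, 32)         # one tone per image row (assumed range)
    left = np.zeros_like(t)
    right = np.zeros_like(t)
    for row in range(32):
        tone = np.sin(2 * np.pi * freqs[row] * t)
        left += image_32x32[row, :16].mean() / 255.0 * tone    # left half -> left channel
        right += image_32x32[row, 16:].mean() / 255.0 * tone   # right half -> right channel
    stereo = np.stack([left, right], axis=1)
    return stereo / max(np.abs(stereo).max(), 1e-9)            # normalise to [-1, 1]
```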

Fig 7: Server System Block Diagram

The server system consists of several blocks:

1. The ZigBee transceiver receives the processed vision-sensor signals from the MPU; the image-to-sound converted signals are sent to the server for storage.

  2. The ZigBee transceiver receives the proximity and ultrasonic sensor signals used for detecting obstacles; these are processed in the MPU and also stored on the server using the obstacle detection algorithm [4][5].

  3. The ZigBee transceiver receives other signals, such as the keypad and servomotor signals, which are processed in the MPU and sent for storage on the server; these give the user's orientation.

  4. The GPS receiver obtains the information about the user's location and position, and this environment information is stored in the server database.

  5. All of the above information is monitored on a PC with GUI software, and the server also provides optimal path planning [6] to the EPAS through the ZigBee transceiver.

  6. This information is given back to the MPU and then sent to the Bluetooth microphone and headphones.

10. Hardware

    The set of major components that have been utilized to develop such system are:

10.1. Vision Sensor: The vision sensor used in this system is a digital camera.

      10.2. Micro Processing Unit: This is the main unit for the entire blind-assistance Electronic Travel Aid. For this application we use a DSP processor, because it processes both image and audio signals.

10.3. Audio Converter: In signal processing, an audio converter or digital audio converter is a type of electronic hardware that converts an analog audio signal to a digital audio format on the input side (analog-to-digital converter, ADC), or a digital format to an analog signal on the output side (digital-to-analog converter, DAC).

10.4. ZigBee Transceiver: The CC2520 is TI's second-generation ZigBee/IEEE 802.15.4 RF transceiver for the 2.4 GHz unlicensed ISM band. This chip enables industrial-grade applications by offering state-of-the-art selectivity/co-existence, an excellent link budget, operation up to 125°C and low-voltage operation. In addition, the CC2520 provides extensive hardware support for frame handling, data buffering, burst transmissions, data encryption, data authentication, clear channel assessment, link quality indication and frame timing information. These features reduce the load on the host controller.

10.5. Bluetooth Transmitter & Receiver: This is used to connect the headphones and microphone.

10.6. Proximity Sensors: A proximity sensor is a sensor able to detect the presence of nearby objects without any physical contact. A proximity sensor often emits an electromagnetic field or a beam of electromagnetic radiation (infrared, for instance), and looks for changes in the field or return signal. The object being sensed is often referred to as the proximity sensor's target. Different proximity sensor targets demand different sensors.

10.7. Ultrasonic Sensors: Ultrasonic sensors (also known as transceivers when they both send and receive) work on a principle similar to radar or sonar, which evaluate attributes of a target by interpreting the echoes from radio or sound waves respectively. Ultrasonic sensors generate high-frequency sound waves and evaluate the echo received back by the sensor, calculating the time interval between sending the signal and receiving the echo to determine the distance to an object (a small distance-calculation sketch follows this list).

10.8. Servomotor: A servomotor is a motor which forms part of a servomechanism. The servomotor is paired with some type of encoder to provide position/speed feedback. This feedback loop is used to provide precise control of the mechanical degree of freedom driven by the motor. A servomechanism may or may not use a servomotor. For example, a household furnace controlled by a thermostat is a servomechanism, because of the feedback and resulting error signal, yet there is no motor being controlled directly by the servomechanism. Servomotors have a range of 0°-180°.

      Servomotors are not the only means of providing precise control of motor output. A common alternative is a stepper motor. In a stepper motor, the input command specifies the desired angle of rotation, and the controller provides the corresponding sequence of commutations without the use of any feedback about the position of the system being driven.
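
For the ultrasonic sensors mentioned above, the distance computation reduces to half the round-trip echo time multiplied by the speed of sound; a minimal sketch follows, assuming roughly 343 m/s for the speed of sound in air.

```python
# Ultrasonic ranging sketch: one-way distance from round-trip echo time.
SPEED_OF_SOUND_M_S = 343.0   # approximate value at 20 degrees C

def ultrasonic_distance_m(echo_time_s: float) -> float:
    # The pulse travels to the object and back, so divide the path by two.
    return echo_time_s * SPEED_OF_SOUND_M_S / 2.0

# Example: a 5.8 ms round trip corresponds to roughly 1 m.
print(ultrasonic_distance_m(0.0058))   # ~0.99 m
```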

11. Software

An interface is designed to enable the system administrator to debug the system or monitor the movement of the blind person through the environment. In addition, the obtained information can be sent to an e-health service centre in order to assess the condition of the blind person. The user icon in the interface continuously follows the location of the blind node, which makes the guidance procedure more accurate. Finally, the software also handles the path-planning algorithm: a reactive path-planning method is used to connect the user and the desired target. This algorithm simply connects two points, the user and the desired target. For enhancement, however, the system is able to adopt a more complex path-planning algorithm, e.g. the distance transform method or a potential field algorithm.
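
As a sketch of the "connect two points" idea behind the reactive path planning, the code below computes the distance and initial bearing from the user's GPS fix to the desired target using the standard haversine and bearing formulas; the function name and interface are assumptions, not part of the described software.

```python
# Reactive path-planning sketch: straight-line distance and bearing to the target.
import math

EARTH_RADIUS_M = 6_371_000.0

def distance_and_bearing(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    # Haversine great-circle distance
    a = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    # Initial bearing, degrees clockwise from north
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing
```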

Fig 8. Application Software GUI

12. Conclusion

This paper has presented the design of a system that assists a blind person to navigate both inside an enclosed environment, such as the home, and in outdoor environments, with something approaching visual perception. The system can be considered a semi-autonomous device: it provides full autonomy for global navigation (path planning and localization [8]) but relies on the skills of the user for local navigation (obstacle avoidance [6][7]). This device offers pioneering solutions to replace conventional methods of guiding visually impaired persons. In addition, it can easily be applied in places such as malls, railway stations, bus stands, universities and airports. This system will allow the visually impaired to wander freely and autonomously.

References:

[1]. D. Dakopoulos and N. G. Bourbakis, "Wearable obstacle avoidance electronic travel aids for blind: a survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 40, no. 1, Jan. 2010.

[2]. G. Sainarayanan, R. Nagarajan, and S. Yaacob, "Fuzzy image processing scheme for autonomous navigation of human blind," Applied Soft Computing, vol. 7, no. 1, pp. 257-264, Jan. 2007.

[3]. I. Ulrich and J. Borenstein, "The GuideCane: applying mobile robot technologies to assist the visually impaired," IEEE Trans. Syst., Man, Cybern. A: Syst. Humans, vol. 31, no. 2, pp. 131-136, Mar. 2001.

[4]. I. Ulrich and J. Borenstein, "VFH+: Reliable obstacle avoidance for fast mobile robots," in Proc. IEEE Int. Conf. Robotics and Automation, Leuven, Belgium, May 1998, pp. 1572-1577.

[5]. I. Ulrich and J. Borenstein, "VFH*: Local obstacle avoidance with look-ahead verification," in Proc. IEEE Int. Conf. Robotics and Automation, San Francisco, CA, Apr. 2000, pp. 2505-2511.

[6]. M. Lepetic, G. Klancar, I. Skrjanc, D. Matko, and B. Potocnik, "Time optimal path planning considering acceleration limits," Robotics and Autonomous Systems, vol. 45, pp. 199-210, 2003.

BIOGRAPHIES


Akella S. Narasimha Raju is an Assistant Professor of Electronics & Communication Engineering at V.S.M. College of Engineering, Ramachandrapuram, E.G. Dt. His areas of interest include Communication Systems, Wireless Communication and Embedded Systems.

S.M.K. Chaitanya is an Assistant Professor of Electronics & Communication Engineering at Gayathri Vidya Parishad Engineering College, Visakhapatnam. His areas of interest include Embedded Systems and Wireless Communication.

V. Ravindra is an Assistant Professor of Electronics & Communication Engineering at V.S.M. College of Engineering, Ramachandrapuram, E.G. Dt. His areas of interest include Communication Systems and Wireless Communication.
