- Open Access
- Authors: Prof. Anupama V P, Harsha S, Swaroop BN, Nishan N, Sriram BS
- Paper ID: IJERTV13IS050259
- Volume & Issue: Volume 13, Issue 05 (May 2024)
- Published (First Online): 30-05-2024
- ISSN (Online): 2278-0181
- Publisher Name: IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Redefining Mobility: Deep Learning-Assisted Obstacle Detection
Prof. Anupama V P
Information Science and Engineering, Jyothy Institute of Technology, VTU, Bengaluru, India

Harsha S
Jyothy Institute of Technology, VTU, Bengaluru, India

Nishan N
Jyothy Institute of Technology, VTU, Bengaluru, India

Swaroop BN
Jyothy Institute of Technology, VTU, Bengaluru, India

Sriram BS
Jyothy Institute of Technology, VTU, Bengaluru, India
Abstract—Obstacle detection is crucial for the safety and mobility of individuals with visual impairments. Traditional methods of obstacle detection typically involve the use of canes, guide dogs, or other forms of assistance. However, recent advances in computer vision have enabled the development of more sophisticated solutions, such as real-time obstacle detection algorithms. In this paper, we explore the challenges and considerations of implementing YOLO for indoor obstacle detection, specifically for blind individuals. We discuss how the algorithm performs real-time object detection, analyze the potential applications and limitations of using YOLO for indoor obstacle detection, and present our findings and recommendations for this particular application. Overall, this research aims to develop a more efficient and effective indoor obstacle detection solution for individuals with visual impairments.
INTRODUCTION
In the realm of assistive technologies, enhancing the mobility and independence of people with visual impairments stands as a paramount objective. The ability to move through indoor environments seamlessly and safely is not merely a convenience but a fundamental aspect of daily life, profoundly influencing one's sense of autonomy and well-being. Traditional methods of obstacle detection, reliant on tactile cues or assistance from guide dogs, while invaluable, often fall short in providing real-time feedback and comprehensive spatial awareness. In this context, the integration of cutting-edge technologies, particularly those rooted in computer vision and deep learning, holds immense promise for redefining mobility for people with visual impairments.
Our investigation delves into the underlying principles of the YOLO algorithm, elucidating its mechanism for real-time object detection and its potential applicability in the realm of indoor navigation. Moreover, we undertake a comprehensive analysis of the challenges and considerations intrinsic to the implementation of YOLO for this specific application domain.
Through empirical evaluation and iterative refinement, we endeavor to optimize the effectiveness and practicality of the proposed solution, with a keen focus on usability, accuracy, and real-world applicability.
RELATED WORK
Indoor-Walker: An Indoor Walking Assistance Device for Blind People to Avoid Obstacles [1]
Indoor-Walker leverages LiDAR sensor technology to aid blind individuals in navigating indoor environments. It first constructs a 2D occupancy grid map of the surroundings from LiDAR data [1]. The system then assigns a cost value to each cell in the grid map and plans obstacle-avoiding paths based on this information [1]. Simultaneously, it preprocesses the image of the grid map and detects upcoming intersections with a YOLOv3 detector, enabling users to recognize paths and avoid obstacles effectively [1]. Finally, the system provides spatialized audio feedback to guide users along the generated path while alerting them to obstacles and intersections through vibration and text-to-speech (TTS) feedback [1].
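The cited paper does not reproduce the grid-construction code; as a rough illustration, the following Python sketch shows how a 2D occupancy grid could be built from LiDAR points and how per-cell costs might be inflated around obstacles so that planned paths keep a safe margin. The function names, grid dimensions, and cost values here are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_occupancy_grid(scan_points, grid_size=100, cell_m=0.05):
    """Rasterize 2D LiDAR points (x, y in metres) into an occupancy grid."""
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    for x, y in scan_points:
        i, j = int(x / cell_m), int(y / cell_m)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            grid[i, j] = 1  # cell contains an obstacle
    return grid

def cost_map(grid, inflate=3):
    """Assign high cost to cells near obstacles (hypothetical cost scheme)."""
    cost = np.where(grid == 1, np.inf, 1.0)
    for i, j in np.argwhere(grid == 1):
        for di in range(-inflate, inflate + 1):
            for dj in range(-inflate, inflate + 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    # Cost decays with Manhattan distance from the obstacle.
                    cost[ni, nj] = max(cost[ni, nj], 10.0 / (1 + abs(di) + abs(dj)))
    return cost
```

A planner such as A* can then search this cost map for a minimum-cost route, which naturally avoids cells close to walls and obstacles.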
Obstacle Detection Device for Visually Impaired People
The obstacle detection and warning system for visually impaired individuals operates by capturing scene information with a mobile Kinect device, including color images, depth images, and accelerometer data. This data is analyzed to detect both static obstacles, such as trash bins or plant pots, and moving obstacles, such as people. The system employs a tactile-visual substitution system with an electrode matrix to convey warnings to visually impaired users, assisting them in navigating around obstacles in indoor environments. By integrating Kinect data acquisition with obstacle detection and warning functionalities, the system enhances the mobility and safety of visually impaired individuals in unfamiliar settings.
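As a simple illustration of how depth data of the kind the Kinect provides can flag nearby obstacles, consider the minimal sketch below; the distance threshold, speckle filter, and helper name are assumptions made for illustration, not the cited system's actual method.

```python
import numpy as np

def detect_near_obstacles(depth_m, warn_dist=1.5, min_pixels=500):
    """Return a mask of pixels closer than warn_dist metres.

    depth_m: HxW depth image in metres (0 marks invalid readings,
    as in typical Kinect output). Tiny regions are ignored as noise.
    """
    near = (depth_m > 0) & (depth_m < warn_dist)
    if near.sum() < min_pixels:  # crude filter against sensor speckle
        return np.zeros_like(near)
    return near
```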
SYSTEM DESIGN
Our primary objective is to aid visually impaired individuals in navigating indoor corridors safely, enabling them to reach their destination without the need to modify existing infrastructure or rely on static route maps. Imagine a scenario where a blind person is traversing indoor spaces like office buildings, hospitals, or hotels. They possess knowledge of the number of intersections they need to navigate to reach their destination. However, obstacles obstruct their path within the corridor. In response to such challenges, our system is engineered to provide assistance using only a smartphone equipped with a LiDAR sensor.
A. Avoiding Obstacles
The system we've designed aims to assist visually impaired individuals as they navigate indoor environments. By generating a path that maintains a safe distance from walls and obstacles, it helps prevent collisions and ensures smooth navigation. Instead of relying on walking along walls, which can lead to accidents, the system guides users along a clear path, even when obstacles are present. When obstacles are detected ahead, the system automatically adjusts the path to circumvent them, ensuring users can safely navigate around any obstructions.
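One simple way to realize the "adjusts the path to circumvent obstacles" behaviour described above is to steer toward whichever side of the forward view has more free space. The sketch below is a hypothetical illustration of such a rule under that assumption, not the system's actual planner.

```python
import numpy as np

def steer_around(obstacle_mask):
    """Return a coarse guidance cue given a boolean HxW obstacle mask."""
    cols = obstacle_mask.any(axis=0)       # columns blocked by an obstacle
    if not cols.any():
        return "straight"
    mid = cols.size // 2
    left_free = int((~cols[:mid]).sum())   # free columns on the left half
    right_free = int((~cols[mid:]).sum())  # free columns on the right half
    return "left" if left_free >= right_free else "right"
```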
Fig. 1: Object Detection Accuracy

B. Intersection Detection
To effectively navigate to their destination, visually impaired individuals rely on perceiving the positions and shapes of intersections along their route. However, when they are unable to walk alongside a wall because obstacles obstruct the way, they might inadvertently pass an intersection without realizing it. Traditional navigation aids like white canes and guide dogs do not help in recognizing intersection shapes. To complement these aids, our system notifies users of an upcoming intersection in advance, alerting them to its presence so they do not overlook it. Upon reaching the intersection, the system further provides information about its shape, helping users navigate more confidently and accurately.
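A minimal sketch of how such advance alerts and shape announcements might be voiced, assuming the pyttsx3 offline text-to-speech library; the function and message wording are illustrative, not the deployed system's interface.

```python
import pyttsx3  # offline text-to-speech engine

engine = pyttsx3.init()

def announce_intersection(distance_m, shape=None):
    """Warn the user ahead of an intersection, then describe its shape."""
    if shape is None:
        engine.say(f"Intersection ahead in {distance_m:.0f} meters")
    else:
        engine.say(f"You have reached a {shape} intersection")  # e.g. 'T-shaped'
    engine.runAndWait()
```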
IMPLEMENTATION
The YOLO (You Only Look Once) algorithm is a state-of-the-art object detection algorithm designed for real-time use [10]. YOLO divides the image into a grid and, for each grid cell, simultaneously predicts bounding boxes and class probabilities [10]. Because it uses a one-stage approach to object detection, it is more efficient than other algorithms in terms of speed and accuracy [10]. The algorithm has evolved over the years through newer versions, including YOLOv5, YOLOv6, and YOLOv8, each with improved capabilities and features [10][11]. We use YOLOv5 as the base model for performing object detection and spatial estimation in parallel. The model is trained on the COCO dataset and has been evaluated for fire detection in Korea, showing high performance and real-time capability [10][11]. It can identify objects and provide voice output, outperforming other models and showing potential for assisting visually impaired individuals in an IoT environment [11]. However, YOLOv6 may have difficulty identifying objects against textured backgrounds and faces potential challenges in compressing network width [11].
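For reference, a pretrained YOLOv5 model can be run in a few lines via PyTorch Hub. This is a minimal sketch of the standard Ultralytics interface; 'corridor.jpg' is a hypothetical input image rather than data from our study.

```python
import torch

# Load a pretrained YOLOv5s model (trained on COCO) from the Ultralytics hub.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

results = model('corridor.jpg')   # accepts a path, URL, PIL image, or array
for *box, conf, cls in results.xyxy[0].tolist():
    # Each detection row: [x1, y1, x2, y2, confidence, class index]
    print(f"{model.names[int(cls)]}: {conf:.2f} at {box}")
```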
What are the specific challenges and considerations for implementing YOLO for indoor obstacle detection for blind individuals?
Implementing YOLO for indoor obstacle detection for blind individuals comes with several challenges and considerations. A major challenge is developing a lightweight implementation of YOLO suitable for obstacle detection [12]. Another is developing the algorithm with deep learning techniques, specifically YOLOv5, so that it can detect and recognize obstacles for autonomous vehicles [10]. To ensure that the system can also run on embedded devices, the suggested obstacle detection technique is implemented using YOLOv5 [11]. Additionally, researchers have combined YOLO with other technologies, such as light field cameras, to create a simpler setup than RGB-D sensors [12]. One study presented a novel obstacle detection algorithm that combined YOLO with a light field camera [10]. Another study proposed a simultaneous object detection and distance estimation algorithm based on YOLOv5 for obstacle detection in indoor autonomous vehicles; it accurately calculates obstacle size and position using object information and depth maps, demonstrating higher detection accuracy indoors [10][11]. Integrating DenseNet into the YOLOv5 backbone can affect feature reuse and data transfer, adding further considerations when implementing YOLO for indoor obstacle detection for blind individuals [11].
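To make the detection-plus-distance idea concrete, the following sketch combines a YOLO bounding box with an aligned depth map to estimate how far away a detected obstacle is. The median-depth heuristic and the function name are our own illustrative assumptions, not the cited algorithm.

```python
import numpy as np

def estimate_obstacle_distance(box, depth_m):
    """Estimate distance (metres) to a detected obstacle.

    box: (x1, y1, x2, y2) bounding box in pixel coordinates.
    depth_m: HxW depth map in metres, aligned with the RGB frame.
    The median depth inside the box resists noisy or invalid pixels.
    """
    x1, y1, x2, y2 = (int(v) for v in box)
    patch = depth_m[y1:y2, x1:x2]
    valid = patch[patch > 0]          # drop invalid (zero) readings
    return float(np.median(valid)) if valid.size else None
```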
What are the potential applications and limitations of using YOLO for indoor obstacle detection?
The YOLOv8 model has been proposed as a potential solution for obstacle detection in indoor environments and has been compared with other object detection models such as YOLOv7, showing higher detection accuracy indoors [12]. Furthermore, the combination of YOLO with a light field camera can accurately calculate obstacle size and position while providing an easier setup than RGB-D sensors [12]. Overall, YOLOv8-based models have shown better accuracy in indoor obstacle detection than other models [12]. However, YOLOv7 has the highest number of trainable parameters among the compared models, which leads to lower generalization capacity for indoor obstacle detection [12].

The YOLO algorithm, with its real-time object detection capabilities, has the potential to change the way blind individuals navigate indoor environments. Its ability to simultaneously predict bounding boxes and class probabilities for each grid cell makes it more efficient than other algorithms in terms of speed and accuracy. However, the limitations of using YOLO for indoor obstacle detection are not fully explored in the literature, and its use of anchor boxes may affect its accuracy in detecting certain types of obstacles. One study attempted to address these limitations by combining YOLO with a light field camera, resulting in a novel obstacle detection algorithm.
User Study
We conducted a user study at our university building to assess the effectiveness of our Redefining Mobility Device. Blindfolded participants were enlisted to complete various tasks while utilizing our system alongside a cane. We compared the outcomes to those when participants solely relied on a white cane without the system. The term "system-aided" refers to the condition where participants used both the system and a white cane, while "cane-only" indicates participants solely using a white cane without the system. Approval for this user study was obtained from the university's institutional review board (IRB).
Participants
We enlisted 14 blindfolded participants who regularly travel independently. These individuals predominantly rely on white canes for navigation and had been using smartphones in their daily lives for over two years on average.
Tasks and Conditions

Fig. 2: Detection of Objects
Task 1: Identification and Turning at a Single Intersection. Participants were instructed to make a specific turn (left or right) at an intersection and then state the shape of the intersection after completing the turn. We created simulated intersections of varying shapes using room dividers. Prior to each walk, participants were randomly positioned between 6 m and 10 m before the intersection and asked to begin the task from that point. Participants were informed beforehand that they would be required to identify the shape of the intersection after each walk.
Task 2: Obstacle Avoidance. Participants were tasked with traversing a 15-meter straight corridor. Two distinct routes were designed: Route 2-1, featuring two obstacles, and Route 2-2, comprising four obstacles. The obstacles were strategically positioned alternately on opposing sides of the corridor; for instance, a route might begin with an obstacle on the left side, followed by an obstacle on the right side. The obstacles used included boxes, chairs, and rubbish bins. Participants were randomly positioned either 3 meters or 6 meters from the entrance of the route, marking the commencement of the task.

Fig. 3: Sensing of Object and Its Accuracy
Task 3: Navigating Long Corridors with Obstacles. Participants were instructed to traverse a corridor characterized by numerous intersections and obstructions. This task utilized an existing corridor within our university premises. Two distinct routes were devised: Route 3-1, featuring three intersections and three obstacles, with a length of 37.4 meters; and Route 3-2, containing four intersections and four obstacles, spanning 47.4 meters.
Fig. 4: Sensing of Object Using Night Vision
RESULTS
In this section, we present the findings from the experiments. We begin by detailing participants' daily experiences in navigating indoor corridors, as gathered from the pre-interview (Section 6.1), followed by an analysis of the overall performance of the mobility device (Section 6.2). Finally, we outline the qualitative feedback obtained from the post-interview (Section 6.3).
Participants' Daily Experiences in Navigating Indoor Corridors
To navigate around obstacles, all participants agreed on using their white canes to tap the obstacles. Six participants found it challenging to detect obstacles with hollow lower parts, such as tables or desks, because their upper body might still collide with them. Additionally, three participants found it difficult to avoid low-height obstacles like boxes and rubbish bins, as they couldn't rely on their echolocation skills to detect them.
When it comes to locating an intersection, 12 participants mentioned that they walk along the wall and use a white cane. Ten participants said they listen to ambient sounds, and nine participants mentioned that they perceive the flow of air. In familiar places, they also used step counting and intuition. However, seven participants reported instances of walking past an intersection without noticing. Two participants attributed this to distraction, while five said it happened while they were avoiding obstacles. One participant described the relationship between intersections and obstacles as follows: "If obstacles or people are standing before an intersection, and because we have to avoid them, I lose track of my position and therefore may walk past the intersection."
Nine participants mentioned that it would be difficult to walk straight in an indoor corridor. They said their primary strategy is to listen to the echo of sound from the nearby wall.
CONCLUSION
In summary, our investigation into indoor navigation assistance for people who are blind has highlighted the transformative potential of algorithms, particularly the YOLO algorithm. Through this exploration, we have aimed to present a comprehensive framework for real-time obstacle detection and navigation support tailored specifically to the unique requirements of blind individuals.
This research emphasizes the importance of leveraging state-of-the-art technologies to improve mobility and safety, highlighting the crucial role of computer vision, sensor fusion, and intelligent algorithms in achieving this goal. By integrating these components, our objective is to create a system that not only detects obstacles instantly but also offers intuitive navigation assistance, empowering visually impaired individuals to navigate indoor spaces with confidence and independence.
While our study offers promising directions for innovation, it also recognizes the challenges associated with implementing such systems. These challenges include optimizing algorithm performance, ensuring reliability across various environmental conditions, and improving the accessibility of user interfaces. Overcoming these obstacles will necessitate interdisciplinary collaboration and ongoing refinement of our approaches.
Looking forward, our vision extends beyond technological progress; it encompasses a commitment to fostering inclusivity and accessibility in society. By providing advanced assistive technologies to individuals with visual impairments, we aspire to create a world where everyone can navigate their surroundings with dignity, autonomy, and equal opportunities. This journey towards greater inclusivity is not only a scientific endeavor but also a moral imperative, and we are dedicated to advancing it with integrity and innovation.
FUNDING DETAILS
Selected and funded under the 47th series of the Student Project Programme (SPP), 2023-2024, by the Karnataka State Council for Science and Technology. Funding amount: Rs. 5000.
REFERENCES
[1] An effective obstacle detection system using deep learning advantages to aid blind and visually impaired navigation (2024). https://www.sciencedirect.com/science/article/pii/S2090447923002769
[2] Obstacle Detection System for Navigation Assistance of Visually Impaired People Based on Deep Learning Techniques (2023). https://www.mdpi.com/1424-8220/23/11/5262
[3] Smart_Eye: A Navigation and Obstacle Detection for Visually Impaired People through Smart App (2022). https://journal.yrpipku.com/index.php/jaets/article/view/2013
[4] A Wearable Navigation Device for Visually Impaired People Based on the Real-Time Semantic Visual SLAM System (2022). https://doi.org/10.1109/access.2022.3155524
[5] Pedestrian Detection Based on Light-Weighted Separable Convolution for Advanced Driver Assistance Systems (2021). https://doi.org/10.1109/ssd52085.2021.9556641
[6] Drivers Fatigue Detection Using EfficientDet in Advanced Driver Assistance Systems (2021). https://doi.org/10.1109/ssd52085.2021.9556641
[7] A Transfer Learning Approach for Indoor Object Identification (2021). https://doi.org/10.1007/s42979-021-00790-7
[8] Deep Learning Based Application for Indoor Scene Recognition (2020). https://doi.org/10.1007/s11063-020-10231-w
[9] Traffic Signs Detection for Real-World Application of an Advanced Driving Assisting System Using Deep Learning (2020). https://doi.org/10.1109/access.2020.2967219
[10] Medical Images Segmentation for Lung Cancer Diagnosis Based on Deep Learning Architectures (2020). https://doi.org/10.3390/app10103934