- Authors : P. Om Prakash, T. Surya, R. M. Rajkumar, M. Joseph Steffin
- Paper ID : IJERTCONV5IS09040
- Volume & Issue : NCIECC – 2017 (Volume 5 – Issue 09)
- Published (First Online): 24-04-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Navigation of Unmanned Aerial Vehicles in GPS Denied Region Using Vision Based Obstacle Detection
P. Om Prakash1, T. Surya2, R. M. Rajkumar3, M. Joseph Steffin4
1 Assistant Professor, 2,3,4 UG Students, Department of Electronics and Communication Engineering
Velammal College of Engineering and Technology, Madurai, Tamilnadu, India
Abstract: Unmanned aerial vehicles (UAVs) have a strong impact on the development of military and security systems, with applications in surveillance, tracking, and real-time object detection. Our work addresses UAV navigation indoors, where GPS navigation becomes invalid because of real-world indoor obstacles, using vision-based video processing techniques. In this paper we formulate simulation code for automatic unmanned aerial vehicle (UAV) navigation. A centralized algorithm is proposed that detects the obstacle, estimates the size of the nearest object in each frame of the video using Sobel's edge detection algorithm, and then simulates the navigation of the UAV by applying an approximated time delay to the propeller, which positions the UAV in the virtual imaging, taking into account the UAV dimensions and the detected size parameters.
INTRODUCTION:
In recent years UAVs have gained significant roles in the field of robotics. Low-cost platforms using inexpensive sensor payloads have been shown to provide satisfactory flight. In this paper, we demonstrate video processing and navigation methods that can be applied to low-cost UAVs, and we list some popular inexpensive platforms and application fields where they are useful. Unmanned vehicles, including UAVs, offer new perspectives for transportation and services, although the legal requirements are still quite restrictive. At the beginning of this paper, we therefore explain the general ideas for object detection. We survey techniques that work in any application but are specifically motivated by infrastructure inspection, so we point out the connections between vision methodologies and highlight those shown to be successful in UAV applications.
EXISTING METHOD:
Early research on aerial robots can be grouped into non-vision-based and vision-based sensing and control. We begin by presenting, at a high level, existing vision work for indoor navigation, where GPS is denied and vision becomes the key factor; outdoors, the main concern is control, and navigation has largely been achieved with control algorithms.
NON-VISION BASED SENSORS:
This line of work achieves autonomous quad-rotor navigation with the help of active sensors such as laser range scanners, sonar, and infrared. For short-range sensing, Roberts [11] uses infrared and ultrasonic sensors to fly a quad-rotor indoors in a 7 x 6 m room, but the approach is not applicable to long-range sensing. Achtelik [1] used a laser rangefinder and a stereo camera for quad-rotor navigation, combining the two complementary sensors. Such circuit modifications make the platform more complex and require more power than normal, resulting in the heavy battery weight necessary to support active, power-hungry sensors, whereas miniature MAVs (micro air vehicles) are able to carry only a lightweight, low-power sensor such as a camera.
VISION BASED SENSORS:
Turning to vision, recent work suggests that UAVs can be designed to fly in indoor environments using vision only; the main reason for adopting vision-based sensing is that long-range sensing can be achieved with minimal power consumption. Nicoud et al. [12] deal with the trade-offs of designing indoor helicopters, while Schafroth et al. [13] developed different test benches for micro helicopters and also designed a dual-rotor single-axis helicopter with an omnidirectional camera. Extending the list, Mejias et al. [14] used vision to land a helicopter while avoiding power lines, and Zingg et al. [4] presented optical-flow-based algorithms for navigating an MAV in corridors. Behind every such process, three main factors play an important part: stabilization, control and navigation. The following sections briefly describe their roles with these sensors, with a focus on stabilization or navigation in different environments.
STABILIZATION AND CONTROL:
Various kinds of cameras are used to achieve focus and stabilization of quad-rotors; for instance, stereo cameras can be programmed to refocus at a certain depth. Moore et al. [15] achieved autonomous flight using a stereo camera, and Johnson [16] made a quad-rotor hover stably indoors using vision. All of the methods and research discussed above address only stabilization and pose estimation of the quad-rotor. The most important remaining factor is therefore navigation: everything works properly only if navigation is done properly.
NAVIGATION:
It is clear that a simple, lightweight camera needs a navigation guide that usually depends on estimated patterns or known environments. For example, Tournier et al. [20] estimated the attitude and position of quad-rotor vehicles using Moire patterns pasted in the environment. Soundararaj, Prasanth and Saxena [21] and Courbon et al. [22] used vision to fly in known environments, but their methods are not suitable when a full visual database is unavailable. Mori et al. [23] used markers to stably hover a co-axial helicopter and move from one marker to another. Thus, the specific approach varies with the conditions, while vision and navigation remain the constant factors.
OTHER RELATED WORKS:
Among other related work on indoor navigation, Michels, Saxena and Ng [6] used on-board cameras for autonomous obstacle avoidance in a small RC car driving at high speed, computing image features that capture single-image distance cues (Make3D, [2], [13], [1]) to predict the distances of obstacles. Our work is motivated by this work; however, its main disadvantage is that algorithms designed for specific ground robots are not directly applicable to MAV obstacle detection. Another line of work, visual SLAM, constructs a map of the environment using images captured from an attached camera.
Ribnick et al. [7] estimated positions and velocities from monocular views. Such methods for estimating position from visual landmarks have been deployed in many robots navigating on 2D ground. Celik et al. [2] reconstruct 3D properties of the environment with vision in order to plan the path of a UAV. These techniques are only suitable when there are strong feature points that can be tracked easily from frame to frame, so this approach is not applicable to indoor scenes with few trackable features, such as bare walls. Our method does not attempt full 3D reconstruction and is therefore less computationally intensive. Because we follow a vision-based algorithm that does not require high-quality images, a basic image with a small resolution of 128 x 128 pixels suffices, which makes the algorithm attractive for simple aerial vehicles and robots. Our method is also convenient to apply because most flying platforms already have cameras attached.
PROPOSED METHOD:
VISION: CAMERA-BASED SENSING AND IMAGE PROCESSING:
Navigation of unmanned vehicles requires sensing. Usually, ultrasonic sensors, colour, thermal or infrared cameras, or laser rangefinders are used to extract raw data about the surrounding environment. Among these options, low-cost UAVs most often carry colour cameras. Information is then extracted from the stored frames using computer image processing techniques.
FEATURE DETECTION AND DESCRIPTION METHODS:
Feature detection and description algorithms are the primary building blocks for object detection and tracking. These methods are used, for example, to extract UAV position and motion information. Methods differ from each other in the pre-processing used (grayscaling, blurring, masking), in the way the features are interpreted and selected, and in the mathematical operations used in the processing steps. Feature detectors are responsible for identifying features, whereas descriptors are used to match a feature across two images (e.g., images from different perspectives or subsequent frames of a video stream). Detectors combined with descriptors and matching methods form complete tools for motion tracking. Edge detection is usually employed to identify lines and planes in images; some of the classic methods are the Canny, Sobel, Laplacian and Scharr edge detectors, and several surveys compare the performance of these and other algorithms. In UAV inspection applications, feature detectors are useful for detecting targets (buildings, objects) or references that have to be followed. For linear structure detection, edge detectors can be combined, e.g., with line extractors. To achieve target detection, feature detectors can be applied to reference and processed images. When combined with descriptors and matchers, feature detectors can also be used to track moving objects or to keep a reference position relative to a detected object. Optical flow is a family of techniques that focuses on determining motion from images. More precisely, optical flow can be defined as the apparent motion of feature points or patterns in a 2D image of the 3D environment. Although they provide useful navigation information, optical flow algorithms are usually time consuming.
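As an illustration of how these classic detectors behave on a single video frame, the following MATLAB sketch runs the built-in edge function with its Sobel, Canny and Laplacian-of-Gaussian options; the file name frame.png is only a placeholder for a frame grabbed from the on-board camera.

% Compare classic edge detectors on one frame (illustrative sketch only;
% 'frame.png' is a placeholder for a frame from the on-board camera).
I = imread('frame.png');
if size(I, 3) == 3
    I = rgb2gray(I);            % the detectors below expect a grayscale image
end
bwSobel = edge(I, 'sobel');     % gradient magnitude thresholding, very cheap
bwCanny = edge(I, 'canny');     % hysteresis thresholding, cleaner contours
bwLoG   = edge(I, 'log');       % zero crossings of the Laplacian of Gaussian
figure;
subplot(1,3,1), imshow(bwSobel), title('Sobel');
subplot(1,3,2), imshow(bwCanny), title('Canny');
subplot(1,3,3), imshow(bwLoG),   title('LoG');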
ALGORITHM:
In this paper we have formulated simulation code for the unmanned aerial vehicle using MATLAB. The image captured by the camera is taken into the process and converted into a grayscale image in MATLAB. From this grayscale image, the largest blob is found by comparing the areas of several blobs, and the largest blob is stored as a separate image. By analyzing the largest blob, its edges are found. From the edge values of the blob, a sparsity matrix is formed and stored. Using the sparsity matrix we can easily find the size of the object in pixels: the size is obtained from the maximum and minimum points of the edge pixels. Using the vanishing point technique we can then find the distance of the object from the propeller. Knowing this distance, we navigate the drone by applying a time delay to the propeller.
Step1: Read the image.
Step2: Convert the image into greyscale.
Step3: Find the largest blob.
Step4: Compare the areas and store the largest blob as a separate image.
Step5: Find the edges of the blob (Sobel's edge detection).
Step6: Form the sparsity matrix.
Step7: Visualize the sparsity pattern (matrix values to image for display purposes).
Step8: Find the size of the object (in pixels) by detecting maximum and minimum points of the edges.
Step9: Using vanishing point technique, find the distance of the object.
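A minimal MATLAB sketch of Steps 1-8 is given below. It assumes the Image Processing Toolbox is available and uses a simple global threshold to segment candidate blobs; the file name frame.png and the threshold behaviour are placeholders rather than the exact settings of our simulation.

% Steps 1-8: greyscale conversion, largest blob, Sobel edges, sparsity
% matrix and object size in pixels (illustrative sketch only).
I    = imread('frame.png');             % Step 1: read the image
gray = rgb2gray(I);                     % Step 2: convert to greyscale
bw      = imbinarize(gray);             % segment candidate blobs
largest = bwareafilt(bw, 1);            % Steps 3-4: keep only the largest blob
imwrite(largest, 'largest_blob.png');   % store the largest blob as a separate image
edges = edge(largest, 'sobel');         % Step 5: Sobel's edge detection
S     = sparse(edges);                  % Step 6: sparsity matrix of the edge map
figure, spy(S);                         % Step 7: visualize the sparsity pattern
% Step 8: object size in pixels from the extreme edge points.
[rows, cols] = find(edges);
heightPx = max(rows) - min(rows) + 1;
widthPx  = max(cols) - min(cols) + 1;
fprintf('Object size: %d x %d pixels\n', widthPx, heightPx);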
Step 9 estimates the vanishing point of the scene from the following quantities:
- $\ell_i$: a line in the image, discovered via the Hough transform. Let $L$ be the number of lines found in the image, with $i \in [0, L)$. Each line can be represented in slope-intercept form as $y = m_i x + c_i$.
- $(u_k, v_k) \in \mathbb{R}^2$: the coordinates of the intersection of two lines. There are $K$ total intersections. In detail, if the lines $\ell_i$ and $\ell_j$ do not intersect, let $(u_k, v_k) = (\infty, \infty)$; if they do, the intersection is obtained by solving $m_i x + c_i = m_j x + c_j$.
- $G$: the $g \times g$ grid that tiles the image plane ($g = 11$ in our experiments). $G_{p,q}$ represents the number of line intersections falling in the grid element $(p, q)$, where $p, q \in [0, g)$ are integers, i.e.,

$$G_{p,q} = \sum_{k=1}^{K} \mathbf{1}\!\left\{ \tfrac{p}{g}\,w \le u_k < \tfrac{p+1}{g}\,w,\; \tfrac{q}{g}\,h \le v_k < \tfrac{q+1}{g}\,h \right\}$$

where $w$ is the width of the image and $h$ is the height of the image.

The initial estimate of the vanishing point is the centre of the grid cell containing the most intersections:

$$(u_0, v_0) = \left( \tfrac{w}{g}\,(p^{*} + 0.5),\; \tfrac{h}{g}\,(q^{*} + 0.5) \right), \qquad (p^{*}, q^{*}) = \arg\max_{p,q} G_{p,q}$$

However, this estimate is noisy, and we therefore average the actual locations of the intersections lying near $(u_0, v_0)$ in order to estimate the vanishing point accurately. Let $N$ be the set of intersections lying close to $(u_0, v_0)$:

$$N = \{\, k \in [0, K) : \lVert (u_k, v_k) - (u_0, v_0) \rVert_2 \le \delta \,\}$$

where $\delta$ is the distance threshold. We then compute the new estimate of the vanishing point as

$$(u^{*}, v^{*}) = \frac{1}{|N|} \sum_{k \in N} (u_k, v_k)$$
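The MATLAB sketch below mirrors this estimation: Hough lines are extracted from the edge map, their pairwise intersections are accumulated on a g x g grid, and the intersections near the densest cell are averaged. The number of Hough peaks and the threshold delta are assumed values, not tuned parameters from our experiments.

% Vanishing point estimate from Hough lines (illustrative sketch; 'edges'
% is the binary edge map from the previous sketch).
[h_img, w_img] = size(edges);
g     = 11;                              % grid size used in our experiments
delta = 20;                              % distance threshold in pixels (assumed)
[H, theta, rho] = hough(edges);
peaks = houghpeaks(H, 20);               % number of peaks is an assumed value
lines = houghlines(edges, theta, rho, peaks);
% Slope-intercept parameters m_i, c_i from the endpoints of each line.
L = numel(lines);
m = zeros(L, 1);  c = zeros(L, 1);
for i = 1:L
    p1 = lines(i).point1;  p2 = lines(i).point2;
    m(i) = (p2(2) - p1(2)) / (p2(1) - p1(1) + eps);   % eps avoids division by zero
    c(i) = p1(2) - m(i) * p1(1);
end
% Pairwise intersections (u_k, v_k); near-parallel pairs are skipped.
u = [];  v = [];
for i = 1:L-1
    for j = i+1:L
        if abs(m(i) - m(j)) > 1e-6
            x = (c(j) - c(i)) / (m(i) - m(j));
            u(end+1) = x;                 %#ok<AGROW>
            v(end+1) = m(i) * x + c(i);   %#ok<AGROW>
        end
    end
end
% Count intersections per grid cell; the densest cell gives (u0, v0).
p = floor(u / w_img * g);  q = floor(v / h_img * g);
valid = p >= 0 & p < g & q >= 0 & q < g;
G = accumarray([q(valid)' + 1, p(valid)' + 1], 1, [g g]);
[~, idx] = max(G(:));
[q_star, p_star] = ind2sub([g g], idx);
u0 = w_img / g * (p_star - 1 + 0.5);
v0 = h_img / g * (q_star - 1 + 0.5);
% Refine by averaging intersections within distance delta of (u0, v0).
near  = hypot(u - u0, v - v0) <= delta;
uStar = mean(u(near));
vStar = mean(v(near));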
Step10: Find the dimensions (in cm) using the distance and the size in pixels.
Step11: Using the computed dimensions, calibrate the UAV's propellers for the required motion.
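Steps 10 and 11 are not given in closed form above. A common way to recover metric size from a pixel size and a distance is the pinhole relation size_cm = size_px x distance_cm / f_px, where f_px is the focal length in pixels; the sketch below uses this relation together with a purely illustrative linear mapping from lateral clearance to propeller time delay. The focal length, UAV width, gain and the re-declared inputs are assumed placeholder values, not measurements from our platform.

% Steps 10-11: metric dimensions and propeller time delay (illustrative
% sketch; pinhole camera model assumed, all constants are placeholders).
widthPx  = 120;  heightPx = 80;   % object size in pixels (from Step 8)
w_img    = 640;                   % image width in pixels
f_px     = 700;                   % focal length in pixels (assumed calibration)
distCm   = 250;                   % obstacle distance from Step 9 (assumed)
uavWidthCm = 45;                  % UAV width including propellers (assumed)
% Pinhole relation: metric size = pixel size * distance / focal length.
objWidthCm  = widthPx  * distCm / f_px;
objHeightCm = heightPx * distCm / f_px;
% Illustrative avoidance rule: if the free space beside the obstacle is
% narrower than the UAV, hover; otherwise translate sideways by delaying
% one propeller for a time proportional to the clearance.
frameWidthCm = w_img * distCm / f_px;     % visible width at the obstacle depth
clearanceCm  = frameWidthCm - objWidthCm;
gain = 0.01;                              % seconds of delay per cm (assumed)
if clearanceCm < uavWidthCm
    delaySec = 0;                         % no safe gap: hold position
else
    delaySec = gain * clearanceCm;        % steer toward the larger free space
end
fprintf('Object: %.1f x %.1f cm, propeller delay: %.2f s\n', ...
        objWidthCm, objHeightCm, delaySec);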
CONCLUSION:
The project focused on the navigation part of UAV automation, and the overall operation was successfully simulated using MATLAB Simulink. HDL coding makes it easy to implement the resulting code directly in a UAV navigation control kit, provided the energy source and propulsion mechanism are brought under calibrated motion adjustment and the other stabilisation is carried out considering the entire vehicle as a single body. A real-time implementation would take this algorithm to fruitful applications including interior mapping using SLAM (simultaneous localisation and mapping), object tracking (an image-matched follower vehicle), object location and surveillance.
RESULT:
Because we use a reduced frame rate and limited perception, we achieve a faster response in obstacle detection and thereby improve the navigation. Using Sobel's algorithm also reduces the computation time. By combining perceptual cues with edge detection, we achieve greater efficiency.
FUTURE SCOPE:
The proposed method can be extended to ecosystems that are monitored, surveyed and updated entirely by vision-based vehicles (both land and air). Unmanned ecosystems can also be an ideal solution in industrial environments, where fetching, surveying and maintenance can be readily improved using UAVs.
REFERENCES:
[1] M. Achtelik, A. Bachrach, R. He, S. Prentice, and N. Roy, Stereo vision and laser odometry for autonomous helicopters in GPS-denied indoor environments, in SPIE Unmanned Systems Technology XI, 2009.
[2] K. Celik, S.-J. Chung, M. Clausman, and A. K. Somani, Monocular vision SLAM for indoor aerial vehicles, in IROS, 2009.
[3] M. Goesele, N. Snavely, B. Curless, S. M. Seitz, and H. Hoppe, Multi-view stereo for community photo collections, in ICCV, 2007.
[4] S. Zingg, D. Scaramuzza, S. Weiss, and R. Siegwart, MAV navigation through indoor corridors using optical flow, in ICRA, 2010.
[5] A. Saxena, S. Chung, and A. Ng, Learning depth from single monocular images, in NIPS, 2005.
[6] J. Michels, A. Saxena, and A. Y. Ng, High speed obstacle avoidance using monocular vision and reinforcement learning, in ICML, 2005.
[7] P. Abbeel, A. Coates, M. Quigley, and A. Y. Ng, An application of reinforcement learning to aerobatic helicopter flight, in NIPS, 2006.
[8] E. Feron and S. Bayraktar, Aggressive landing maneuvers for unmanned aerial vehicles, in AIAA GN&C, 2006.
[9] V. Gavrilets, I. Martinos, B. Mettler, and E. Feron, Control logic for automated aerobatic flight of miniature helicopter, in AIAA GN&C, 2002.
[10] A. Coates, P. Abbeel, and A. Y. Ng, Learning for control from multiple demonstrations, in ICML, 2008.
[11] J. Roberts, T. Stirling, J.-C. Zufferey, and D. Floreano, Quadrotor using minimal sensing for autonomous indoor flight, in EMAV, 2007.
[12] J.-D. Nicoud and J.-C. Zufferey, Toward indoor flying robots, in IROS, 2002.
[13] D. Schafroth, S. Bouabdallah, C. Bermes, and R. Siegwart, From the test benches to the first prototype of the muFly micro helicopter, JIRS, vol. 54, pp. 245–260, 2009.
[14] L. Mejias, J. Roberts, K. Usher, P. Corke, and P. Campoy, Two seconds to touchdown vision-based controlled forced landing, in IROS, 2006.
[15] R. J. D. Moore, S. Thurrowgood, D. P. Bland, D. Soccol, and M. Srinivasan, A stereo vision system for UAV guidance, in IROS, 2009.
[16] N. Johnson, Vision-assisted control of a hovering air vehicle in an indoor setting, Ph.D. dissertation, Brigham Young University, 2008.
[17] F. Kendoul and K. Nonami, A visual navigation system for autonomous flight of micro air vehicles, in IROS, 2009.
[18] A. Cherian, J. Andersh, V. Morellas, N. Papanikolopoulos, and B. Mettler, Autonomous altitude estimation of a UAV using a single onboard camera, in IROS, 2009.
[19] C. Fan, S. Baoquan, X. Cai, and Y. Liu, Dynamic visual servoing of a small scale autonomous helicopter in uncalibrated environments, in IROS, 2009.
[20] G. Tournier, M. Valenti, and J. P. How, Estimation and control of a quadrotor vehicle using monocular vision and moiré patterns, in AIAA GN&C, 2006.
[21] S. Soundararaj, A. Sujeeth, and A. Saxena, Autonomous indoor helicopter flight using a single onboard camera, in IROS, 2009.
[22] J. Courbon, Y. Mezouar, N. Guenard, and P. Martinet, Visual navigation of a quadrotor aerial vehicle, in IROS, 2009.
[23] R. Mori, K. Hirata, and T. Kinoshita, Vision-based guidance control of a small-scale unmanned helicopter, in IROS, 2007.
[24] A. Saxena, S. Chung, and A. Ng, 3-D depth reconstruction from a single still image, IJCV, vol. 76, no. 1, pp. 53–69, 2008.
[25] A. Saxena, M. Sun, and A. Ng, Make3D: learning 3D scene structure from a single still image, IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 824–840, 2008.
[26] B. Williams, M. Cummins, J. Neira, P. Newman, I. Reid, and J. Tardos, An image-to-map loop closing method for monocular SLAM.