- Open Access
- Authors : Prasanna Linci. A, Vinyojita Mohanraj
- Paper ID : IJERTV4IS040593
- Volume & Issue : Volume 04, Issue 04 (April 2015)
- DOI : http://dx.doi.org/10.17577/IJERTV4IS040593
- Published (First Online): 15-04-2015
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Target Identification and Tracking using UAV
Prasanna Linci. A
M. Tech, Avionics Engineering,
School of Aeronautical Sciences, Hindustan University,
Chennai, India.
Ms. Vinyojita Mohanraj
Assistant Professor, M. Tech,
School of Aeronautical Sciences, Hindustan University,
Chennai, India.
Abstract — Unmanned Aerial Vehicles (UAVs) are widely used on the battlefield, typically for surveillance and intelligence gathering. An Automatic Targeting System (ATS) would greatly enhance a UAV's capability for such reconnaissance, so this project aims to design such a system for UAVs. The ATS comprises a camera system and a wireless communication system: the camera system provides the eyes of the UAV, while the wireless communication system reports the GPS position of the target. Two different methods are used to create the image patches. In the first, an image patch created before flight from previous flight data is used for detection and tracking. In the second, images are captured manually and then used for detecting and tracking the object.
INTRODUCTION
Unmanned aerial vehicles (UAVs) have great potential in military as well as civil missions, and they are a fast-developing remote sensing platform. In rescue and surveillance missions, detecting people and vehicles from an aerial platform has become an important aspect of deploying autonomous UAV systems, and ground target tracking is an important UAV application. The quest for automatic detection of everyday events has driven the development of intelligent surveillance systems that make life easier and keep pace with emerging technology, while also pushing researchers to analyse the challenges of automated video surveillance in light of advances in artificial intelligence. Surveillance cameras are already prevalent in commercial establishments, with camera output recorded and either overwritten periodically or stored in video archives. To extract the maximum benefit from this recorded digital data, moving objects must be detected in the scene without a human operator monitoring the footage at all times. Real-time segmentation of moving regions in image sequences is a fundamental step in many vision systems, and background subtraction is a typical method: the image background and foreground are separated, processed and analysed, and the resulting data is then used to detect motion. In this work, a system for accurately detecting moving objects has been developed and analysed.
This paper discusses the method chosen to achieve this goal, the problems faced during implementation and the main idea of the solution, along with the proposed algorithm, its implementation, simulation results, conclusions and future work.
PROPOSED DESIGN OF THE FLYING PLATFORM
The quad rotor is built from expanded polypropylene to reduce the gross weight, and four geared motors are used to fly the UAV. The design can operate as a UAV as well as a drone and has USB and Wi-Fi (802.11n) interfaces. The onboard sensors are tuned for high sensitivity to give better control, and the ultrasound altimeter is supplemented by an air pressure sensor, allowing more stable flight and hovering. An ARM Cortex processor is used, together with a customised flight control system (FCS) consisting of an accelerometer, gyro and magnetometer. Three-axis gyros provide stability and a calibrated compass indicates heading. The platform can reach a maximum altitude of 165 ft with an endurance of 20-30 minutes, and a lithium battery supplies its power. An inertial measurement unit is used for orientation. Two cameras are fitted for redundancy: a front camera mounted at the nose of the UAV and a vertical camera pointing downwards.
- Front camera: 720p sensor with a 93° lens, recording at up to 30 fps.
- Vertical camera: QVGA sensor with a 64° lens, recording at up to 60 fps.
The source code is written so that any camera can be used in the future; it accepts all types of HD camera. Since the camera is the main hardware, high-resolution cameras are used in this work, which helps avoid blur and clutter. The software is able to use both cameras, and the recorded data can be retrieved from the flight recorder after the UAV lands.
Figure 1: Front camera (mounted so that it projects beyond the quad frame, allowing a full frontal view of the scene)
TARGET IDENTIFICATION SYSTEM ARCHITECTURE
This is the main part of the project. The SURF algorithm is used to identify the particular object, and background subtraction is then applied to follow the identified object.
Figure 2: Target identification system
Serial communication is initialised to start communication between the camera and the computer. The camera then begins searching the environment for the defined target, with two possible outcomes: the object is detected or it is not. If the defined object is detected, the target information is collected and transmitted to the ground control station (GCS), which calculates the centre point of the target. This information is then used to compute the target position (X, Y coordinates), altitude and time, and the object's details are displayed on the system monitor. If the target is not found, the UAV resumes searching; if the target remains missing for a long time, the UAV returns to its starting position. An ultrasonic sensor is used to collect data about the target, and Wi-Fi is used to transmit the data to the GCS. A sketch of this loop is given below.
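A minimal sketch of this detect-and-report loop, assuming OpenCV for frame capture; detect_target, send_to_gcs, read_altitude and return_to_home are hypothetical placeholders standing in for the actual flight code and data link, and the 30-second timeout is an assumed value.

```python
# Sketch of the detect-and-report loop described above (assumptions, not the
# actual flight code): detect_target, send_to_gcs, read_altitude and
# return_to_home are placeholder stubs.
import time
import cv2

MAX_LOST_TIME = 30.0  # assumed: seconds the target may stay missing before returning home

def detect_target(frame):
    """Placeholder: return (x, y, w, h) of the target in the frame, or None."""
    return None

def read_altitude():
    """Placeholder for the ultrasonic altimeter reading (metres)."""
    return 0.0

def send_to_gcs(cx, cy, altitude, timestamp):
    """Placeholder for the Wi-Fi link to the ground control station."""
    print(f"target at ({cx:.0f}, {cy:.0f}), alt={altitude:.1f} m, t={timestamp:.1f}")

def return_to_home():
    """Placeholder: command the UAV back to its starting position."""
    print("target lost, returning to launch point")

def tracking_loop(camera_index=0):
    cap = cv2.VideoCapture(camera_index)   # camera opened once the links are up
    last_seen = time.time()
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        target = detect_target(frame)
        if target is not None:
            x, y, w, h = target
            cx, cy = x + w / 2, y + h / 2          # centre point of the target
            send_to_gcs(cx, cy, read_altitude(), time.time())
            last_seen = time.time()
        elif time.time() - last_seen > MAX_LOST_TIME:
            return_to_home()                       # missing too long: fly back
            break
    cap.release()
```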
IMAGE PROCESSING
Two tasks are given to the processor in this project: ground motion estimation, and target detection and tracking. Depending on the imaged scene, both tasks can run into many problems, so target identification and tracking are simplified by assuming the colour and size of the target, and the tests are conducted in daytime. The target's standard specification parameters are stored at the beginning of the source code, which makes it easy to change them for a different mission that may require targeting a different vehicle. The source code also has the flexibility to change the camera and its parameters; an illustration of such a parameter block is shown below.
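As an illustration only, a parameter block of this kind might look like the following; the names and values are assumptions for a yellow target rather than the paper's actual constants.

```python
# Illustrative only: target parameters kept at the top of the source so that a
# different vehicle can be targeted by editing these values (names and values
# are assumed, not the paper's actual constants).
TARGET_HSV_LOW  = (20, 100, 100)   # lower HSV bound of the target colour (roughly yellow)
TARGET_HSV_HIGH = (35, 255, 255)   # upper HSV bound
MIN_TARGET_AREA = 200              # smallest blob (pixels) accepted as a target
MAX_TARGET_AREA = 20000            # largest blob accepted as a target
```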
An example of image processing is shown in Figure 3.
Figure 3: Image Processing
While the video is captured, the code simultaneously converts the original images to grayscale. The first window shows the original image from the video camera, and the second window shows the image processed from the original video. Information about the UAV is updated by the micropilot, which collects the height and heading of the UAV. Target information is sent to the GCS only when the target is found. A minimal sketch of this two-window display is given below.
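A minimal OpenCV sketch of the two-window display (original frame plus its grayscale copy); the window names and camera index are illustrative.

```python
import cv2

# Show the raw camera frame next to its grayscale version (window names and
# camera index are illustrative).
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # convert each frame to grayscale
    cv2.imshow("Original video", frame)
    cv2.imshow("Processed (grayscale)", gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):            # press 'q' to stop
        break
cap.release()
cv2.destroyAllWindows()
```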
Speeded-Up Robust Features (SURF) is used for feature detection and background subtraction. It extracts unique keypoints and descriptors from an image; these extracted points are saved and used later. The source code has been implemented using OpenCV and Visual Studio. The SURF algorithm follows the steps below (a code sketch follows the list):
- Detection
- Description
- Matching
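A short OpenCV sketch of these three stages, assuming an opencv-contrib build with the non-free xfeatures2d module enabled; the image file names are placeholders, not files from the project.

```python
import cv2

# SURF requires an opencv-contrib build with the non-free module enabled.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

# Stored image patch and current camera frame (file names are placeholders).
template = cv2.imread("target_patch.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

# Detection + description: keypoints and their descriptors for both images.
kp1, des1 = surf.detectAndCompute(template, None)
kp2, des2 = surf.detectAndCompute(frame, None)

# Matching: brute-force matcher with Lowe's ratio test to keep good matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} good matches between the stored patch and the frame")
```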
EXPERIMENTAL RESULTS
To test the designed target identification system, a ground test was conducted. The preliminary test was colour matching and the next was vehicle detection; the images below show the results obtained from the ground test. A small model car, about 10 cm by 10 cm, was used as the target. During this test the UAV was on the ground; after positive results were obtained, the altitude of the UAV was changed and the test was repeated. The final tests were fully successful.
Figure 4: Experimental results — colour detection with background subtraction: (a) red target; (b) blue target; (c) object detection without background subtraction; (d) multiple target detection
Figure 4 clearly shows that the target identification system works properly. Pictures (a) and (b) are examples of background subtraction in colour-based target detection and tracking, picture (c) gives the result without background subtraction, and picture (d) shows multiple target identification. If multiple targets are encountered at the same time, the code chooses the mean point of the possible targets as its tracking point (a small sketch of this rule follows). The final tests were carried out to check data transmission and gave good results. The ground test was conducted for vehicles of all sizes, from small to large, and verified successfully.
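A small sketch of this mean-point rule, assuming each detection has already been reduced to a pixel centroid.

```python
import numpy as np

# Mean-point rule: given the centroids of all candidate targets, track their
# average position (a sketch of the behaviour described above).
def mean_tracking_point(centroids):
    """centroids: list of (x, y) pixel coordinates of every detected target."""
    pts = np.asarray(centroids, dtype=float)
    return tuple(pts.mean(axis=0))

# Example: three detections collapse to a single tracking point.
print(mean_tracking_point([(120, 80), (200, 90), (160, 150)]))  # (160.0, 106.7)
```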
Figure 5: Model vehicle (yellow car inside the green box) being tracked by the Quad rotor during a trial run.
After the ground test succeeded, an actual flight test was carried out to confirm the working of the source code. The value of the yellow colour was set in the calibration panel and the test was then continued.
Using colour intensity, the object was tracked from the flying UAV; the colour definitions in the source code were obtained by trial and error.
Figure 6: Calibrating color intensity and selecting object to track
HSV stands for the hue, saturation and value colour space; V can also be interpreted as brightness. HSV models are often used in image analysis for feature detection and image segmentation, and this approach was used here with good results. By changing these values any colour can be defined, and the required object can then be tracked. Figure 6 shows the output window used to change the HSV values. When multiple possible targets are encountered, the one to be tracked can be selected by changing the object ID. The colour intensity values range from 0 to 255. A small thresholding sketch is given below.
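A minimal sketch of such HSV thresholding with OpenCV; the bounds and file name are assumed values for a yellow target, and note that OpenCV stores hue in the range 0-179 even though saturation and value run 0-255.

```python
import cv2
import numpy as np

# Assumed HSV bounds for a yellow target; OpenCV's hue runs 0-179.
lower = np.array([20, 100, 100])
upper = np.array([35, 255, 255])

frame = cv2.imread("camera_frame.png")            # one frame for illustration
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower, upper)             # 255 where the pixel is in range

# Largest in-range blob is taken as the object; its centroid is the tracking
# point (findContours returns two values in OpenCV 4).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    biggest = max(contours, key=cv2.contourArea)
    m = cv2.moments(biggest)
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"tracking point: ({cx:.0f}, {cy:.0f})")
```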
Figure 7 shows the trajectory of the moving object with respect to the UAV. The angular distance between the quad rotor and the moving vehicle is also provided by the output window; a rough sketch of one way to estimate such an angle is shown after the figure.
Figure 7: Trajectory of the moving target
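One rough way to estimate such an angle is a simple linear mapping from pixel offset to angle using the front camera's 93° field of view; this is an illustration under an assumed 1280-pixel frame width, not necessarily the computation used in the paper.

```python
# Linear approximation of the bearing to the target from its horizontal pixel
# position, using the front camera's 93° field of view and an assumed
# 1280-pixel-wide 720p frame; illustration only.
FOV_DEG = 93.0
FRAME_WIDTH = 1280

def pixel_to_angle(target_x):
    """Angle in degrees between the camera axis and the target."""
    offset = target_x - FRAME_WIDTH / 2       # pixels right (+) or left (-) of centre
    return offset * (FOV_DEG / FRAME_WIDTH)

print(pixel_to_angle(960))   # a quarter-frame right of centre -> about 23°
```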
CONCLUSION
This paper proposed a UAV for vision-based target identification and tracking in various environments, with the main emphasis on the automatic target identification system. The work began with designing the source code for the real-time experiment built on the UAV. The full system was designed and tested carefully before being integrated into the UAV, and all the required tests were completed and integrated successfully.
This proves the concept of the target identification system. If the camera is replaced with a military-specification camera, clearer and better results can be achieved.
FUTURE WORK
This paper concentrated on identifying multiple targets but tracking only a single target. In the future the system can be upgraded to track multiple targets with different object parameters. The target identification parameters, such as size and colour, are currently preloaded before integration; it would be better if these parameters could be changed from the GCS. This paper used only moving ground targets, and the system can be developed further to identify and track aerial targets.