Target Shooting Training and Instructive System Model using Python

DOI : 10.17577/IJERTV5IS050902


Sumit A. Guthe

Electronics & Communication Engineering Dept.

DIEMS, Aurangabad, India

Prof. P. M. Soni

Electronics & Communication Engineering Dept.

DIEMS, Aurangabad, India

Abstract: A shooting range is an important tool for shooting practice in defence and police forces. In a conventional shooting range the trainee shoots the target using a real gun. Such a range requires a high cost for installation and maintenance, and the safety of the trainee must be considered carefully to avoid accidents and injuries caused by the weapon. This paper presents a target shooting training and instructive system model in which the trainee shoots a circular target using a laser pointer attached to the gun. A camera is also attached to the weapon to capture frames containing the laser spot and the circular target. Image processing techniques are used to detect the circular target and the laser spot under different background environments. Further, the proposed system is able to compute the shooting score properly.

Keywords: Camera, Laser pointer, Image Processing, OpenCV.

  1. INTRODUCTION

    This paper presents a shooting training and instructive system model in which the shooter shoots the target using a laser pointer attached to the weapon. A camera is attached to the weapon to capture the image of the laser shot and the circular target. Image processing techniques are employed to detect the laser spot on the target. From the experiments it is found that the proposed system can detect both the circular target and the laser spot under different background environments. Further, the proposed algorithm is able to compute the shooting score properly. The system requires 1) a laser pointer, 2) a camera, 3) a circular target, and 4) a processing system with OpenCV. The system must be suitable for shooting practice within a given simulation environment. In a real-world shooting environment the range involved can be in excess of 50 metres; if the shooter's aim is off by only a few millimetres, the shot may miss the target by up to a metre. Any tracking system must therefore be able to resolve the position and direction of the gun to within an acceptable limit. A low latency and a high refresh rate are needed for a smooth and responsive simulation; a high latency results in noticeable lag. Portability and flexibility are both desirable, though not essential for the development of a prototype system. The cost of the system should be competitive with tracking systems of similar performance.

    The developed shooting simulator utilizes a low-cost wireless camera to capture the laser spot on the target. Different from existing laser pointer-camera systems, our system places the camera inside a target box with the circular pattern target attached in front of it. The algorithm locates the position of the circular pattern automatically by employing image processing techniques, and a simple thresholding method is adopted to detect the laser spot. From the experiments conducted, our system could detect the position of the laser spot accurately. The circular pattern usually has circles of the same width. Unfortunately, since a low-quality camera is used, those circles are often blurred, so it is difficult to detect all of them properly; moreover, detecting all circles requires a high computation cost. To overcome these problems, we modify the circular pattern by thickening the outermost circle. Here we assume that the distance from each circle to the next is the same; therefore, instead of finding all ten circles, we only find the outermost circle, and the coordinates of the inner circles are mapped appropriately [1].
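    As a rough illustration of this mapping, the following sketch derives the inner ring radii from the detected outermost circle; the helper name is ours, and the assumption of ten equally spaced rings follows the description above.

```python
def ring_radii(outer_radius_px, rings=10):
    # The circles are equally spaced, so ring i (1 = innermost, rings = outermost)
    # has radius outer_radius_px * i / rings.
    return [outer_radius_px * i / rings for i in range(1, rings + 1)]

# Example: an outermost circle detected at 93 px maps to rings at 9.3, 18.6, ..., 93.0 px
print(ring_radii(93.0))
```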

  2. SYSTEM DEVELOPMENT

    1. System Configuration

      Fig.1 shows the block diagram of the proposed system configuration of the shooting training and instructive system model. As shown in the figure, the weapon or gun is equipped with a laser pointer and a camera. The camera is connected to a computer for performing the image processing tasks. In this work, the shooting target is a circular pattern which is printed on a piece of paper and then posted on a wall or any other place. The target could also be generated and displayed on the computer monitor.

      Fig.1 Block Diagram

      An electronic circuit is adopted to provide the firing mechanism of the laser pointer attached to the weapon. Using this circuit, when the shooter pushes the fire button, the laser pointer is switched ON for a short pulse only. The camera on the weapon captures images continuously, frame by frame. The computer analyses each captured image to detect the circular target and the laser spot. Once the laser spot is detected, the shooting score is calculated based on the position of the detected laser spot relative to the target.
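      A minimal sketch of this continuous capture loop is given below, assuming a standard OpenCV video capture on camera index 0 and a keyboard quit key, neither of which is specified in the paper.

```python
import cv2

cap = cv2.VideoCapture(0)  # camera mounted on the weapon (index 0 assumed)

while True:
    ok, frame = cap.read()  # capture images continuously, frame by frame
    if not ok:
        break
    # Each frame would be passed to the detection step described in the next
    # subsection to find the circular target and, when the trigger fires, the laser spot.
    cv2.imshow("Weapon camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop the prototype loop
        break

cap.release()
cv2.destroyAllWindows()
```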

    2. Detection Algorithm

      The detection algorithm is shown in Fig.2. The input to the algorithm is the BGR image captured from a video camera. Since the camera is attached to the weapon, the captured image moves according to the shooter's aiming. Therefore the first task is to find the circular target pattern in the image. To provide simplicity and robustness of the algorithm, the circular target pattern should be prepared beforehand.

      After the circular target pattern is found, the radius of the target is identified. Then the algorithm finds the laser spot inside the circle. This approach assumes that the shooter's laser spot hits inside the target; otherwise it will not be detected by the system.
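      A compact skeleton of this two-stage algorithm is sketched below; the helper functions find_target_circle and find_laser_spot are hypothetical stand-ins for the OpenCV steps detailed in Section 3.

```python
import math

def detect_shot(frame, find_target_circle, find_laser_spot):
    """Sketch of the detection flow in Fig.2 (helper functions are hypothetical)."""
    target = find_target_circle(frame)       # -> (cx, cy, radius_px) or None
    if target is None:
        return None                           # target not visible in this frame
    cx, cy, radius = target
    spot = find_laser_spot(frame)             # -> (x, y) of the laser spot or None
    if spot is None:
        return None
    # The laser spot is only accepted if it falls inside the target circle,
    # matching the assumption stated above.
    if math.hypot(spot[0] - cx, spot[1] - cy) > radius:
        return None
    return (cx, cy, radius), spot
```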



      Fig.2 Detection Algorithm

    3. System Flow

    In this system a high-resolution camera is fitted on the weapon, and a laser unit is also fitted on the weapon with the help of a bracket. The CPU or processing unit running the Python software is connected to the camera. Once the camera is synchronized and the alignment is checked on the screen, the system is ready to commence shooting. The camera captures frames continually. On pressing the trigger, a high-intensity laser beam is fired like a shot; when it strikes the target, the image is captured by the camera and the computer stores the captured image. At the same time the computer finds the circular target: it finds the coordinates of the brightest pixel as well as the radius and centre coordinates of the circular target. Using these coordinates, the distance of the brightest pixel from the centre of the target is found using the Euclidean formula and multiplied by a calibrated multiplier to obtain the actual distance. The graphical user interface screen provides a tabular representation of five different shots. The system is designed keeping in mind the instruction required to be imparted by the instructor during the training session. The instructor has complete control of the shooting program in progress; he can at any stage stop or intervene during the shooting process by means of communication and pass the necessary instructions, including recommending the shooting programme. All data and results of the various shooting reports can be generated for record, as well as to carry out analysis of the shooting of a particular firer. Printouts for individual soldiers can be produced for data analysis, and corrective actions can be taken. The system flow diagram is shown in Fig.3.

    Fig.3 System flow diagram

  3. SIMULATED RESULTS

    This system is implemented using Python and OpenCV. Initial work has been done on a normal circular target pattern for finding the circular target. We apply different image processing techniques step by step using the OpenCV library. For circular target detection we convert the BGR image frame to a grayscale image frame, then apply a median blur filter to reduce noise. After filtering the image frame we use the Canny edge detector to detect the circular edges, and then the Hough circle transform (gradient method) is used to find the circles. These image processing steps are shown in the figures below.
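    The pipeline described above maps directly onto OpenCV calls. The following is a minimal sketch, assuming an illustrative input file name and Hough transform parameters that are not given in the paper.

```python
import cv2
import numpy as np

frame = cv2.imread("target_frame.png")           # BGR frame from the camera (file name assumed)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # BGR -> grayscale
blurred = cv2.medianBlur(gray, 5)                # median blur to reduce noise
edges = cv2.Canny(blurred, 50, 150)              # Canny edges (as shown in Fig.6)

# Gradient-based Hough circle transform to find the circular target
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=150, param2=40, minRadius=30, maxRadius=300)
if circles is not None:
    cx, cy, r = np.round(circles[0, 0]).astype(int)   # strongest detected circle
    cv2.circle(frame, (cx, cy), r, (0, 255, 0), 2)    # draw the circle (as in Fig.7)
    cv2.circle(frame, (cx, cy), 2, (0, 0, 255), 3)    # mark its centre
    print("Target centre:", (cx, cy), "radius (px):", r)
```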

    Fig.4 BGR image frame showing circular target

    Fig.5 Median blur image frame showing circular target

    Fig.6 Canny edge detection image frame showing circular target

    Fig.7 Center and circle on image frame using Hough circle transform

    Fig.8 Gaussian blur image frame showing laser spot

    Fig.9 Frame showing the laser spot

    After finding the circular target and obtaining the circle parameters, we work on finding the laser spot. To find the laser spot and its pixel coordinates we again use image processing: first a filter is applied to reduce noise, then the brightest pixel is searched for. Using these techniques we obtain the laser spot and its pixel coordinates, as shown in Fig.8 and Fig.9.
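    A minimal sketch of this brightest-pixel search is given below; the Gaussian kernel size and the use of cv2.minMaxLoc are our assumptions for illustration.

```python
import cv2

frame = cv2.imread("shot_frame.png")             # frame captured when the trigger is pressed (file name assumed)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (11, 11), 0)    # Gaussian blur to suppress isolated bright pixels
_, max_val, _, max_loc = cv2.minMaxLoc(blurred)  # brightest pixel = laser spot
print("Laser spot at:", max_loc, "intensity:", max_val)
```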

    The distance between two pixels is found using the formula

    D(X,Y) = √((X1 - Y1)² + (X2 - Y2)²)    (1)

    where (X1, X2) are the pixel coordinates of the first pixel and (Y1, Y2) are the pixel coordinates of the second pixel.

    Example:

    Centre of circle pixel coordinates X = (311, 213); brightest pixel coordinates Y = (250, 190).

    Distance between the centre of the circle and the laser spot (brightest pixel):

    D(X,Y) = √((311 - 250)² + (213 - 190)²)

    D(X,Y) = 65.1920 pixels

    Using the above equation we get the distance between the centre of the circle and the laser spot in pixels. To obtain the distance in centimetres, we compare this distance in pixels with the radius of the circle in pixels obtained from the Hough circle transform and the known radius of the circle in centimetres.

    If the radius of the circle obtained from the Hough circle transform is 93.1314 pixels, the distance between the centre of the circle and the laser spot is 65.1920 pixels, and the known radius of the circle is 10 cm, then the laser spot distance d in centimetres from the centre of the circle is

    d = (10 × 65.1920) / 93.1314

    d = 7 cm
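    Equation (1) and this pixel-to-centimetre conversion take only a few lines; the helper name below is ours, and the values simply restate the worked example.

```python
import math

def spot_distance_cm(centre, spot, radius_px, radius_cm):
    """Distance of the laser spot from the target centre, in centimetres."""
    d_px = math.hypot(centre[0] - spot[0], centre[1] - spot[1])   # Equation (1)
    return d_px * radius_cm / radius_px                           # scale by the calibrated multiplier

# Worked example from above: centre (311, 213), laser spot (250, 190),
# Hough radius 93.1314 px, known target radius 10 cm
print(round(spot_distance_cm((311, 213), (250, 190), 93.1314, 10.0), 2))   # ~7.0
```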

    The above calculation for finding the distance of the laser spot from the centre of the circular target is included in the program itself. The result is displayed in a Tkinter GUI in Python, which shows a tabular representation of the distance in pixels and the distance in centimetres for five different firing shots. For a simple representation, we also draw circles in different colours where the laser spot hits the target.
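    A minimal Tkinter sketch of such a tabular display follows; the column names and sample values are placeholders, not the actual results shown in the GUI figure.

```python
import tkinter as tk
from tkinter import ttk

# Placeholder results for five shots: (shot no., distance in pixels, distance in cm)
shots = [(1, 65.19, 7.00), (2, 40.00, 4.29), (3, 12.50, 1.34),
         (4, 88.00, 9.45), (5, 25.00, 2.68)]

root = tk.Tk()
root.title("Shooting results")
table = ttk.Treeview(root, columns=("shot", "px", "cm"), show="headings", height=5)
for col, label in (("shot", "Shot"), ("px", "Distance (px)"), ("cm", "Distance (cm)")):
    table.heading(col, text=label)
for row in shots:
    table.insert("", tk.END, values=row)
table.pack(padx=10, pady=10)
root.mainloop()
```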

    Fig.10 GUI of results

  4. CONCLUSION

In this system model for a shooting range simulator, an embedded camera is used for detecting the circular target and the laser spot, with both the camera and the laser pointer attached to the weapon. The system employs simple yet effective image processing techniques in Python and OpenCV for detecting and locating the laser spot on the target with high accuracy, and the distance of the laser spot from the centre of the circular target is obtained directly in GUI form.

REFERENCES

  1. Aryuanto Soetedjo and Eko Nurcahyo, "Developing of Low Cost Vision-Based Shooting Range Simulator," IJCSNS International Journal of Computer Science and Network Security, Vol. 11, No. 2, February 2011.

  2. Aryuanto Soetedjo, Ali Mahmudi, M. Ibrahim Ashari and Yusuf Ismail Nakhoda, "Camera-Based Shooting Simulator using Color Thresholding Techniques."

  3. Ali Mahmudi, M. Ibrahim Ashari and Yusuf Ismail Nakhoda, Department of Electrical Engineering, National Institute of Technology (ITN), Malang, Indonesia, "Detecting Laser Spot In Shooting Simulator Using an Embedded Camera," International Journal On Smart Sensing And Intelligent Systems, Vol. 7, No. 1, March 2014.

  4. Ganesh Laxman Dake and R. M. Autee, "Intelligent Training Aid."

  5. Peb Ruswono Aryan, "Vision Based Automatic Target Scoring System for Mobile Shooting Range."

  6. Matej Mesko and Stefan Toth, "Laser Spot Detection."

  7. Poonam Yamgar, Harshali Chaudhari, Deepika Mali, Dhanashri Kunte and Prof. J. L. Chaudhari, "Smart Interactive Projector using Laser Pointer (Using Camera Tracked Method)."

  8. Jong Gwan Lim, Farrokh Sharifi and Dong-soo Kwon, "Fast and Reliable Camera-tracked Laser Pointer System Designed for Audience."

  9. Feng Wang, Xiangshi Ren and Zhen Liu, "A Robust Blob Recognition and Tracking Method in Vision-based Multitouch Technique."

  10. Kristian Jantz, Gerald Friedland and Lars Knipping, "A Low-Cost Mobile Pointing and Drawing Device."

  11. Xu Zhihui, Li Weizhong and Xiao Yongjun, "The Impact Point Detecting System for Photo-electricity Targets."

  12. Dr. Neelu Jain and Neha Jain, "Coin Recognition Using Circular Hough Transform."

  13. Brian Thorne, "Introduction to Computer Vision in Python."

  14. Atul Chowdhary, Vivek Agrawal, Subhajit Karmakar and Sandip Sarkar, "Laser Actuated Presentation System."
