Prototype of Computer-Assisted Moving Target Shooting System

DOI : 10.17577/IJERTV6IS090021


J. Cortés-Galicia
Instituto Politécnico Nacional, Escuela Superior de Cómputo
Av. Juan de Dios Bátiz, esq. Miguel Othón de Mendizábal,
Mexico City 07738, Mexico

M. A. Maldonado-Muñoz
Instituto Politécnico Nacional, Escuela Superior de Cómputo
Av. Juan de Dios Bátiz, esq. Miguel Othón de Mendizábal,
Mexico City 07738, Mexico

C. Martínez-Perales
Instituto Politécnico Nacional, Escuela Superior de Cómputo
Av. Juan de Dios Bátiz, esq. Miguel Othón de Mendizábal,
Mexico City 07738, Mexico

    Abstract: This article presents the development of a prototype that adapts the automatic target recognition systems used in the military industry so that they can be implemented with accessible technology and resources. The system consists of a hardware module and a software module. The software module is in charge of identifying targets through a neural network, considering that the targets may hold a fixed position or move with uniform one-dimensional rectilinear motion, and of deciding which targets to hit; it relies on projectile (parabolic) motion to calculate the firing angle that is sent to the hardware module, which in turn triggers the shot as accurately as possible at the targets previously selected by the system.

    Keywords: Neural networks, object recognition, assisted firing.

    1. INTRODUCTION

      One of the key elements of military defense weapon systems, and mainly of robotic war systems, is automatic target recognition (ATR) [1].

      ATR-based systems are intended to detect and recognize objects such as war tanks, armored vans and ships in images previously captured by a laser scanner, a radar or a camcorder, by computer processing of such images, and later to perform an analysis that determines which of the identified objects are targets. The system can then make a series of decisions that affect the identified targets, ranging from keeping them within the vision frame of the system to immobilizing them through a remote attack.

      Various systems have been developed based on the above- described operation, mostly for military purposes, using techniques and technology that only this sector has access to.

      Currently, there is a great variety of sensors capable of detecting objects within a given area and generating descriptive information about the sensed objects. On the other hand, techniques have been developed that allow the processing of the information generated by the sensors, thus facilitating the use of algorithms in charge of performing the object recognition.

      The present article introduces the development of a prototype system capable of accurately recognizing and firing at targets, considering that they can maintain a fixed position or a uniform one-dimensional rectilinear motion.

    2. METHODOLOGY

      The architecture of the system prototype consists of two modules: one software and one hardware.

      1. Software module

        The components of the software module are shown in Figure 1.

        Fig. 1. Components of the software module.

        The purpose is to perform these functions in real time in order to adapt to different situations. The problem domain requires Image Processing (IP) and Pattern Recognition (PR) tools [2].

        1. Acquisition of the image

          The operation of the system prototype begins in this component: the area where a target is likely to appear is located, the camcorder is focused on it, and the capture of information related to the area of analysis begins.

        2. Pre-processing

          Once the sequence of images containing the object to be analyzed is captured, this sequence must be passed through a series of filters [3] in order to eliminate variations in the intensity or contrast of the image due to the noise generated by the conditions under which the capture was performed.

          Due to the characteristics of the camcorder, color images are obtained in the RGB space with their intensity values for the Red, Green and Blue channels respectively.

          Working with images in the RGB space involves operating on each of these planes, that is, with three different intensity values for each pixel of an image; with very large images, or with a large number of them, processing therefore slows down. Working on a single channel instead reduces the processing time, resulting in a grayscale image.

          Transforming a color image (RGB space) to grayscale involves assigning a single intensity value to each pixel in the image. This transformation consists of calculating the average of the intensities of the three channels; the result represents the intensity within the range [0, 1] or [0, 255], where zero represents absolute black and 1 or 255 represents absolute white, depending on the data type with which the image is processed. Mathematically, the RGB-to-grayscale conversion is expressed by equation 1.

          f(x,y) = [(1/3)*(R+G+B)] (1)
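          As an illustration of equation 1 (not the prototype's actual code), the following Python sketch averages the three channels of a frame; the H x W x 3 array layout is an assumption, not a detail given in the paper.

          import numpy as np

          def rgb_to_gray(rgb):
              # Equation 1: average the R, G and B intensities of every pixel.
              # 'rgb' is assumed to be an H x W x 3 array, float in [0, 1]
              # or uint8 in [0, 255]; the result keeps the same value range.
              gray = rgb.astype(np.float64).mean(axis=2)
              return gray if rgb.dtype.kind == "f" else gray.round().astype(np.uint8)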

          After obtaining the gray levels of the captured image, we proceed to eliminate as much noise as possible; in the work presented here we consider the following filters, which are commonly used in digital image processing.

          The process of digital image filtering consists of applying an operation or transformation to an image to obtain a new image, thus preserving certain characteristics of the image and eliminating the less important ones [9]. This transformation is expressed mathematically in equation 2.

          g(x,y) = T[f(x,y)] (2)

          Where f(x,y) is the original or input image and g(x,y) is the output image that results from applying an operator or transformation T to f over the set of pixels (x,y). The purpose of T may be, for example, to reduce the noise in the image, as already mentioned.

          The image filtering techniques are based on the convolution theorem, starting from the original image f(x,y) and an invariant linear operator h(x,y), that is, an operator whose result depends only on the value of f(x,y) and not on the position of the point at which it is applied.

          From equation 2, where T[f(x,y)] represents the convolution operation, we derive equation 3.

          g(x,y) = [h(x,y)*f(x,y)] (3)

          h(x,y) represents the linear system responsible for producing the output g(x,y). In the terminology of linear systems theory, h is called the transfer function.
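          To make equation 3 concrete, the Python sketch below applies a small mask h to an image as a weighted neighbourhood sum; it is a generic illustration, and the 3 x 3 averaging mask is only one common noise-reduction choice, not necessarily the filter used in the prototype.

          import numpy as np

          def filter_image(f, h):
              # g = h * f (equation 3): slide the mask h over f and accumulate
              # the weighted neighbourhood sum at every pixel position.
              a, b = h.shape[0] // 2, h.shape[1] // 2
              padded = np.pad(f.astype(np.float64), ((a, a), (b, b)), mode="edge")
              g = np.zeros(f.shape, dtype=np.float64)
              for s in range(-a, a + 1):
                  for t in range(-b, b + 1):
                      g += h[s + a, t + b] * padded[a + s:a + s + f.shape[0],
                                                    b + t:b + t + f.shape[1]]
              return g

          mean_mask = np.ones((3, 3)) / 9.0   # 3 x 3 averaging mask (noise reduction)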

        3. Segmentation

          This is the intermediate component of the system prototype; at this point it is assumed that a potential target has been located and that the image containing it has already been filtered, leaving it free of noise.

          Once this is done, the filtered image is divided (hence the name segmentation) into a series of components of interest to be used later. This stage of image processing is what determines the system's correct distinction between common objects and target objects.

          Segmentation involves making modifications to the original image at the pixel level, applying operators or other transformations. Next, we describe the image segmentation methods considered for the realization of the present work.

          Thresholding (umbralization). It consists of separating the objects of interest from the background by binarizing the gray levels in the image. Binarization keeps the image with only two levels: zero for the background and 1 or 255 for the objects of interest, or vice versa. In the histogram of gray levels of an image f(x,y) [4] with a dark background and objects of greater luminosity, the gray levels are grouped into two ends. The idea is to separate these ends, that is, to extract the object from the background; this can be done by establishing a line that divides the two ends of the histogram, choosing a threshold T that represents that line. Thus, a point (x,y) at which f(x,y) > T is a point of the object; the opposite case determines a background point.

          Both thresholding and the filters described earlier operate in the spatial domain. The spatial domain in digital image processing refers to the set of pixels that make up an image; spatial-domain filtering therefore translates into a set of operations that directly modify the value of those pixels using a sub-image area centered on (x,y), which is moved pixel by pixel, typically starting at the upper left corner, applying the operator at each position (x,y) to obtain g.

          Obtaining g(x,y) is mainly done using the so-called masks or kernels, defined as a small two-dimensional distribution whose coefficients define the nature of the operation to be performed. If operating the kernel on the image to be treated yields g(x,y), and by the convolution theorem g(x,y) is the result of the transformation (convolution) of the original image with h(x,y), then by analogy g(x,y) is represented by the convolution of the original image f(x,y) with a previously defined kernel, commonly called the convolution mask. Mathematically this concept is represented in equation 4.

          g(x,y) = Σ(s=-a..a) Σ(t=-b..b) w(s,t) f(x+s, y+t) (4)

          where w(s,t) represents the value of the coefficients of the convolution mask. Depending on the type of mask used in the transformation of an image, different types of transformations can be obtained.

          With this notation, a thresholded image is determined by equation 5:

          g(x,y) = 1, if f(x,y) > T
          g(x,y) = 0, if f(x,y) ≤ T (5)

          The ideal threshold for the images captured in this work has a value of 97; if a smaller threshold is taken, some areas of the background merge with the object, while a larger threshold fuses some areas of the object into the background.
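          A minimal sketch of the thresholding of equation 5, using the threshold of 97 reported above for the test images (in general, T would be re-estimated from each scene's histogram):

          import numpy as np

          def threshold(f, T=97):
              # Equation 5: 1 where f(x, y) > T (object), 0 otherwise (background).
              return (f > T).astype(np.uint8)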

          Arithmetic-logic operations on images. From the thresholding, an image is obtained in which the background is clearly distinguished from the possible target to be affected. It is now necessary to operate on this image to extract the characteristics of interest and discard the rest of the pixels, in order to speed up the automatic recognition process. The arithmetic-logic operations used in digital image processing, given two pixels p and q, are shown in Table I.

          TABLE I. Arithmetic-logic operations used in digital image processing.

          Arithmetic operators:
            Addition        p + q
            Subtraction     p - q
            Multiplication  p * q
            Division        p / q

          Logical operators:
            AND   p & q
            OR    p | q
            NOT   ~p

          These operations can be combined with each other to form any other operation. Logical operations are basic tools in the treatment of binary images and are used in the analysis of shapes, isolation of regions and masking, as well as, as will be seen below, in morphological operations, which act on the neighbors of a pixel in the context of the kernels or masks already mentioned.
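          As a small illustration of the operators of Table I (not code from the prototype), the logical operators can be combined to intersect, merge or invert binary masks such as those produced by thresholding:

          import numpy as np

          p = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)   # example binary masks
          q = np.array([[1, 0, 0], [1, 1, 0]], dtype=bool)

          intersection = p & q    # AND: pixels present in both masks
          union        = p | q    # OR : pixels present in either mask
          complement   = ~p       # NOT: invert a mask
          isolated     = p & ~q   # combination used to isolate a region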

          Mathematical morphology. In biology the word morphology refers to the structure and form of animals and plants; transferred to digital image processing, this concept represents a mathematical tool that extracts the most representative components of an image, such as contours, skeletons and specific shapes, besides serving as a pre-processing tool for filtering, trimming and morphological reduction. The basis of morphology techniques is set theory; in short, mathematical morphology represents the shape of objects in an image.
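          The sketch below implements binary dilation and erosion with a square structuring element, from which a contour can be obtained as the morphological gradient; it is a generic illustration of these operators, not the prototype's implementation.

          import numpy as np

          def dilate(binary, k=3):
              # A pixel becomes 1 if any neighbour under the k x k element is 1.
              pad = k // 2
              p = np.pad(binary.astype(bool), pad, mode="constant")
              out = np.zeros(binary.shape, dtype=bool)
              for dy in range(-pad, pad + 1):
                  for dx in range(-pad, pad + 1):
                      out |= p[pad + dy:pad + dy + binary.shape[0],
                               pad + dx:pad + dx + binary.shape[1]]
              return out

          def erode(binary, k=3):
              # A pixel stays 1 only if every neighbour under the element is 1.
              pad = k // 2
              p = np.pad(binary.astype(bool), pad, mode="constant")
              out = np.ones(binary.shape, dtype=bool)
              for dy in range(-pad, pad + 1):
                  for dx in range(-pad, pad + 1):
                      out &= p[pad + dy:pad + dy + binary.shape[0],
                               pad + dx:pad + dx + binary.shape[1]]
              return out

          def contour(binary):
              # Morphological gradient: dilation minus erosion outlines the object.
              return dilate(binary) & ~erode(binary)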

        4. Description and representation of the characteristics of the object

          Based on the isolation of the components of interest, the next component of the system prototype extracts from the image a set of characteristics that describe the physical properties of the objects.

        5. Target recognition and decision making

          The last component of the system prototype interprets the data that describe the physical properties of the objects. This interpretation indicates whether the description belongs to a common object or to a target and, if it is a target, the system also determines, based on the description of the object, the type of action to take on it.

          This component is responsible for simulating human intervention, providing the system prototype with a sense of artificial perception: it detects and recognizes real-world objects from previously captured and processed images, that is, it performs pattern recognition [5]. Artificial perception in this component is provided by neural networks.

          In the field of artificial intelligence, neural networks are a model of learning and automatic processing inspired by the functioning of the nervous system [6]. In this network each of the interconnected neurons collaborates to produce an output stimulus from an input stimulus, so it is able to perform the pattern recognition [7].

          Because they have characteristics similar to those of the brain, neural networks offer numerous advantages, including:

          • Adaptive learning: ability to learn to perform tasks based on training or initial experience.

          • Self-organization: the neural network creates its own organization or representation of the information received through a learning stage.

          • Fault tolerance: the partial destruction of a network leads to the degradation of its structure; however, some of the network's capabilities can be retained even after it has suffered great damage.

          • Real-time operation: performing large computations on fast-changing data is a task to which neural networks adapt well thanks to their parallel implementation, making them the best alternative for pattern classification and recognition in real time.

          • Easy insertion into existing technology: because a network can be quickly trained, confirmed, verified and transferred to a low-cost hardware implementation, it is easy to insert neural networks into applications within existing systems.

            Considering the characteristics needed to solve the recognition of the image patterns, and thus determine which images contain a target to be engaged and which do not, the following must be taken into account:

          • Most problems are not linearly separable.

          • Minimal numerical changes are expected between similar images; therefore real-valued recognition outputs are required rather than true/false validation.

          • The system must be able to interpret whether an object unknown to it is a target or not, since the whole set of targets cannot be used for training if it is very large.

          According to the previous analysis, the neural network used in the system prototype is the backpropagation network, which fulfills these requirements [8].
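          The following sketch shows a one-hidden-layer backpropagation network of the kind referred to above, trained on descriptor vectors and producing a real-valued recognition score rather than a true/false answer; the layer sizes, learning rate and descriptor length are illustrative assumptions, not the values used in the prototype.

          import numpy as np

          rng = np.random.default_rng(0)

          def sigmoid(z):
              return 1.0 / (1.0 + np.exp(-z))

          class BackpropNet:
              # Minimal backpropagation network: n_in descriptors -> n_hidden -> 1 score.
              def __init__(self, n_in, n_hidden=8, lr=0.5):
                  self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
                  self.b1 = np.zeros(n_hidden)
                  self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
                  self.b2 = np.zeros(1)
                  self.lr = lr

              def forward(self, X):
                  self.h = sigmoid(X @ self.W1 + self.b1)
                  self.y = sigmoid(self.h @ self.W2 + self.b2)
                  return self.y                               # recognition score in (0, 1)

              def train_step(self, X, t):
                  # t is a column vector of desired outputs (1 = target, 0 = common object).
                  y = self.forward(X)
                  d2 = (y - t) * y * (1.0 - y)                        # output-layer delta
                  d1 = (d2 @ self.W2.T) * self.h * (1.0 - self.h)     # hidden-layer delta
                  self.W2 -= self.lr * self.h.T @ d2 / len(X)
                  self.b2 -= self.lr * d2.mean(axis=0)
                  self.W1 -= self.lr * X.T @ d1 / len(X)
                  self.b1 -= self.lr * d1.mean(axis=0)
                  return float(np.mean((y - t) ** 2))                 # training error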

      2. Hardware module

        The purpose of the hardware module is to implement the firing system of the weapon, both mechanically and electronically, allowing the chosen weapon to be aimed up and down and from right to left, in addition to being fired when required.

        1. Mechanical structure

          The structure must be rigid enough to withstand the weight of all the components and must not deform from the recoil when the weapon is fired. A metallic structure with wheels was chosen to reduce friction during movement.

          A wooden base of approximately 40 cm x 40 cm was chosen, which is leveled manually using bubble levels. It has a fixed half-inch axle around which the structure rotates, and it consists of one fixed support and three adjustable ones, so it can be leveled on any surface whose unevenness does not exceed 2 cm.

          It has a ½-inch vertical iron axle fastened to the wooden base; the entire structure rotates on this axle, which gives it stability and allows the movement from left to right. The axle has a threaded hole at the top to hold the joint piece on which the servomotor that provides the left-right movement is mounted; the hole takes a standard threaded screw.

          A joint piece for the lower servomotor couples the servomotor to the structure to transfer the left-right movement. It is a plastic piece, which could be improved to an aluminum one; plastic was chosen because the servomotor has a cross-shaped horn that fits into this piece. A lower servomotor mount attaches the motor to the structure, which is extremely important because it provides the support points to move the entire structure around the vertical axis. In this way the vertical axis does not move and everything else rotates around it.

          A bearing with a ½-inch bore on the vertical axle holds the whole structure to the axle with minimal friction; it was chosen to match the axle diameter to avoid wobble of the structure. Because of the size of the bearing, it cannot be welded directly to the structure, so a housing was made that can be welded to the structure, with a threaded hole to fasten it with a screw; the housing wraps the bearing, and the fastening screw only provides grip between the bearing and its housing.

          A spring on the vertical axle, added after the tests performed and located between the bottom of the bearing and the base surface, gives greater flexibility to the left-right movement of the structure.

          The lower structure is made of bar ½ inch wide and 1/8 inch thick, resistant to possible sudden movements and able to support the weight of the upper structures. It is designed to give four points of support to the upper structures, which gives it its cross shape with a circular center to house the bearing box. The ends of the cross have a "Y" termination for mounting wheels, which reduce friction between the frame and the base and are attached to the frame by millimetric countersunk screws with ½-inch nuts.

          Upper structure A is of the same material as the lower structure and is needed to hold the housing for upper bearing A of the horizontal axis; it is welded to the lower structure at two points for strength and firmness.

          Upper structure B has the same form as upper structure A, with holes to hold the servomotor that produces the up-down movement, and is used to fasten the housing for upper bearing B of the horizontal axis; it is welded to the lower structure at two points for greater strength and firmness. Structures A and B are separated by 7.5 cm.

          Bearings A and B have the same characteristics as the vertical-axis bearing; they hold the horizontal axle, avoiding play between the axle and the upper structure. The horizontal axle is 13 cm long, of the same material as the vertical axle, and is located between bearings A and B.

          A threaded hole at the end facing bearing B holds the joint for the upper servomotor; it is located approximately 1 cm from bearing B and 3 cm from bearing A. Two threaded holes 0.8 cm apart, halfway between bearings A and B, are needed to mount the structure that holds the gun.

          The weapon-holding structure is of the same material as the previous structures and is divided into two parts: part 1 is molded to the chosen weapon, which rests on it; part 2 surrounds the front and rear of the gun, the front part with two screws that provide calibration and the rear with a screw and millimetric nut running from side to side of the structure, which provides greater stability to the weapon. The mechanical structure described above can be seen assembled in Figure 2.

          Fig. 2. Mechanical structure used in the prototype of the system.

        2. Structure for activating the trigger

          To activate the trigger, a human finger was imitated, pulled by a pneumatic cylinder with a 2.5 cm stroke. It should be noted that this was a very difficult task, as several options were built and discarded, such as activating the trigger with the solenoids used by soccer robots and their driving electronics, discarded because of the little force they provided. Another discarded option was the motor used in car door locks, because it also did not provide enough force.

          A bubble level was mounted on the gun for laser calibration; the camera is mounted on top of the sight, so a piece of metal protrudes. The laser is switched on through an arrangement in which only a screw is turned, so its calibration is not altered when it is turned on. It is important to note that the laser is not left on all the time, since an internal resistor heats up and, if the laser is not protected, can damage it.

          The weapon is secured by four main screws: one at the back that crosses from side to side, two more on the sides that serve to orient the weapon more precisely, and one more at the top that pushes the weapon down, preventing it from moving and keeping it seated on the mold that holds it.

          In the weapon-holding structure (profile), to change weapons only part 1 and part 2 need to be changed, leaving everything else in the structure unmodified; this was considered in case the mechanism is to be implemented with another weapon. Part 1 is molded for our weapon, but these pieces can be made for other weapons as long as the space for the two fasteners is respected. The structure for the trigger activator described above can be seen assembled in Figure 3.

          Fig. 3. Structure for the trigger activator used in the prototype of the system.

        3. Electronic control

          The electronics used to control the hardware consists mainly of a PIC18F4550 microcontroller, which is responsible for moving the motors using PWM (Pulse Width Modulation) and for communicating with the computer via USB. It also allows the structure to be controlled manually by means of physical switches, which send a signal to the microcontroller to orient the weapon to the requested point.

          It is also intended to receive information on the servomotor position at any time; one alternative is to take the information from the internal board that controls the servomotor, another is to use 8-bit incremental encoders multiplexed to take turns sending the information to the processor.

          Several options were discarded for trigger activation. One was activation by solenoids and lever arrangements, with a control circuit that charged a capacitor and discharged it through the solenoid. Another discarded option was the use of the electric motors from car door locks, since they did not provide enough pull for this application.

          It was decided to use a pneumatic cylinder controlled by a Festo valve that activates with 24 volts and pulls the trigger without problems; the valve receives compressed air from a tank and supplies it to the cylinder.

          For horizontal movement, a servomotor with 5 kg of torque is used, enough to easily move the entire structure from left to right, since the whole structure weighs approximately 4 kg. For vertical movement, a motor with 5 kg of torque is used because the gun and the components it must carry weigh about 2 kg; a mechanism is also planned to hold the weapon so that it does not move when fired, which will require more torque to move this part of the structure.

          The servomotor that pulls the trigger cord is a modified servomotor from which the control circuit has been removed so that it can rotate freely, without the angular limits of standard servomotors. It has a pulley on which the cord that pulls the trigger is wound. To power the motors, an independent 5-volt source is used, because the microcontroller is powered over USB and the current USB provides is not enough for the three motors. The standard is that servomotors work at 50 Hz with a pulse width that varies from 0.7 ms to 2.3 ms (depending on the manufacturer) but never exceeds 2.5 ms; since a 50 Hz signal corresponds to a 20 ms period, up to 8 servomotors could be modulated within one period.

          Because each servomotor is modulated during a 2.5 ms slot and must wait seven more 2.5 ms slots before its next pulse, those seven slots could serve another seven servomotors if they were modulated one after another; repeating the process every period, each servomotor receives its modulation every 20 ms, complying with the standard.

          The PIC works at very high frequencies; its timer is 16 bits and can use a prescaler, which helps reduce the frequency to the range needed for modulating the servomotor pulses.
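          To illustrate the timing just described, the following Python sketch computes the pulse widths and timer counts involved; the 48 MHz oscillator, Fosc/4 instruction clock and 1:8 prescaler are assumptions typical of a USB-capable PIC18F4550 setup, not figures stated in this work.

          # Assumed clock configuration (not from the paper).
          F_OSC = 48_000_000            # oscillator frequency, Hz
          F_CY = F_OSC / 4              # PIC18 instruction clock
          PRESCALER = 8                 # assumed timer prescaler

          def pulse_width_ms(angle_deg, lo=0.7, hi=2.3):
              # Map a servo angle (0-180 degrees) onto the 0.7-2.3 ms pulse range.
              angle = max(0.0, min(angle_deg, 180.0))
              return lo + (hi - lo) * angle / 180.0

          def timer_counts(ms):
              # Timer ticks needed to time a pulse of 'ms' milliseconds.
              return round(ms * 1e-3 * F_CY / PRESCALER)

          # Eight 2.5 ms slots fill the 20 ms (50 Hz) frame, one per servomotor.
          slot_starts = [n * 2.5 for n in range(8)]
          print(pulse_width_ms(90.0), timer_counts(2.5), slot_starts)
          # -> 1.5 ms centre pulse and 3750 counts per 2.5 ms slot (fits in 16 bits)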

          The board shown in Figure 4 mounts the PIC and keeps it fixed, so that everything necessary for its operation (power and the inputs and outputs of the ports in use) stays in place. The important thing is that everything is fixed, which gives more protection to the PIC and avoids false contacts. The system handles three power supplies: the one provided by the USB port, a 5-volt source independent of the USB port, and a third one of 24 volts to activate the pneumatic valve that controls the flow of compressed air entering the pneumatic cylinder. All the supplies are distributed on the printed circuit of the acquisition board to feed the different devices (they pass through the copper tracks, and every solder joint makes the board more robust against false contacts).

          Fig. 4. Electronic circuit used in the prototype of the system.

    3. EXPERIMENTATION AND RESULTS

Carrying out the tests for this work requires a high degree of safety, because a pellet gun propelled by compressed gas is being handled; when a new gas tank is used, the velocity of the pellet can be enough to perforate the skin and cause injury. It is most advisable to wear protective equipment and to always keep the safety on the gun.

Another aspect to take into account is that the pellets bounce back almost in a straight line because of the material they are made of, non-deformable steel. To solve this problem, a blanket is used, which absorbs the pellet's energy so that it no longer rebounds.

To visualize where each pellet hits, plasticine was chosen after discarding paper and cardboard, which break at each impact. With Styrofoam it is also not possible to see clearly where each pellet embeds itself, because the hole seals again owing to the speed of the pellet.

Due to the limitations in the size of the controlled environment, it was decided to run the tests at night to eliminate sunlight and to have illumination only in front of the controlled environment; a background as dark as possible was set up to contrast the objects.

To calibrate the laser, the bubble levels installed on the structure were used, together with a water level hose, to check whether at a distance of 4 meters the projectile hit the marked height. This technique allowed the laser to be calibrated at the corresponding height; accuracy on the x-axis was obtained from laser firing tests and readjustments until the error was minimal.

Once the laser has been calibrated, the camera is calibrated. This is done by placing a figure in the controlled environment; the system draws on the video both the center of the camera and the calculated centroid of the object. This helps to calibrate the center of the camera against the centroid of the object, since aligning them reduces the distance between the camera center and the object centroid to zero.

This is shown in the part of the system where the frames are displayed, but in the controlled environment where the object is placed the laser must also be pointing at the center of the object; otherwise the camera is readjusted until the two are centered. For this it is advisable to leave the structure fixed, preferably at the center, move the object until the laser points at its centroid, and then calibrate the camera until the camera center coincides with the centroid of the object.
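The quantity driven to zero in this calibration step is the distance between the camera center and the object centroid; a minimal sketch of how it can be computed from a binary mask of the segmented object is shown below (illustrative only, not the prototype's code).

    import numpy as np

    def centroid_offset(mask, frame_shape):
        # Offset (dx, dy) in pixels between the image centre and the
        # centroid of the segmented object; zero when the two coincide.
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None                      # no object found in the frame
        cx, cy = xs.mean(), ys.mean()        # object centroid
        h, w = frame_shape[0], frame_shape[1]
        return cx - w / 2.0, cy - h / 2.0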

After calibrating the entire system prototype, tests are performed to calculate the error of the weapon in each shot. Shots were fired at a plasticine figure in order to take the measurements corresponding to each shot, since the projectiles remain embedded in it. The criteria under which the tests were performed were as follows:

      • A shooting distance, in this case of 4 meters, was set.

      • The structure is fixed so that the laser points to the centroid of the figure.

      • Three series of 5 shots each were fired.

TABLE II. System prototype test results.

Shot      1st series   2nd series   3rd series
1         4.22 cm      5.12 cm      4.75 cm
2         3.53 cm      3.53 cm      3.33 cm
3         4.25 cm      4.47 cm      4.68 cm
4         3.54 cm      4.22 cm      3.82 cm
5         3.03 cm      4.00 cm      3.24 cm
Average   3.71 cm      4.26 cm      4.16 cm

The results are summarized in Table II. With the results obtained, the size of the objects can be bounded: in this case it should be at least twice the overall average of the measured errors, to ensure that the target is hit at a distance of 4 meters. The object size therefore remains at least 8.5 centimeters after rounding.

ACKNOWLEDGMENT

The authors would like to thank the Instituto Politécnico Nacional (Secretaría Académica, EDD, COFAA, SIP and ESCOM) for their financial support for the development of this work.

REFERENCES

  1. L. M. Novak, G. J. Owirka, W. S. Brower, A. L. Weaver, "The Automatic Target-Recognition System in SAIP," The Lincoln Laboratory Journal, vol. 10, no. 2, pp. 187-193, 1997.

  2. Sánchez, Aplicaciones en la visión artificial y la biometría informática, Universidad Rey Juan Carlos, 1st ed., Librería Editorial Dykinson, 2005. ISBN: 849772660X.

  3. H. Zhu, J. Lel, X. Tian, "A pattern recognition system based on computer vision – The method of Chinese chess recognition," IEEE International Conference on Granular Computing, China, 2008.

  4. F. Tao, F. Jianan, L. Qizhen, "Image Segmentation Based on Histogram Simulation by Use of Trigonometric Series," Journal of Image and Graphics, no. 10, pp. 38-41, 2007.

  5. González, Procesamiento Digital de Imágenes, Editorial Addison-Wesley/Díaz de Santos, 2007. ISBN: 013168728X.

  6. García, Detección y clasificación de objetos dentro de un salón de clases empleando técnicas de procesamiento digital de imágenes, M.S. Thesis, Universidad Nacional Autónoma de México, Mexico, 2008.

  7. J. Choe, K. Lee, C. Lee, "No-reference video quality measurement using neural networks," IEEE 16th International Conference on Digital Signal Processing, Greece, 2009.
