Mobile Robot Self-Planning and Navigation Based on Artificial Landmark Localization Method and Binocular Stereo Vision

DOI : 10.17577/IJERTV1IS5136


T. VENKATESWARA RAO #1, GOPICHAND NELAPATI *2

#1 M.Tech., Embedded Systems, Department of ECE, Nalanda Institute of Engineering and Technology, Guntur Dt., AP, India

*2 Associate Professor, Department of ECE, Nalanda Institute of Engineering and Technology, Sattenapalli, AP, India

Abstract: This paper describes the design of a colour-model-based artificial landmark used for the self-planning of a mobile robot moving through an unstructured environment. The method gives a smaller estimation error than existing methods. The work investigates a system that visually detects artificial landmarks, determines which landmarks are present in a location, computes their position within that location, and tracks them throughout the location using binocular stereo vision.

Keywords: Artificial landmark, colour model, localization method

  1. INTRODUCTION

    The motivation for this project is that the widely used Global Positioning System (GPS) does not work when a receiver is unable to obtain a satellite signal. GPS is therefore impractical for tracking a unit's location indoors, throughout a building. There are, of course, many ways in which an object could be located and tracked indoors. A local system that mimics GPS is not entirely unfeasible, but it has serious drawbacks: one is cost, and another is the difficulty of timing a signal travelling at the speed of light over short distances (roughly 3 ns per metre). Visually locating and tracking a passive landmark, such as an image, is comparatively simple and inexpensive. A camera can capture an image of a location, which can then be processed to decide whether the landmark is present and, if so, to determine the landmark's position within that location.

    Natural landmarks are generally static and form part of the surroundings. An obvious example is a mountain, although an object such as a plug socket can also serve as a natural landmark and be used as a reference point if its location is known. Artificial landmarks, also known as fiducial markers, can be placed at static locations and used as signposts indicating, for example, a specific point of interest. That point of interest may change or become uninteresting, and the marker may then be removed or repositioned. Artificial landmarks can also be used to tag objects and give them an identity; a barcode is an example of such a tag. It is a visual representation of a numerical code that typically identifies the object to which it is attached.

  2. ARTIFICIAL LANDMARK DESIGN

    A landmark is a localized physical feature that a robot can sense and use to estimate its own position relative to a known map or reference frame. Considering accuracy, real-time performance, and scalability in complex environments, the artificial landmark is designed as shown in Fig. 1. The landmark is composed of four parts: a red rectangular border, a blue background, a scalable pattern, and a direction prompt (the triangle in the centre). Red is chosen for the rectangular border because little red appears in typical indoor environments, red rectangular borders are rarer still, and red has high purity, low lightness, and small scattering intensity. The strong contrast between the blue background and the white scalable pattern helps in recognizing the gap pattern of the landmark. A white triangular direction prompt in the centre indicates the orientation of the landmark and avoids recognition errors caused by reading the landmark in the wrong direction. Position information is obtained from landmarks with different codes, generated from the gap pattern of the inner concentric circles (1 for a gap, 0 for no gap). With 8 gap positions there are at most 2^8 = 256 distinct codes. Depending on the working environment, the number of concentric circles can be increased or reduced to meet the requirement. (The landmarks used in the experiments are square, 18 cm on a side; the red border is 1 cm wide, and the inner and outer rings are also 1 cm wide.)

    Figure 1. Artificial landmark
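    Each landmark therefore carries an 8-bit identifier read from its gap pattern. The following minimal sketch (function names and bit order are assumptions, not taken from the paper) shows how such a pattern maps to one of the 256 possible codes and back:

```python
def encode_landmark(gaps):
    """gaps: eight flags read clockwise starting at the direction prompt
    (ordering assumed); 1 = gap present, 0 = no gap."""
    code = 0
    for bit in gaps:
        code = (code << 1) | int(bool(bit))
    return code                      # 0..255, i.e. 2**8 = 256 landmark IDs


def decode_landmark(code):
    """Recover the eight gap flags from a landmark code."""
    return [(code >> (7 - k)) & 1 for k in range(8)]


# Example: one particular gap pattern encodes to a single byte.
assert encode_landmark([1, 0, 1, 1, 0, 0, 1, 0]) == 0b10110010
assert decode_landmark(0b10110010) == [1, 0, 1, 1, 0, 0, 1, 0]
```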

  3. LANDMARK DETECTION AND RECOGNITION

    1. Landmark detection

      According to the features of the red rectangular border of the artificial landmarks analysed in Section 2, the landmarks can be detected by colour segmentation followed by contour-structure checks. The colour segmentation is based on a transformation from the RGB colour model to the HSI colour model.

      Figure 2. (a) Original image; (b) binary image after colour segmentation; (c) landmark detection result
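      The paper does not list its HSI thresholds, so the sketch below uses the standard RGB-to-HSI conversion with placeholder hue and saturation limits (h_tol and s_min are assumed values) to produce a binary mask of red-border candidates:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """RGB image with float values in [0, 1] -> (H in degrees, S, I)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-6)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-6
    h = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, h, 360.0 - h)       # hue measured from red (0 deg)
    return h, s, i

def red_border_mask(rgb, h_tol=20.0, s_min=0.4):
    """Binary mask of strongly saturated pixels whose hue is near 0 degrees
    (red); the thresholds are placeholders, not the paper's values."""
    h, s, _ = rgb_to_hsi(rgb)
    return ((h <= h_tol) | (h >= 360.0 - h_tol)) & (s >= s_min)
```

      In practice the resulting mask would be cleaned (for example with morphological filtering) before extracting the rectangular contour, as in Fig. 2(b) and 2(c).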

    2. Landmark recognition

    The steps involved in landmark recognition are summarized in Figure 3; the individual steps are described below.

    Figure 3. Schematic of landmark recognition

  4. LANDMARK LOCALIZATION METHOD AND BINOCULAR STEREO VISION

    1. Landmark Localization Method

      Approach: Figure 4 shows the experimental setup. Two poles (used as artificial landmarks) were placed at coordinates (4000, 2000) and (4000, -1000), respectively, with reference to the robot's global coordinate frame.

      The four vertices of the landmark in the captured image are obtained by linear approximation, as shown in Fig. 3(a); the white '+' marks denote the four vertices.

      Using the affine-invariance principle and the four located vertices, the centre of the landmark is located, as shown in Fig. 3(b); the red '+' marks the centre (a short sketch of this step is given below).

      Based on the invariance of the cross-ratio, the 8 gap positions of the landmark are located in the image from their positional relationships and the actual size of the landmark; as shown in Fig. 3(c) and 3(d), the green '+' marks denote the gap positions.
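      As a rough illustration of the centre-location step, the centre of the square landmark can be taken as the intersection of the two diagonals defined by the four detected vertices; the function below is a sketch under that assumption (vertex ordering p0..p3 around the border is assumed):

```python
import numpy as np

def landmark_centre(p0, p1, p2, p3):
    """Centre of the landmark as the intersection of the two diagonals of
    the quadrilateral p0-p1-p2-p3 (vertices given in order around the
    border). The centre of the square lies on both diagonals, and straight
    lines remain straight under the camera projection, so the diagonals'
    intersection in the image locates the projected centre."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    d1, d2 = p2 - p0, p3 - p1                 # diagonal direction vectors
    # Solve p0 + t*d1 = p1 + s*d2 for t (2x2 linear system).
    t, _ = np.linalg.solve(np.column_stack((d1, -d2)), p1 - p0)
    return p0 + t * d1
```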

      As shown in Fig. 1, the pixel values of the two concentric circles are much higher than those of the background (the circles are white and the background is blue), so the gap state can be detected by comparing pixel values. However, because the robot moves, both the size of the landmark and the illumination of the landmark image vary.

      To guarantee the accuracy of the gap judgment, a gap recognition method based on a self-adaptive window is proposed. As shown in Fig. 3(e), the gap state is determined from the proportion of low-valued pixels in the window. The self-adaptive window adjusts its size according to the detected landmark region, which removes the influence of inaccurate gap localization.
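      A minimal sketch of such a window test is given below; the window half-size, the darkness threshold and the decision fraction are assumed parameters that would in practice be scaled from the detected landmark region:

```python
import numpy as np

def gap_present(gray, cx, cy, half, dark_fraction=0.5, dark_thresh=100):
    """Decide whether the white ring is interrupted (a gap) at image
    point (cx, cy).

    A square window of side 2*half + 1 is cut around the predicted gap
    position; 'half' would be scaled from the detected landmark region so
    the test adapts to viewing distance. If most pixels in the window are
    dark (blue background instead of the white ring), a gap is reported.
    dark_fraction and dark_thresh are assumed parameters.
    """
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    window = gray[y0:cy + half + 1, x0:cx + half + 1]
    return np.mean(window < dark_thresh) >= dark_fraction
```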

      Figure 4. Test Setup for the Landmark Localization Method

      Figure 5. The Artificial Landmark Localization Method

      The separation of the landmarks, denoted d, was set to 3 m in the tests. The distance d can be varied depending on the available workspace and the environment in which the robot operates. The distances between the landmarks and the centre of the LRF were defined as ra and rb for Pole A and Pole B, respectively, and a and b were defined as the angles between the X1 axis of the local coordinate frame and the lines of sight to the poles. In Figure 5(a), a was defined as a negative angle and b as a positive angle: objects detected to the left of the axis were assigned positive angles and those to the right negative angles. The variables ra, rb, a and b were all known from the LRF data. To compute the position of the LRF, (x2, y2), with respect to the poles, equations were derived from these quantities and used in the landmark localization algorithms.
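      The derived equations do not survive in this text version. As a hedged substitute, the sketch below implements one standard range-and-bearing triangulation using the same quantities (known pole positions, the ranges ra and rb, and the signed bearings); it is not necessarily the exact formulation used in the paper:

```python
import numpy as np

def locate_lrf(pa, pb, ra, rb, alpha, beta):
    """Position (x2, y2) and heading of the LRF from the two poles.

    pa, pb       known global (x, y) of Pole A and Pole B
    ra, rb       measured ranges to the poles
    alpha, beta  bearings of the poles from the local X1 axis, in radians
                 (left of the axis positive, right negative)
    """
    pa, pb = np.asarray(pa, dtype=float), np.asarray(pb, dtype=float)
    d = np.linalg.norm(pb - pa)                    # pole separation (3 m here)
    u = (pb - pa) / d                              # unit vector from A to B
    v = np.array([-u[1], u[0]])                    # unit normal to that line
    # Intersection of the two range circles centred on the poles.
    along = (ra ** 2 - rb ** 2 + d ** 2) / (2.0 * d)
    off = np.sqrt(max(ra ** 2 - along ** 2, 0.0))
    candidates = [pa + along * u + sign * off * v for sign in (1.0, -1.0)]
    # The measured bearings pick the correct intersection and give heading:
    # global bearing to Pole A = heading + alpha.
    best = None
    for p in candidates:
        heading = np.arctan2(pa[1] - p[1], pa[0] - p[0]) - alpha
        pred_beta = np.arctan2(pb[1] - p[1], pb[0] - p[0]) - heading
        diff = pred_beta - beta
        err = abs(np.arctan2(np.sin(diff), np.cos(diff)))   # wrapped error
        if best is None or err < best[0]:
            best = (err, p, heading)
    _, p, heading = best
    return p[0], p[1], heading
```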

      Figure 5(b) shows the position of the LRF relative to the robot's local coordinate frame. The centre of the LRF was located at (160 mm, 7 mm) with respect to the local coordinate frame of the robot.

      The position of the centre of the LRF was subsequently transformed to the robot's origin to obtain the position of the robot in the global coordinate frame.
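      Assuming a heading estimate such as the one above, that transformation can be written as a simple rigid-body shift of the (160 mm, 7 mm) LRF offset; the snippet below is an illustrative sketch, not the paper's exact procedure:

```python
import numpy as np

LRF_OFFSET = np.array([160.0, 7.0])      # LRF centre in the robot frame (mm)

def robot_origin_global(lrf_xy, heading):
    """Global position of the robot origin given the global LRF position
    and the robot heading (radians). The LRF offset is rotated into the
    global frame and subtracted (a minimal sketch under these assumptions)."""
    c, s = np.cos(heading), np.sin(heading)
    rot = np.array([[c, -s], [s, c]])
    return np.asarray(lrf_xy, dtype=float) - rot @ LRF_OFFSET
```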

    2. Binocular Stereo Vision

    Binocular stereo vision theory is based on research into the human visual system. Given a pair of corresponding points in the images captured by the two vision sensors, the three-dimensional world coordinates of that point can be calculated. When the same landmark is detected in both images collected by the stereo vision system, the distance between the robot and the landmark can be computed by matching the corresponding points of the landmark.
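    For rectified, parallel cameras the distance follows from the usual disparity relation Z = f * B / d; the helper below sketches this computation, with the focal length and baseline treated as placeholder calibration values (the paper's calibration data are not given here):

```python
def landmark_distance(focal_px, baseline_mm, x_left, x_right):
    """Depth of a matched landmark point from a rectified stereo pair,
    Z = f * B / d, where the disparity d = x_left - x_right is in pixels.
    focal_px and baseline_mm stand in for the (unstated) calibration."""
    disparity = float(x_left - x_right)
    if disparity <= 0.0:
        raise ValueError("non-positive disparity: bad match or distant point")
    return focal_px * baseline_mm / disparity
```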

    Figure 6. (a) Equipment of stereo vision (b) mobile robot with stereo vision

  5. RESULTS AND DISCUSSION

    The robot was driven to specific locations, and the landmark localization system readings were recorded and analysed.

    Figure 7. Results from the Artificial Landmark Localization Method (units in mm)

    Figure 8. Experiment results of robot self-location

    The percentage errors between the actual and measured positions were computed. The measured position refers to the wheel-encoder measurement and to the combined wheel-encoder and gyroscope measurement. The percentage error was computed as follows:
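    The formula itself does not survive in this text version; a plausible per-axis form, assumed rather than taken from the paper, is:

```python
def percentage_error(measured, actual):
    """|measured - actual| / |actual| * 100, applied per axis (X and Y).
    One plausible reading of the metric; the paper's own formula is not
    shown in this text version."""
    return abs(measured - actual) / abs(actual) * 100.0
```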

    TABLE 1. SUMMARY OF PERCENTAGE ERRORS

    Position estimation method            Percentage error (X)    Percentage error (Y)
    Artificial landmark localization      1.95                    1.93

    TABLE 2. RECOGNITION RESULTS OF THE ARTIFICIAL LANDMARKS USING BINOCULAR STEREO VISION

    Sum of images    Correctly recognized    Recognition accuracy    Time per image
    400              331                     82.75%                  93 ms

  6. CONCLUSION

A self-localization method for a mobile robot based on artificial landmarks and stereo vision is proposed. First, a new artificial landmark model with strong anti-interference properties and easy recognizability is designed; the artificial landmarks are then combined with stereo vision to localize the indoor mobile robot. Stereo vision overcomes the limitation of omni-directional vision, which cannot recognize landmarks far from the vision system, and the limitation of monocular vision, which cannot compute the distance between the landmark and the robot. Meanwhile, the artificial landmarks help to reduce the complexity of stereo matching. The experimental results show that the landmark detection method is robust to lighting and viewing angle, runs in real time, and meets the requirements of self-localization for indoor mobile robots.


ACKNOWLEDGMENT

The authors thank the management of Nalanda Institute of Engineering and Technology, the HOD of ECE, and the teaching staff for their support of this work.

