Comparison Between 2D and 3D Mapping For Indoor Environments

DOI : 10.17577/IJERTV2IS100488




Dr. Wael R. Abdulmajeed*, Revan Zuhair Mansoor**

Mechatronics Engineering Department / Al-Khwarizmi College of Engineering, University of Baghdad

Abstract

This paper describes two ways of mapping. The first uses sonar and produces 2D maps; the second uses a Kinect sensor and produces 3D maps. The work involves manual navigation, operating a mobile robot equipped with a sonar array or a Kinect sensor for mapping. The aim of this paper is to compare the mapping results obtained with the Kinect sensor against those obtained with sonar.

A Pioneer 3-DX robot, which carries sonar sensors, is used in this project. The programs used for this robot are the Advanced Robotics Interface for Applications (ARIA), programmed in C++ (Visual C++ .NET), and the ArNetworking software, which sets up a wireless TCP/IP Ethernet-to-serial connection between the robot and the PC.

The programs used for the Kinect sensor are OpenNI/NITE, which make it work with the PC; we also used the Skanect software for building a 3D map of the environment.

Keywords: Pioneer 3-DX (mobile robot), sonar, Kinect sensor.

  1. Introduction

The objective of this research is to review two different ways of mapping with inexpensive sensors and to compare their results.

A good map is necessary to compute the robot's position; on the other hand, only an accurate position estimate yields a correct map.

Because mapping is an essential step in many projects, many of the approaches used are simple heuristics that successively search for face-connected cells or configurations, with or without minimizing a criterion. Some of the literature deals with building 2D maps using sonar, such as the work of Dirk Bank et al. [2], which presents a novel approach to high-resolution ultrasonic environment imaging for autonomous mobile systems. The main contribution of that research is a new algorithm, called tangential regression, developed to derive local environment models from ultrasonic sensor data and to improve the perception of objects with sonar sensors in environments consisting of various materials and surfaces that are partly difficult to detect.

Other works use modern techniques for building 3D maps, such as that of Shahram Izadi et al. [3], which presents KinectFusion, a real-time 3D reconstruction and interaction system using a moving standard Kinect. The contributions are threefold. First, the paper details a novel GPU pipeline that achieves 3D tracking, reconstruction, segmentation, rendering, and interaction, all in real time using only a commodity camera and graphics hardware. Second, it demonstrates novel uses for the system, such as low-cost object scanning and advanced AR and physics-based interactions. Third, it describes new methods for segmenting, tracking, and reconstructing dynamic users and the background scene simultaneously, enabling multi-touch on any indoor scene with arbitrary surface geometries. The system allows a user to pick up a standard Kinect camera and move rapidly within a room to reconstruct a high-quality, geometrically precise 3D model of the scene. To achieve this, the system continually tracks the 6-DOF pose of the camera and fuses live depth data from the camera into a single global 3D model in real time.

    Figure (1) KinectFusion enables real-time detailed 3D reconstructions of indoor scenes

Peter Henry et al. [4] investigate how RGB-D cameras can be used for building dense 3D maps of indoor environments. That paper performs several experiments to evaluate different aspects of RGB-D Mapping, demonstrates the system's ability to build consistent maps of large-scale indoor environments, shows that the RGB-D ICP algorithm improves the accuracy of frame-to-frame alignment, and illustrates the advantageous properties of the surfel ("surface element") representation. K. Khoshelham [5] presents a theoretical and experimental accuracy analysis of depth data acquired by the Kinect sensor, giving a mathematical model for obtaining 3D object coordinates from the raw image measurements and discussing the calibration parameters involved in the model. Further, a theoretical random error model is derived and verified by experiment.

The Kinect sensor is a new, low-cost sensor that provides depth information for every RGB pixel acquired. Combining this information, it is possible to develop 3D perception of an indoor environment.

Pioneer 3-DX is the mobile robot used in our work, and it carries sonar sensors. The data from a sonar sensor is the distance between the sonar and an object. The Pioneer robot contains a WiBox, an Ethernet-serial bridge that converts an RS-232 serial connection into a TCP/IP connection available through an 802.11 (Wi-Fi) wireless network. It is mounted on the robot, connected to the robot microcontroller's serial port, configured to join a given wireless network as a client station, and assigned an IP address. Software may then connect to a TCP port at that address and send and receive data to and from the robot over the serial port connection. [1]
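As an illustration of this setup, the following minimal sketch (ours, not the authors' exact code) opens the robot connection through the WiBox TCP port with ARIA; the IP address is a placeholder for whatever address the WiBox was given:

    #include "Aria.h"

    int main(int argc, char **argv)
    {
        Aria::init();
        ArRobot robot;
        ArTcpConnection conn;

        // The WiBox exposes the robot's serial port on a TCP port
        // (8101 is the conventional default; adjust to your network).
        conn.setPort("192.168.1.11", 8101);
        if (conn.open() != 0) {
            ArLog::log(ArLog::Terse, "Could not open TCP connection to the robot");
            Aria::exit(1);
        }

        robot.setDeviceConnection(&conn);
        if (!robot.blockingConnect()) {
            ArLog::log(ArLog::Terse, "Could not connect to the robot");
            Aria::exit(1);
        }

        robot.runAsync(true);   // start the robot processing cycle in its own thread
        robot.enableMotors();   // allow driving commands (manual navigation)
        // ... teleoperation and mapping code goes here ...
        robot.waitForRunExit();
        Aria::exit(0);
        return 0;
    }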

  2. Pioneer sonar

The robot supports up to eight transducers that provide object detection and range information for collision avoidance. The sonar positions on all Pioneer robots are fixed: one on each side, and six facing outward at 20-degree intervals. Together, the sonar array provides 180 degrees of nearly seamless sensing for the platform. The array's transducers are multiplexed: only one disc per array is active at a time, but all four arrays fire one transducer simultaneously. The sonar ranging acquisition rate is adjustable, normally set to 40 milliseconds per transducer. Sensitivity ranges from 10 centimeters (six inches) to five meters, depending on the ranging selected. The sonar firing pattern is controlled through software; the default is left-to-right in sequence for each array. [6]

Figure (2): arrangement of the sonars on the Pioneer robot. [6]
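A minimal sketch of reading the eight ranges through ARIA, assuming a connected ArRobot as in the sketch above (getSonarRange() is the standard ARIA accessor and returns millimeters):

    // Poll the eight sonar transducers; lock the robot object because
    // the processing cycle runs in its own thread (runAsync).
    robot.lock();
    for (int i = 0; i < 8; i++) {
        int range = robot.getSonarRange(i);  // mm from sonar i to the echo
        printf("sonar %d: %d mm\n", i, range);
    }
    robot.unlock();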

  3. Microsoft Kinect

The Microsoft Kinect is a special RGB-D camera created for Microsoft's Xbox 360, to be used as a controller substitute and an extra input device for specific games that exploit the Kinect. But because this sensor has a normal USB connector and offers depth data at a cheap unit price, people became interested in making the Kinect available to PC users by writing custom drivers. The Kinect is able to grab RGB images of 640×480 pixels at 8-bit depth with a Bayer color filter, and IR images of 640×480 pixels at 11-bit depth. It has a frame rate of 30 Hz and an angular field of view of 57 degrees horizontally and 43 degrees vertically. It needs its own power source in addition to the USB connector, which is provided with the stand-alone kit of the Kinect. The base of the Kinect houses an electric motor that allows the Kinect to tilt. Furthermore, a multi-array microphone is built into the sides of the Kinect, and it also has a three-axis accelerometer. [7]

The depth acquisition technology, which the company PrimeSense has patented, is named Light Coding. It has an IR pattern source: a single transparency with a fixed pattern, combined with an IR light source, projects a complex pattern of light dots, shown in figure (3), onto an object. The IR camera takes images of the object illuminated with this pattern, and the image data is then processed to reconstruct a three-dimensional model of the object using knowledge of the IR light pattern. [7]

Figure (3): how the Kinect sensor captures depth and color images of the environment

Figure (4): PrimeSense Light Coding technology, technical overview
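To make the geometry concrete, the following sketch (ours, not from the paper) back-projects one depth pixel to a 3D point using approximate focal lengths derived from the 57° × 43° field of view quoted above; a real application should use calibrated intrinsics instead:

    #include <cmath>

    struct Point3D { float x, y, z; };

    // Back-project a depth pixel (u, v) with depth z (meters) into the
    // camera frame. fx, fy are approximated from the field of view over
    // a 640 x 480 image; the principal point is assumed at the center.
    Point3D depthToPoint(int u, int v, float z)
    {
        const float fx = 320.0f / std::tan(57.0f / 2.0f * 3.14159265f / 180.0f); // ~589 px
        const float fy = 240.0f / std::tan(43.0f / 2.0f * 3.14159265f / 180.0f); // ~609 px
        const float cx = 319.5f, cy = 239.5f;

        Point3D p;
        p.x = (u - cx) * z / fx;
        p.y = (v - cy) * z / fy;
        p.z = z;
        return p;
    }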

4. 3D Mapping

As the Kinect provides RGB-D data at a cheap unit cost, various people have started to create drivers for the PC. The goal of this work is to create a mapping solution using the PC, but there are also other applications for the Kinect. Therefore this paper covers the drivers currently available and other possible applications of the Kinect with a PC for creating a 3D map.

4.1 OpenNI/NITE

OpenNI is an industry-led, non-profit organization formed to certify and promote the compatibility and interoperability of Natural Interaction (NI) devices, applications, and middleware. Natural Interaction devices are devices that allow us to interact with electronics the way we interact with humans, for example using speech and gestures. Devices that fall under this category are cameras of any kind and microphones.

NITE is middleware developed by PrimeSense, who hold the patent behind the technology implemented in the Kinect. The NITE engine has algorithms for user identification, feature detection, and gesture recognition, as well as a framework that manages the tagging of users in the scene and the acquisition and release of control between users. [7]

It offers C++, C#, and Flash APIs for Linux and Windows that allow access to RGB and depth-derived data, such as full-body analysis, hand-point analysis, gesture analysis, and scene analysis (detection of the floor plane, background, foreground, and people recognition and labeling). It is not clear whether direct access to the raw RGB and depth data is possible with this software. [7]
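As a minimal sketch of the C++ API (using the OpenNI 1.x wrapper; error handling trimmed), a depth frame can be grabbed as follows:

    #include <XnCppWrapper.h>
    #include <cstdio>

    int main()
    {
        xn::Context context;
        if (context.Init() != XN_STATUS_OK) return 1;

        xn::DepthGenerator depth;
        if (depth.Create(context) != XN_STATUS_OK) return 1;

        context.StartGeneratingAll();
        context.WaitOneUpdateAll(depth);   // block until a new depth frame arrives

        xn::DepthMetaData md;
        depth.GetMetaData(md);
        // md(x, y) is the depth in millimeters at pixel (x, y)
        printf("depth at image center: %d mm\n", (int)md(md.XRes() / 2, md.YRes() / 2));

        context.Release();
        return 0;
    }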

4.2 Skanect software for 3D mapping

Skanect is a software program for Windows that can capture a full-color 3D model of an object. Skanect transforms a Microsoft Kinect or Asus Xtion camera into an ultra-low-cost scanner able to create 3D meshes out of real scenes in a few minutes.

  5. 2D Mapping

Two different approaches to the mobile robot localization problem exist: relative and absolute. The first is based on data provided by sensors measuring the dynamics of variables internal to the vehicle, such as the encoders we use to determine the position and orientation of our robot in the plane. Absolute localization requires sensors measuring some parameters of the environment in which the robot is operating. If the environment is only partially known, the construction of appropriate ambient maps is also required.
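As a sketch of the relative approach, the encoder counts are integrated into a pose estimate; assuming a travelled distance Δd and heading change Δθ per sampling interval, a standard dead-reckoning update is:

X(t+1) = X(t) + Δd × cos( θ(t) + Δθ/2 )

Y(t+1) = Y(t) + Δd × sin( θ(t) + Δθ/2 )

θ(t+1) = θ(t) + Δθ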

    SLAM in the mobile robotics community generally refers to the process of creating geometrically consistent maps of the environment. Topological maps are a method of environment representation which captures the connectivity (i.e., topology) of the environment rather than creating a geometrically accurate map.

The Pioneer 3-DX robot is equipped with eight sonars that provide object detection and range information for collision avoidance. This work uses the Pioneer robot's sonars to build a map of an indoor environment.

Figure (5): dimensions of the sonar beam.

    Figure (6): center of mobile robot with sonar beam

r is the distance between the robot and the wall.

b_6 is the angle between the range line of sonar 6 and the x axis.

Cx_6 is the x distance between sonar 6 and the center of the robot.

Cy_6 is the y distance between sonar 6 and the center of the robot.

d_6 is the distance between sonar 6 and the center of the robot.

Using these quantities for all sonars of the Pioneer 3-DX, together with the robot's position and orientation in real time, we obtain the mapping equations, for n = 0 to n = 7:

x_n(t) = X(t) + ( r_n(t) + d_n ) × cos( b_n + θ(t) )

y_n(t) = Y(t) + ( r_n(t) + d_n ) × sin( b_n + θ(t) )

n: the sonar index, from 0 to 7.

t: time.

X(t), Y(t), and θ(t): the position and orientation of the mobile robot in the plane at time t.

r_n(t): the distance, measured by sonar n, from the robot to the obstacles and walls it detects.

b_n: the orientation of sonar n.

Cx_n: the x distance between sonar n and the center of the robot.

Cy_n: the y distance between sonar n and the center of the robot.

x_n(t) and y_n(t): the position in the plane, at time t, of the obstacles and walls the robot detects with its sonars.

This project takes the distance d_n from the center of the mobile robot to each sonar to be 130 mm; this value is approximately the true distance from each sonar's mounting position to the center of the robot.
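The equations above translate directly into code. The following sketch is illustrative (ours, not the authors' program): the mounting angles b_n are assumed from the Pioneer sonar layout described in section 2 (one sonar on each side, six facing outward at 20-degree intervals), and d_n = 130 mm for all sonars as stated above:

    #include <cmath>

    // Assumed sonar mounting angles b_n (degrees) for the Pioneer 3-DX
    // array: one on each side, six at 20-degree intervals.
    static const double SONAR_ANGLE_DEG[8] =
        { 90.0, 50.0, 30.0, 10.0, -10.0, -30.0, -50.0, -90.0 };

    static const double D_N = 130.0;  // sonar-to-robot-center distance, mm
    static const double PI  = 3.14159265358979;

    // Convert one sonar range r_n (mm) into a world-frame map point,
    // given the robot pose X, Y (mm) and theta (radians) at time t.
    void sonarToMapPoint(int n, double r_n, double X, double Y, double theta,
                         double *x_n, double *y_n)
    {
        double b_n = SONAR_ANGLE_DEG[n] * PI / 180.0;
        *x_n = X + (r_n + D_N) * cos(b_n + theta);
        *y_n = Y + (r_n + D_N) * sin(b_n + theta);
    }

In the ARIA loop, r_n comes from robot.getSonarRange(n) and the pose from robot.getPose(); note that ArPose::getTh() returns degrees and must be converted to radians before use.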

5.1 Choosing a suitable filter for mapping with sonar

Mapping by sonar using the above equations has some practical problems. Because of the relatively wide angle of the sonar beam, an isolated sonar reading imposes only a loose constraint on the position of the detected object. The wide sonar beam causes poor directional resolution, so the true position of a detected object cannot be known within the fan-shaped area shown in figure (7): the blue area is the whole region the sonar can see. Obstacles a1, a2, and a3 are in different places relative to the sonar, but the sonar reads them all at the same position, at range R1, where R1 is the distance from a2 to the sonar.

    Figure (7): sonar viewing angle.

The sonar viewing angle of the Pioneer robot is approximately 30 degrees, and using triangle relations we obtain:

L_1 = R_1 × tan(15°) and L_2 = R_2 × tan(15°)

where L is the half-width of the fan-shaped area at range R.

Figure (8): the made-up environment (plan) used for mapping


For the robot's sonar, R1 = 5 meters gives L1 = 1.3388 meters, and if R2 = 2.5 meters then L2 = 0.67 m. By reducing the range distance from 5 m to 2.5 m, the error is reduced by:

e = L1 - L2 = 1.3388 m - 0.67 m = 0.6688 m

This work therefore applies a filter that accepts only readings within the desired range, which is what is needed to draw the walls and obstacles of the environment.
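A minimal sketch of such a filter (illustrative): readings outside the reduced maximum range are simply discarded before the mapping equations are applied.

    // Keep only sonar readings within the reduced maximum range (2.5 m),
    // so the lateral error of the wide beam stays below about 0.67 m.
    static const double MAX_RANGE_MM = 2500.0;

    bool acceptReading(double range_mm)
    {
        return range_mm > 0.0 && range_mm <= MAX_RANGE_MM;
    }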

  6. Experimental results of 2D and 3D mapping

These practical experiments show how the Kinect sensor and the sonar can build 3D and 2D maps of a made-up environment with dimensions 3100 mm × 2750 mm, shown in figure (8). In this project the mobile robot is navigated manually using the keyboard of the PC.

For 3D mapping, this project uses the Kinect with the mobile robot: the Kinect is connected to a laptop, Skanect is started by clicking Start on its panel screen, and the mobile robot is then rotated slowly in place, without translating it, at one point inside the map.

Figure (9): panel screen of the Skanect program

After saving the mapping result of Skanect, we can open it using the MeshLab program, which handles 3D image results like ours; it is also easy to control the view in MeshLab by simply holding and dragging the mouse on the screen. Figure (10) and figure (11) show the 3D mapping result of the environment from different view angles.

Figure (10): 3D mapping result of the plan using the Kinect sensor

Figure (11): a different view of the mapping result of the plan

MeshLab also offers different tools; one of them can measure the distance of any shape in the 3D map result, as shown in figure (12).


Figure (12): dimensions of the plan in the mapping result: (a) width of the plan and (b) length of the plan

This project shows the top-view result of the 3D mapping of the plan, and uses sonar to build a map of the same plan, to see the difference between them, as in figure (13).




Figure (13): comparison between the 2D and 3D mapping results: (a) 3D mapping using the Kinect sensor; (b) 2D mapping using sonar and the mobile robot's position and orientation

  7. Conclusion

The following points summarize the main conclusions about 2D and 3D mapping derived from the experiments of this work:

1. The 3D mapping gives detailed and accurate results for the dimensions and the shapes in the plan.

2. The dimensional accuracy of the 3D mapping is much better than that of the 2D mapping.

3. The 3D map carries the colors of the environment; the 2D map does not.

4. The 3D map has much less noise than the 2D map.

5. We can view the plan in the 3D map from any angle, whereas in the 2D map we can only see the top view of the plan.

6. We can distinguish any shape in the 3D map built with the Kinect sensor; in the 2D map we cannot.

7. The final comparison between 2D and 3D mapping is that it is easy to use the sonar sensors for SLAM projects, by taking the sonar ranges together with the position and orientation of the mobile robot, while it is impossible to use the Kinect sensor for SLAM with our technique: the Kinect loses its mapping data when it is moved from its position, because the position and orientation of the mobile robot do not enter into the equations of the 3D mapping. This makes sonar work successfully in SLAM projects, while the Kinect sensor, with our technique, fails in SLAM projects.

REFERENCES

1. Ahmed Rahman, "A Fuzzy Logic Control for Autonomous Mobile Robot", M.Sc. thesis, Mechatronics Engineering Department, Al-Khwarizmi College of Engineering, University of Baghdad, 2009.

2. Dirk Bank, Thomas Kämpke, "High-Resolution Ultrasonic Environment Imaging", IEEE Transactions on Robotics, Vol. 23, Issue 2, 2007.

3. Shahram Izadi, David Kim, Otmar Hilliges, David Molyneaux, Richard Newcombe, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Dustin Freeman, Andrew Davison, Andrew Fitzgibbon, "KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera", ACM Symposium on User Interface Software and Technology, October 16-19, 2011.

4. Peter Henry, Michael Krainin, Evan Herbst, Xiaofeng Ren, Dieter Fox, "RGB-D Mapping: Using Kinect-style Depth Cameras for Dense 3D Modeling of Indoor Environments", The International Journal of Robotics Research, 2012.

5. K. Khoshelham, "Accuracy Analysis of Kinect Depth Data", International Society for Photogrammetry and Remote Sensing, Calgary, Canada, 29-31 August 2011.

6. "Pioneer 3 Mobile Robot Operation Manual", 2007.

7. Bas des Bouvrie, "Improving RGBD Indoor Mapping with IMU Data", Master's thesis in Embedded Systems, Delft University of Technology, 2012.
