Implementing Kinect Sensor for Building 3D Maps of Indoor Environments


Wael R. Abdulmajeed, Ph.D, Mechatronics Engineering Department / Al Khwarizmi Engineering College / University of Baghdad
Revan Zuhair Mansoor, Mechatronics Engineering Department / Al Khwarizmi Engineering College / University of Baghdad

ABSTRACT
This paper describes a modern mapping technique. The work involves two case studies; in both, a mobile robot is navigated manually and a Kinect sensor is used to build a map of an indoor environment. The robot used in this project is a Pioneer 3-DX, programmed with the Advanced Robotics Interface for Applications (ARIA), a C++ package (Visual C++ .NET); ARNetworking software sets up the wireless TCP/IP Ethernet-to-serial connection between the robot and the PC. The Kinect sensor is driven by OpenNI/NITE to make it work with the PC, and Skanect software is used to build the 3D map of the environment.

Keywords
Pioneer 3-DX (mobile robot), Kinect sensor

1. INTRODUCTION
Because mapping is an essential step in many projects, many of the approaches used are simple heuristics that successively search for face-connected cells or configurations, with or without the minimization of a criterion. Some studies have used modern techniques for building 3D maps. Shahram Izadi et al. [1] presented KinectFusion, a real-time 3D reconstruction and interaction system using a moving standard Kinect. Its contributions are threefold. First, the paper details a novel GPU pipeline that achieves 3D tracking, reconstruction, segmentation, rendering, and interaction, all in real time, using only a commodity camera and graphics hardware. Second, the paper demonstrates core novel uses for the system: low-cost object scanning and advanced AR and physics-based interactions.
Third, the paper describes new methods for segmenting, tracking, and reconstructing dynamic users and the background scene simultaneously, enabling multi-touch on any indoor scene with arbitrary surface geometries. The system allows a user to pick up a standard Kinect camera and move rapidly within a room to reconstruct a high-quality, geometrically precise 3D model of the scene. To achieve this, the system continually tracks the 6DOF pose of the camera and fuses live depth data from the camera into a single global 3D model in real time.

Figure (1) KinectFusion enables real-time detailed 3D reconstructions of indoor scenes

Peter Henry et al. [2] investigated how RGB-D cameras can be used for building dense 3D maps of indoor environments. The paper performs several experiments to evaluate different aspects of RGB-D Mapping, demonstrates the ability of the system to build consistent maps of large-scale indoor environments, shows that the RGB-D ICP algorithm improves the accuracy of frame-to-frame alignment, and illustrates the advantageous properties of the surfel ("surface element") representation. K. Khoshelham [3] presented a theoretical and experimental accuracy analysis of depth data acquired by the Kinect sensor: a mathematical model for obtaining 3D object coordinates from the raw image measurements, with a discussion of the calibration parameters involved in the model. Further, a theoretical random error model is derived and verified by experiment.

The Kinect is a new, low-cost sensor that provides depth information for every RGB pixel acquired. Combining this information, it is possible to develop 3D perception of an indoor environment. The Pioneer 3-DX is the mobile robot used in this work; it carries sonar sensors whose readings give the distance between each sonar and the nearest object.
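The combination mentioned above, per-pixel depth turned into 3D coordinates, can be sketched in a few lines. This is not code from the paper: the disparity-to-metres constants are the empirical fit circulated by the OpenKinect community for the raw 11-bit values, and the focal lengths are derived from the Kinect's nominal 57 x 43 degree field of view rather than from a per-device calibration.

```cpp
#include <cassert>
#include <cmath>

// Empirical OpenKinect-community fit: raw 11-bit Kinect disparity -> metres.
// Valid only over the sensor's working range; a calibrated model is better.
double rawDisparityToMetres(int raw) {
    return 1.0 / (raw * -0.0030711016 + 3.3309495161);
}

struct Point3 { double x, y, z; };

// Back-project pixel (u, v) with depth z (metres) into camera coordinates,
// using a pinhole model whose focal lengths come from the nominal FOV.
Point3 backProject(int u, int v, double z) {
    const double PI = 3.14159265358979323846;
    const double W = 640.0, H = 480.0;
    const double fx = (W / 2.0) / std::tan(57.0 / 2.0 * PI / 180.0);
    const double fy = (H / 2.0) / std::tan(43.0 / 2.0 * PI / 180.0);
    return { (u - W / 2.0) * z / fx, (v - H / 2.0) * z / fy, z };
}
```

Applying `backProject` to every pixel of one depth frame yields the point cloud that mapping software such as Skanect fuses across frames.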
The Pioneer robot contains a wibox, an Ethernet-to-serial bridge that converts an RS-232 serial connection into a TCP/IP connection available through an 802.11 (Wi-Fi) wireless network. It is mounted on the robot, connected to the serial port of the robot's microcontroller, configured to join a given wireless network as a client station, and assigned an IP address. Software may then connect to a TCP port at that address and send and receive data to/from the robot over the serial connection.

2. MICROSOFT KINECT
The Microsoft Kinect is a special RGB-D camera created for Microsoft's Xbox 360 as a controller substitute and extra input device for games designed to exploit it [6]. Because the sensor has a normal USB connector and provides depth data at a low unit price, it also attracted people interested in making the Kinect available to PC users through custom drivers. The Kinect grabs RGB images of 640x480 pixels at 8-bit depth with a Bayer color filter, and IR images of 640x480 pixels at 11-bit depth [7]. It has a frame rate of 30 Hz and an angular field of view of 57 degrees horizontally and 43 degrees vertically. It needs its own power source beyond the USB connector [8], which is provided with the stand-alone Kinect kit. The base of the Kinect houses an electric motor that allows the Kinect to tilt. Furthermore, a multi-array microphone is built into the Kinect, towards its sides, and it also has a three-axis accelerometer.

The depth-acquisition technology, patented by the company PrimeSense, is named Light Coding [9]. An IR pattern source, a single transparency with a fixed pattern placed in front of an IR light source, projects a complex pattern of light dots onto the object (see Figure (2)). The IR camera takes images of the object illuminated with this pattern, and the image data is then processed to reconstruct a three-dimensional model of the object using knowledge of the IR light pattern (see Figure (3)) [6].

Figure (2) PrimeSense Light Coding technology, technical overview

Figure (3) How the Kinect sensor captures depth and color images of the environment

3. 3D MAPPING
As the Kinect delivers RGB-D data at a low unit cost, various people have started to create drivers for the PC. The goal of this work is to create a mapping solution using the PC, but there are also other applications for the Kinect; this paper therefore covers the drivers currently available and other possible applications of the Kinect with a PC for creating a 3D map.

3.1 OpenNI/NITE
OpenNI is an industry-led, non-profit organization formed to certify and promote the compatibility and interoperability of natural interaction (NI) devices, applications, and middleware. Natural interaction devices are devices that allow interaction with electronics in the ways we interact with humans, for example through speech and gestures; cameras and microphones of any kind fall under this category [10]. NITE is middleware developed by PrimeSense, who hold the patent behind the technology implemented in the Kinect. The NITE engine has algorithms for user identification, feature detection, and gesture recognition, as well as a framework that manages the tagging of users in the scene and the acquisition and release of control between users [9]. It offers C++, C#, and Flash APIs for Linux and Windows that give access to RGB and depth-derived data, such as full-body analysis, hand-point analysis, gesture analysis, and scene analysis (detection of the floor plane, background, and foreground, and people recognition and labeling). It is not clear whether direct access to the raw RGB and depth data is possible with this software [10].

3.2 Skanect software for 3D mapping
Skanect is a Windows program that can capture a full-color 3D model of an object. Skanect turns a Microsoft Kinect or Asus Xtion camera into an ultra-low-cost scanner able to create 3D meshes out of real scenes in a few minutes [11].

4. EXPERIMENTAL RESULTS OF 3D MAPPING: CASE 1
This case shows how the Kinect sensor can build a 3D map of a made-up environment with dimensions 3100 mm x 2750 mm, shown in figure (4). The mobile robot is navigated manually using the keyboard of the PC.

Figure (4) The made-up environment (plan) used for mapping

The Kinect is connected to a laptop carried by the mobile robot; mapping starts by clicking Start on the Skanect panel screen and then rotating the mobile robot slowly in place at one point inside the plan.

Figure (5) Panel screen of the Skanect program
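Rotating in place means the scan is assembled from several overlapping captures around a full turn. The arithmetic for how many headings that takes can be sketched as follows; the 30% overlap figure in the usage example is an illustrative assumption, not a value from the paper.

```cpp
#include <cassert>
#include <cmath>

// Number of capture headings needed for a full 360-degree in-place scan,
// given the sensor's horizontal FOV (degrees) and the desired fractional
// overlap between neighbouring captures (0.0 = edge-to-edge).
int capturesForFullTurn(double hfovDeg, double overlap) {
    double effective = hfovDeg * (1.0 - overlap);  // new angle seen per stop
    return static_cast<int>(std::ceil(360.0 / effective));
}
```

With the Kinect's 57-degree horizontal FOV, edge-to-edge coverage needs 7 headings, and a 30% overlap between neighbours raises that to 10; in practice the continuous slow rotation used here gives far denser overlap than either.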

After saving the mapping result from Skanect, it can be opened in the MeshLab program, which handles 3D mesh results such as ours; the view is controlled simply by holding and dragging the mouse in the MeshLab window. Figures (6) and (7) show the environment from different view angles in MeshLab.

Figure (6) 3D mapping result of the plan using the Kinect sensor

Figure (7) Different views of the mapping result of the plan

MeshLab also offers several tools; one of them can measure the dimensions of any shape in the 3D map result, as shown in figure (8).

Figure (8) Dimensions of the plan in the mapping result: (a) width of the plan and (b) length of the plan

5. EXPERIMENTAL RESULTS OF 3D MAPPING: CASE 2
In this case the Kinect sensor and robot map a big room of dimensions 6 x 9 meters using three points in the room. The room contains tables, chairs, a computer, lockers, and other objects. The mobile robot carries the Kinect sensor from one point to another. In the beginning, the mobile robot with the Kinect is initialized and sent manually to point one; after the robot reaches it, mapping starts by running the Skanect program and rotating the robot manually at slow speed. After finishing the mapping at this point, the result is saved and the mobile robot is sent to the next point, point two, where the same procedure is repeated.

Figure (9) The big room; the dimension drawing was made in AutoCAD
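Because the three scans are taken at different robot positions, merging them into one model requires expressing each cloud in a common room frame using the robot pose at capture time. A minimal sketch of that transform for a planar robot pose follows; the pose and point values in the test are illustrative, not the poses used in the experiment.

```cpp
#include <cassert>
#include <cmath>

struct Pose2D { double x, y, thetaRad; };  // robot pose in the room frame
struct Pt { double x, y, z; };             // point in metres

// Transform a point measured in the sensor frame at a given robot pose into
// the common room frame: rotate about the vertical axis by theta, then
// translate. Height z is unaffected by a planar pose.
Pt sensorToRoom(const Pt& p, const Pose2D& pose) {
    double c = std::cos(pose.thetaRad), s = std::sin(pose.thetaRad);
    return { c * p.x - s * p.y + pose.x,
             s * p.x + c * p.y + pose.y,
             p.z };
}
```

Applying `sensorToRoom` to every point of the clouds captured at points one, two, and three (with each point's recorded pose) stacks them into a single room-frame cloud; in practice an alignment step such as ICP is still needed to correct pose error.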

Figure (10) Pictures of the big room

The results of mapping at each point are shown in the following figures:

Figure (11) Result of the 3D mapping of the big room at point one

Figure (12) Result of the 3D mapping of the big room at point two

Figure (13) Result of the 3D mapping of the big room at point three

6. RESULTS AND DISCUSSION OF 3D MAPPING
The following points summarize the main conclusions on 3D mapping drawn from the experiments:
(1) The 3D maps reproduce the dimensions and shape details of the plan very well.
(2) The dimensions measured in the 3D map are very close to the true dimensions of the environment.
(3) The 3D map carries the true colors of the environment.
(4) The 3D map has very little noise.
(5) The 3D map can be viewed from any angle.
(6) Any shape can be distinguished in the 3D map produced with the Kinect sensor.
(7) Building maps with the Kinect sensor is easier, more accurate, and cheaper than using other sensors such as sonar or laser rangefinders.

7. REFERENCES
[1] Shahram Izadi, David Kim, Otmar Hilliges, David Molyneaux, Richard Newcombe, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Dustin Freeman, Andrew Davison, Andrew Fitzgibbon, ''KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera'', ACM Symposium on User Interface Software and Technology, October 16-19, 2011.
[2] Peter Henry, Michael Krainin, Evan Herbst, Xiaofeng Ren, Dieter Fox, ''RGB-D Mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments'', The International Journal of Robotics Research, 5 April 2012.
[3] K. Khoshelham, ''Accuracy Analysis of Kinect Depth Data'', International Society for Photogrammetry and Remote Sensing, Calgary, Canada, 29-31 August 2011.

[4] Bas des Bouvrie, ''Improving RGBD Indoor Mapping with IMU data'', Master's Thesis in Embedded Systems, Delft University of Technology, 2012.
[5] ''Pioneer 3 mobile robot operation manual'', 2007.
[6] Microsoft, ''Kinect for Xbox 360 - Xbox.com'', Dec. 2013. Website: http://www.xbox.com/en-gb/kinect.
[7] OpenKinect.org, ''Protocol Documentation - OpenKinect'', Dec. 2013. Website: http://openkinect.org/wiki/protocol_documentation.
[8] ''Getting Started - OpenKinect'', Dec. 2013. Website: http://openkinect.org/wiki/getting_started#please_read_this_before_you_start.
[9] PrimeSense, Dec. 2013. Website: http://www.primesense.com.
[10] OpenNI, Dec. 2013. Website: http://www.openni.org/.
[11] Skanect, ''3D Scanning'', Dec. 2013. Website: http://skanect.manctl.com/.

IJCA TM: www.ijcaonline.org