UAV Pose Estimation using POSIT Algorithm



International Journal of Digital Content Technology and its Applications, Volume 5, Number 4, April 2011

UAV Pose Estimation using POSIT Algorithm

*1 M. He, 2 C. Ratanasawanya, 3 M. Mehrandezh, 4 R. Paranjape
*1 College of Electrical & Information Engineering, Hunan University, China, E-mail: hemin67@163.com
2 Electronic Systems Engineering, University of Regina, Canada, E-mail: ratanasc@uregina.ca
3 Associate Professor, Industrial Systems Engineering, University of Regina, Canada, E-mail: mehran.mehrandezh@uregina.ca
4 Professor, Electronic Systems Engineering, University of Regina, Canada, E-mail: raman.paranjape@uregina.ca
doi:10.4156/jdcta.vol5.issue4.19

Abstract

Vision-based pose estimation is widely employed on Mini Unmanned Aerial Vehicles (MUAVs) with limited payloads. Pose from Orthography and Scaling with ITerations (POSIT) is one of the most important methods for estimating pose from 2-D images and a 3-D model of an object. To evaluate the performance of the POSIT algorithm, a test platform was developed consisting of a MUAV, a wireless camera, a computer workstation, and a motion capture (Optitrack) system. The pose of the MUAV is calculated by the POSIT algorithm from a set of 2-D images captured by the on-board camera, and the calculated pose is compared to the actual pose read from the Optitrack system. The experimental results demonstrate that the error remains within acceptable bounds and that POSIT is a useful alternative for pose estimation of a MUAV.

Keywords: Pose Estimation, MUAV, Optitrack System, POSIT

1. Introduction

UAVs (Unmanned Aerial Vehicles) have recently drawn a great deal of attention in the public and private sectors as a useful tool for mitigation, prevention, and timely response in emergency situations. The problem of estimating an object's position and orientation (a.k.a. pose) arises in several application domains, such as localization, visual servoing, and object tracking [1-5].
Compared to typical inertial, sonar, atmospheric, and GPS-based sensors, the camera appears to be an ideal sensor for deployment on small UAVs with limited payloads, owing to its compact size and the abundant information in captured images. As a result, vision-based pose estimation methods have been the focus of much of the literature [6-8]. Current pose estimation algorithms can be classified into model-based [9] and model-free [10] methods, depending on whether they require knowledge of the 3D target model and the camera parameters. With knowledge of both the 3D model of a target object and the feature correspondence between the object and its 2D image, model-based methods estimate the pose of the camera relative to the object from a single image. For this class of methods, pose estimation from image points is the best-known technique. For example, RANSAC (Random Sample Consensus) solved the Location Determination Problem from three or four coplanar feature points, or six points in general position; unfortunately, no general result is given about the uniqueness of the solution [11]. Lowe's algorithm defined an error function expressing the distance between image features and the projections of the corresponding model points at the current camera location, and used an iterative process to correct the projection error. Lowe's algorithm can obtain more accurate results; however, it is more complex and computationally demanding, and an approximate pose is needed to initialize the iteration [12]. The POSIT algorithm estimates the pose of the camera with respect to an object and optimizes the error through an iterative process. POSIT avoids an initial pose estimate and repeated matrix inversion, while still producing accurate pose estimates. It is therefore a simple, efficient, and suitable alternative for real-time applications [13].
With the original version of the POSIT algorithm, the performance evaluation was carried out using synthetic images of a tetrahedron and a cube. The poses of the objects used to produce

the images and the poses computed by POSIT from the synthetic images were compared. However, the validation of this algorithm remained questionable in practice in terms of pose estimation accuracy. In order to evaluate the performance of POSIT for real-time pose estimation of UAVs, we developed a test platform which consists of a Mini UAV (MUAV), a wireless camera, a computer workstation, and a motion capture (Optitrack) system. The pose of the MUAV is calculated by the POSIT algorithm using a set of images captured by the on-board camera. The calculated pose is compared to the actual pose read from the Optitrack system. This paper is organized as follows. Section 2 presents the components of the method, including the test platform setup, the POSIT algorithm, and the homogeneous transformations used to compare the pose estimation results. Test results can be found in Section 3. Section 4 concludes the paper with a brief discussion.

2. Method and Algorithm

2.1. Experimental setup

The POSIT-based UAV pose estimation test platform is shown in Figure 1. The platform consists of a MUAV, the Optitrack system, a computer workstation, a wireless video camera, and a 3D object, which is a white cardboard box.

Figure 1. Mini UAV test setup: (a) Optitrack system and Qball-X4; (b) Qball-X4 and the target object

The Qball-X4 helicopter was selected as the MUAV [14]. The Qball-X4 is an innovative quadrotor helicopter suitable for a wide variety of UAV research applications. With four motors fitted with 10-inch propellers, it is able to fly with 6 Degrees of Freedom (DOF): 3 translational DOF and 3 rotational DOF (roll, pitch, and yaw). The entire quadrotor is enclosed within a protective carbon fiber cage, which gives the Qball-X4 a decisive advantage over other vehicles that would suffer significant damage on contact with obstacles.
A lightweight wireless camera is attached to the Qball-X4 body to provide real-time images of the target object. The box is attached to the wall in the field of view of the wireless camera. The background is black for the best contrast with the color of the box in Red-Green-Blue (RGB) color space, which makes it much easier to identify the corners of the box as image feature points. The Optitrack is a motion capture system which tracks the movement of Infrared (IR) reflectors attached to any object in the workspace using six IR cameras [15]. These IR cameras are arranged around a workspace of approximately 6 cubic meters in which the Qball-X4 moves. The Optitrack system provides the pose of an object, defined by a group of IR reflectors, relative to the origin of the system's coordinates. In our experiment, we also chose the coordinate frame of the Optitrack system as the world reference frame, W. The origin of the workspace coordinates must be defined during camera calibration of the six infrared cameras. To define the Qball-X4 as a trackable object for the Optitrack system, three reflectors are attached to the ends of the two cross bars, except the front end where the wireless camera is attached, as shown in Figure 2(a). Figure 2(b) shows the IR reflectors as seen by the cameras (blue points); the virtual center of gravity (c.g.) of the trackable object is defined during signal processing (red point). The pose of the object is given at the c.g., expressed in the world frame.

All signal processing is performed on the workstation, including Optitrack camera and wireless camera calibration, manual feature point selection, pose computation by the POSIT algorithm, and so on.

Figure 2. Trackable object definition for the Qball-X4: (a) three reflectors attached to the Qball-X4; (b) image of the IR reflectors and the virtual c.g.

2.2. POSIT Algorithm

The POSIT algorithm was proposed for finding the pose of an object relative to the camera from non-coplanar feature points contained in a single image. It is the combination of two algorithms, namely POS (Pose from Orthography and Scaling) and IT (ITerations). The POS algorithm approximates the perspective projection with a scaled orthographic projection (SOP) to find the transformation (rotation and translation) between a coordinate frame attached to the object (object frame) and a coordinate frame attached to the center of projection of the camera (camera frame) by solving a linear system; the IT algorithm is an iterative error optimization loop that updates the parameters of the approximate pose found in the previous step and repeats the POS algorithm several times in order to compute better scaled orthographic projections of the feature points. Given a 3D model of the target object, the camera intrinsic parameters, and a minimum of four non-coplanar image feature points whose relative geometry is matched with the corresponding points in the 3D model, the POSIT algorithm calculates the rotation matrix and translation vector of the object with respect to the camera. In other words, POSIT supplies the transformation of a point expressed in the object (box) frame, B, with respect to the camera frame, C. Frame C is attached to the center of projection with the z-axis pointing outwards from the camera. Figure 3 shows a diagram of the POSIT algorithm.
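The POS + IT loop described above can be sketched compactly. The following Python code is our own illustrative implementation of DeMenthon and Davis's method [13], not the authors' code; the point ordering, variable names, and synthetic test pose are assumptions made for the example:

```python
import numpy as np

def posit(object_pts, image_pts, focal, n_iter=30):
    """Minimal POSIT (DeMenthon & Davis, 1995) for non-coplanar points.

    object_pts : (N, 3) model points; the first row is the reference point.
    image_pts  : (N, 2) image coordinates relative to the image centre,
                 in the same units as `focal`.
    Returns (R, T): rotation matrix and the camera-frame position of the
    reference point.
    """
    obj = np.asarray(object_pts, dtype=float)
    img = np.asarray(image_pts, dtype=float)
    A = obj[1:] - obj[0]            # vectors from the reference point
    B = np.linalg.pinv(A)           # "object matrix" (pseudo-inverse, once)
    eps = np.zeros(len(A))          # perspective correction terms
    for _ in range(n_iter):         # the "IT" part: repeat POS
        xp = img[1:, 0] * (1 + eps) - img[0, 0]
        yp = img[1:, 1] * (1 + eps) - img[0, 1]
        I, J = B @ xp, B @ yp       # POS: solve the linear system
        s1, s2 = np.linalg.norm(I), np.linalg.norm(J)
        i, j = I / s1, J / s2       # first two rows of the rotation
        k = np.cross(i, j)
        k /= np.linalg.norm(k)      # third row: optical-axis direction
        s = 0.5 * (s1 + s2)         # scale of the scaled orthographic proj.
        Z0 = focal / s              # depth of the reference point
        eps = A @ k / Z0            # corrections for the next POS pass
    R = np.vstack([i, j, k])
    T = np.array([img[0, 0] / s, img[0, 1] / s, Z0])
    return R, T

# Synthetic check: project five box corners with a known pose, recover it.
theta = 0.3
R_true = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
T_true = np.array([5.0, -3.0, 50.0])
obj = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0],
                [0, 0, 10], [10, 10, 10]], dtype=float)
cam = obj @ R_true.T + T_true            # points in the camera frame
img = 760.0 * cam[:, :2] / cam[:, 2:3]   # perspective projection, f = 760
R_est, T_est = posit(obj, img, focal=760.0)
```

With noiseless data and non-coplanar points, the iteration converges to the exact perspective pose, which is why a fixed iteration count suffices for a sketch like this.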
More details about POSIT can be found in the original paper by DeMenthon [13].

Figure 3. Schematic Diagram of the POSIT Algorithm

2.3. Homogeneous transformation

The pose of the Qball-X4 is calculated using the results from the POSIT algorithm, homogeneous transformations, and inverse kinematics. As shown in Figure 4, there are four coordinate frames in the test setup: the world frame, W; the Qball-X4 frame, Q, which is attached rigidly to the MUAV; the camera frame, C; and the object or box frame, B, attached to the lower front left corner of the box. In order to compare the pose estimation result of POSIT, expressed in frame C, to the pose reading from the

Optitrack system, expressed in frame W, the two coordinate frames need to be aligned using homogeneous transformations. A homogeneous transformation matrix, $^{A}T_{B}$, describes how coordinate frame B is transformed with respect to frame A. It is also used to convert the location of a point between the two frames.

Figure 4. Different coordinate frames in our system

The four coordinate frames in the experiment are related as follows:

$$^{W}T_{B} = {}^{W}T_{Q}\,{}^{Q}T_{C}\,{}^{C}T_{B} \qquad (1)$$

where $^{W}T_{B}$ is the homogeneous transformation matrix from the box frame to the world frame, $^{W}T_{Q}$ is the transformation matrix from the UAV frame to the world frame, $^{Q}T_{C}$ is the homogeneous transformation matrix from the camera frame to the UAV frame, and $^{C}T_{B}$ is the homogeneous transformation matrix from the box frame to the camera frame. In (1), $^{W}T_{Q}$ is unknown, therefore:

$$^{W}T_{Q} = {}^{W}T_{B}\,\left({}^{C}T_{B}\right)^{-1}\left({}^{Q}T_{C}\right)^{-1} \qquad (2)$$

where $(\cdot)^{-1}$ is the inverse operator. The coordinate frame of the box was assumed to have the same orientation as the world frame but to differ in translational position, so the homogeneous transformation between them consists only of a translational component:

$$^{W}T_{B} = \begin{bmatrix} 1 & 0 & 0 & -18.638 \\ 0 & 1 & 0 & 59.1261 \\ 0 & 0 & 1 & -171.6727 \\ 0 & 0 & 0 & 1 \end{bmatrix} \;\text{cm} \qquad (3)$$

Because the camera is mounted on the Qball-X4, the transformation between frames Q and C is a known constant:

$$^{Q}T_{C} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ -1 & 0 & 0 & -4.1275 \\ 0 & -1 & 0 & -3.2385 \\ 0 & 0 & 0 & 1 \end{bmatrix} \;\text{cm} \qquad (4)$$
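Equations (1) and (2) amount to composing and inverting 4x4 homogeneous matrices. A minimal NumPy sketch (our own illustration; the frame names follow the paper, but the numeric values below are made-up placeholders, not the calibrated constants of Equations (3)-(4)):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def inv_T(T):
    """Closed-form rigid inverse: [R t; 0 1]^-1 = [R' -R't; 0 1]."""
    R, t = T[:3, :3], T[:3, 3]
    return make_T(R.T, -R.T @ t)

def uav_pose(wTb, qTc, cTb):
    """Equation (2): wTq = wTb * (cTb)^-1 * (qTc)^-1."""
    return wTb @ inv_T(cTb) @ inv_T(qTc)

# Consistency check: forward-compose Equation (1), then recover wTq.
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])   # 90-degree yaw
wTq = make_T(Rz, [1.0, 2.0, 3.0])                    # "true" UAV pose
qTc = make_T(np.eye(3), [0.0, -4.0, -3.0])           # camera offset on UAV
cTb = make_T(Rz.T, [0.5, 0.0, 2.0])                  # POSIT-style output
wTb = wTq @ qTc @ cTb                                # Equation (1)
```

Applying `uav_pose(wTb, qTc, cTb)` returns the original `wTq`, confirming the algebra of Equation (2).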

With the result $^{C}T_{B}$ from POSIT, we can obtain the pose of the Qball-X4 in the world frame, $^{W}T_{Q}$, and the corresponding translation vector and rotation matrix are extracted using inverse kinematics formulas.

3. Experimental Results

In order to test the performance of the POSIT algorithm for pose estimation of the Qball-X4, the MUAV was moved around the workspace and placed randomly at 17 different locations, with the box always kept in the view of the camera. At each location, the attached camera recorded an image of the box while the Optitrack system captured the pose of the Qball-X4. Five corners of the white box were manually selected by the user as the non-coplanar feature points required by the POSIT algorithm; the bottom left front corner is the reference point, and the other four corners are from the top side. Since the structure of the box is known a priori, the pose of the camera relative to the box is calculated from the 3D model configuration of the feature points and their corresponding 2D image coordinates. With the help of the homogeneous transformations and inverse kinematics, the Qball-X4 pose is calculated using Equation (2), and the results are then compared to the Optitrack readings. The x, y, and z coordinates of the Qball-X4 are shown in Figure 5, as well as the roll angle (around the z axis), pitch angle (around the x axis), and yaw angle (around the y axis). The pink square points are the results from Equation (2), i.e., the values from the POSIT algorithm. The blue diamond points are the measurements from the Optitrack system.
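The inverse-kinematics step above reduces to reading a translation vector and Euler angles out of $^{W}T_{Q}$. The sketch below is our own illustration using the common aerospace Z-Y-X (yaw-pitch-roll) convention; the paper labels its angles differently (roll about z, pitch about x, yaw about y), so the name-to-axis mapping here is an assumption, not the authors' exact convention:

```python
import numpy as np

def angles_from_T(T):
    """Extract translation and Z-Y-X Euler angles (deg) from a 4x4
    homogeneous transform, assuming R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    R, t = T[:3, :3], T[:3, 3]
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-np.clip(R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return t, np.degrees([yaw, pitch, roll])

# Elementary rotations used to build a test pose.
def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Round-trip check with a known pose (angles in degrees).
a = np.radians([30.0, -10.0, 45.0])          # yaw, pitch, roll
T = np.eye(4)
T[:3, :3] = rz(a[0]) @ ry(a[1]) @ rx(a[2])
T[:3, 3] = [1.0, 2.0, 3.0]
t, ang = angles_from_T(T)
```

The `arcsin` branch assumes the pitch stays away from ±90 degrees, which holds for the near-hover poses in this experiment.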
Figure 5. Comparison of Qball-X4 pose estimation results (x, y, and z positions in cm, and roll, pitch, and yaw angles in degrees, at the 17 test locations)

The error comparison results are listed in Table 1, where the maximum, minimum, and mean errors of the x, y, and z coordinates and of the roll, yaw, and pitch angles can be found. Compared to the readings of the Optitrack system, the POSIT algorithm gives the pose of the camera, and hence the pose of the MUAV, with a mean rotation error of less than four degrees and a mean position error of less than 7 cm.
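The per-axis statistics reported in Table 1 follow directly from the paired pose series; a sketch with hypothetical sample values (not the experiment's data), shown for one coordinate:

```python
import numpy as np

# Hypothetical paired readings (POSIT vs. Optitrack) for one coordinate;
# in the experiment there would be 17 such pairs per DOF.
posit_x = np.array([12.1, -4.0, 55.3, 8.8, -20.5])
optitrack_x = np.array([10.0, -3.2, 52.0, 9.1, -14.9])

err = np.abs(posit_x - optitrack_x)            # absolute per-location error
stats = {"max": err.max(), "min": err.min(), "mean": err.mean()}
```

Repeating this over all six DOF series yields the three rows of Table 1.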

Table 1. Relative position errors of the 6-DOF parameters

                 x (cm)   y (cm)   z (cm)   roll (deg)   yaw (deg)   pitch (deg)
Maximum error     15.2     16.8      8.4       8.6          7.1          7.8
Minimum error      0.0      0.7      0.5       0.5          0.4          0.1
Mean error         3.7      6.6      3.2       3.5          2.2          3.0

4. Conclusion

The POSIT algorithm was tested for pose estimation of a MUAV from a set of images containing non-coplanar feature points of a box. The performance of POSIT was evaluated by comparison with the recorded results from the Optitrack system. The experimental error remains within reasonable bounds, and POSIT proves to be a useful alternative for pose estimation of a MUAV. Possible causes of the remaining error are the Optitrack measurement accuracy of 4 cm, and the fact that the virtual c.g. of the Qball-X4 trackable object does not correspond exactly to the actual c.g. of the MUAV used to define $^{Q}T_{C}$, the homogeneous transformation matrix from the camera frame to the MUAV frame.

5. Acknowledgements

We are grateful for the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Hunan Provincial Natural Science Foundation of China (No. 1JJ386), and the Fundamental Research Funds for the Central Universities of China.

6. References

[1] G. Chesi and K. Hashimoto, "A simple technique for improving camera displacement estimation in eye-in-hand visual servoing," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, pp. 1239-1242, 2004.
[2] T. Gramegna, L. Venturino, G. Cicirelli, G. Attolico and A. Distante, "Optimization of the POSIT algorithm for indoor autonomous navigation," Robotics and Autonomous Systems, vol. 48, pp. 145-162, 2004.
[3] T. Hamel and R. Mahony, "Image based visual servo control for a class of aerial robotic systems," Automatica, vol. 43, pp. 1975-1983, 2007.
[4] L. Wei and E.-J.
Lee, "Multi-pose Face Recognition Using Head Pose Estimation and PCA Approach," JDCTA: International Journal of Digital Content Technology and its Applications, vol. 4, pp. 112-122, 2010.
[5] Y. Zhang and L. Wu, "Face Pose Estimation by Chaotic Artificial Bee Colony," JDCTA: International Journal of Digital Content Technology and its Applications, vol. 5, pp. 55-63, 2011.
[6] J. Courbon, Y. Mezouar, N. Guénard and P. Martinet, "Vision-based navigation of unmanned aerial vehicles," Control Engineering Practice, vol. 18, pp. 789-799, 2010.
[7] G. Xu, Y. Zhang, S. Ji, Y. Cheng and Y. Tian, "Research on computer vision-based for UAV autonomous landing on a ship," Pattern Recognition Letters, vol. 30, pp. 600-605, 2009.
[8] Y. K. Yu, K. H. Wong and M. M. Y. Chang, "Pose estimation for augmented reality applications using genetic algorithm," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 35, pp. 1295-1301, 2005.
[9] C. Ünsalan, "A model based approach for pose estimation and rotation invariant object matching," Pattern Recognition Letters, vol. 28, pp. 49-57, 2007.
[10] E. Malis and F. Chaumette, "Theoretical improvements in the stability analysis of a new class of model-free visual servoing methods," IEEE Transactions on Robotics and Automation, vol. 18, pp. 176-186, 2002.
[11] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, pp. 381-395, 1981.

[12] D. G. Lowe, "Three-dimensional object recognition from single two-dimensional images," Artificial Intelligence, vol. 31, pp. 355-395, 1987.
[13] D. F. DeMenthon and L. S. Davis, "Model-based object pose in 25 lines of code," International Journal of Computer Vision, vol. 15, pp. 123-141, 1995.
[14] Quanser Inc., "Quanser Qball-X4 User Manual," Toronto, Canada, 2010.
[15] NaturalPoint Inc., "NaturalPoint Tracking Tools User Manual," Corvallis, Oregon, USA, 2010.