Autonomous Target Following by Unmanned Aerial Vehicles


Fahd Rafi, Saad Khan, Khurram Shafiq and Mubarak Shah
Computer Vision Lab, Department of Computer Science, University of Central Florida, Orlando, FL, USA

ABSTRACT

In this paper we present an algorithm for the autonomous navigation of an unmanned aerial vehicle (UAV) following a moving target. The UAV in consideration is a fixed-wing aircraft that has physical constraints on airspeed and maneuverability. The target, however, is not considered to be constrained and can move in any general pattern. We show a single circular-pattern navigation algorithm that works for targets moving at any speed in any pattern, where other methods switch between different navigation strategies in different scenarios. The simulation takes into consideration that the aircraft also needs to visually track the target using a mounted camera; the camera is likewise controlled by the algorithm according to the position and orientation of the aircraft and the position of the target. Experiments show that the algorithm presented successfully tracks and follows moving targets.

Keywords: Unmanned Aerial Vehicle, Target Following, Autonomous Navigation

1. INTRODUCTION

Autonomous operation of an aerial vehicle is a challenging task. Long-distance navigation and waypoint following are easier tasks and are used in commercial aircraft; close-range maneuvering and following a moving target remain significant research problems. Close-range maneuvering requires constant real-time decision making, optimizing many parameters while respecting the physical constraints on the aircraft. This becomes even more difficult when following a moving target whose future position is not known; in such a scenario, we cannot make decisions based on where the target may go and navigate to an intercept point.

Unmanned aerial vehicles, or UAVs, have become an important part of modern warfare. They are cost-effective and pose low risk to human operators. In recent years, the abilities of UAVs have been extended to be close to those of manned fighter-bomber aircraft, but even now most UAVs are still used for reconnaissance purposes. Moreover, UAVs have been developed that can remain airborne for extended periods of time, sometimes even months or years; this would be very hard, if not impossible, to accomplish with manned aircraft. Remote controlling these UAVs is a good way of reducing cost while maintaining the quality of human judgement.

The most important goal of research in autonomous flight and navigation is to reduce the time and number of human operators required. The advantage is increased reconnaissance capability at lower risk and cost in terms of time and money. Most research goes into reducing the human-to-UAV ratio, so that fewer human operators are needed to fly more UAVs and human decisions can be moved to a higher, policy level of operation. The work presented in this paper is one step toward acquiring the ability to control a multitude of vehicles with fewer operators, and ultimately having a wider view and better situational awareness of a battlefield. This paper presents a method to automate one UAV following an assigned target on the ground without any human intervention. Research has been done in this area before and some published works have appeared.
Recent work similar to that presented here is by Goodrich [1], where a UAV is navigated to orbit a target on the ground using the slope field of a supercritical Hopf bifurcation, in effect following a circular pattern around the target.

Further author information: send correspondence to Fahd Rafi, e-mail: frafi@cs.ucf.edu.

When the target is moving, however, the aircraft, being faster, needs to increase the distance it covers, and wave patterns have often been used, as by Hedrich [2] and Husby [3]. Husby investigates a combination of movement patterns for different scenarios; that approach, however, requires more information about the movement pattern of the target to be effective, whereas in our work we do not assume any movement pattern for the target. The work presented in this paper is also significant in that we use very realistic simulation tools that take aircraft stability controls into account in a real-world scenario. Different aircraft have different aerodynamic constraints, but the method presented here works for large and small fixed-wing aircraft, as no tight constraints have been placed on aircraft capabilities. The little published material there is on this application shows that there is no single best strategy for a UAV following a target; many different strategies have been tried, with different results.

2. METHOD

This section describes the problem we set out to solve, with all given constraints and desired behavior. The method should work with different models of UAV, as no model-specific constraints are considered; the only assumption is that our UAV is a fixed-wing aircraft. A high-level control strategy for helicopter-type vehicles would be much simpler, as there are fewer movement constraints, but their low-level stabilization and control is much more complicated and in many cases not feasible for autonomous operation even with state-of-the-art equipment [4].

2.1. Problem Description

Our abstract-level goal is to have a UAV follow a target on the ground. There are two main components to this goal. First, we need to maintain knowledge of the position of the target; this is obviously necessary in order to follow it. Second, we need to maintain proximity to the target, in other words, to follow it. On the UAV, we have:

- imagery from a camera mounted on the aircraft;
- the camera mounted on a gimbal with two degrees of freedom to control the direction of the camera;
- an autonomous flight avionics package, Piccolo, to maintain stability of the aircraft and translate semantic navigation commands into commands for the control surfaces;
- real-time telemetry information available from the Piccolo system.

The UAV, being a fixed-wing aircraft, has physical constraints on maneuverability in terms of minimum and maximum airspeed, maximum turn rate, control response latency, and aerodynamic constraints on the orientation of the aircraft. The orientation of an aircraft performing any given maneuver is determined by the aircraft aerodynamics, which differ from model to model. These are common constraints for any fixed-wing aircraft that any real-time control algorithm must cater for. The target, however, is free to move at any speed and in any direction; in the simulations we consider a realistically moving ground vehicle. One limitation that must be accepted is that the target velocity must not exceed the aircraft velocity for extended periods of time, in which case the UAV is unable to follow the target. Airspeed is a design feature of an aircraft and has nothing to do with the navigation algorithm; if the target moves faster than the UAV, it simply cannot be followed. Our algorithm generates two controls for the system (a minimal sketch of this interface follows the list):

- turn rate for the aircraft;
- camera control angles for the gimbal.
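To make the interface just described concrete, here is a minimal sketch, not from the paper, of the telemetry the algorithm reads and the two controls it produces; all field names and units are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Telemetry:
    """Aircraft state as reported by the avionics (hypothetical fields)."""
    x: float         # world position, east (m)
    y: float         # world position, north (m)
    z: float         # altitude (m)
    yaw: float       # heading, clockwise from north (rad)
    pitch: float     # rad
    roll: float      # rad
    airspeed: float  # m/s

@dataclass
class Controls:
    """The two outputs of the algorithm."""
    turn_rate: float  # commanded turn rate (rad/s), capped by airframe limits
    theta: float      # gimbal azimuth relative to the nose, [0, 2*pi) (rad)
    phi: float        # gimbal elevation, 0 = ahead, pi/2 = straight down (rad)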
2.2. Camera Setup

Figure 1 shows the kind of camera gimbal simulated. The gimbal is mounted under the aircraft airframe and has two degrees of freedom. θ is the heading (azimuth) of the camera with respect to the aircraft, with range [0°, 360°): 0° is the direction of the aircraft nose and the clockwise direction is positive. φ is the elevation of the camera with respect to the aircraft, with range [0°, 90°]: 0° is straight ahead and 90° is straight down. The UAV coordinate system is related to the world coordinate system by three angles (yaw, pitch and roll) and a translation, as shown in Figure 2.
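As a sketch of these angle conventions, the following computes the gimbal angles that point the camera at a known target position. It assumes level flight (zero pitch and roll) and a world frame with x east, y north, z up, with yaw measured clockwise from north; none of these frame conventions are stated in the paper, and the full system would rotate the line of sight through the complete yaw/pitch/roll chain of Figure 2.

import numpy as np

def gimbal_angles(aircraft_pos, yaw, target_pos):
    # Line of sight from the aircraft to the target in world coordinates.
    los = np.asarray(target_pos, float) - np.asarray(aircraft_pos, float)
    ground_range = np.hypot(los[0], los[1])
    bearing = np.arctan2(los[0], los[1])             # world bearing, clockwise from north
    theta = np.degrees(bearing - yaw) % 360.0        # camera heading w.r.t. the nose, [0, 360)
    phi = np.degrees(np.arctan2(-los[2], ground_range))  # 0 = straight ahead, 90 = straight down
    return theta, phi

# Example: aircraft at 300 m altitude heading north, target 300 m to the east.
print(gimbal_angles((0.0, 0.0, 300.0), 0.0, (300.0, 0.0, 0.0)))  # approximately (90.0, 45.0)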

Figure 1. Camera gimbal setup (gimbal azimuth θ and elevation φ with respect to the UAV axes).

Figure 2. Relationship between the camera, UAV and world coordinate systems (yaw, pitch and roll relate the UAV axes to the world axes).

2.3. Target Tracking

In order to follow the target, we must track it. Let us first describe how we track the target. A representation of the system setup is shown in Figure 3. The goal of the tracking part of the system is to obtain the world coordinates X_w of the target given its image coordinates X_i. To achieve this, we use a sensor model, described in Section 2.3.1, that gives us the geographical position of the target; with that position we may employ a navigation algorithm for the UAV to follow the target.

2.3.1. Sensor Model

We have as input the image coordinates of the target, (x_i, y_i), and require the world coordinates of the target, (x_w, y_w, z_w). To achieve this, we first transform the pixel coordinates of the target in the input image to world coordinates with the sensor transform

Π_sensor = T_z^a T_y^a T_x^a R_y^a R_p^a R_r^a R_φ^g R_θ^g,   (1)

where Π_sensor is the sensor transform as in Figure 4. The superscript a denotes aircraft rotation and translation parameters and the superscript g denotes gimbal rotation parameters; R is a rotation and T a translation. φ and θ are the gimbal parameters shown in Figure 1, and the subscripts r, p and y denote roll, pitch and yaw respectively, applied as shown in Figure 2.
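A minimal sketch of how Equation (1) might be composed from homogeneous transforms. The axis assignments (yaw about z, pitch about y, roll about x; gimbal elevation about the camera's lateral axis and azimuth about its vertical axis) are assumptions for illustration; the paper does not spell them out.

import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def translate(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

def sensor_transform(t, yaw, pitch, roll, phi, theta):
    """Equation (1): aircraft translation, then yaw/pitch/roll, then the
    two gimbal rotations; applied to homogeneous camera-frame points."""
    return (translate(*t) @ rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
            @ rot_y(phi) @ rot_z(theta))

# A target pixel (x_i, y_i) at focal length f becomes the homogeneous point
# X_i = np.array([x_i, y_i, f, 1.0]), transformed as sensor_transform(...) @ X_i.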

Figure 3. Image and world coordinates (image axes x_im, y_im, z_im and world axes x_w, y_w, z_w).

Figure 4. Sensor transformation (blue) and ray tracing (green) to obtain target coordinates.

Π_sensor cannot be applied directly to the 2D image coordinates of the target. We obtain the 3D camera coordinates of the target pixel as X_i = (x_i, y_i, f), where f is the focal length of the camera. The camera center, or center of projection, is initially at the origin and is also transformed by Equation 1. We thus have the transformed image location of the target in world coordinates as

Π_sensor X_i.   (2)

We can project this image point onto the ground using a simple ray-tracing function that we shall call TerrainProjection, giving the coordinates of the detected target in world coordinates as

X_w = TerrainProjection(Π_sensor X_i).   (3)

The function TerrainProjection projects a ray from the camera center, through the target image pixel, onto the terrain. Ray tracing requires geometric information about the environment, which in our case is the height of the terrain at each point. This information is commercially available as a Digital Elevation Map (DEM) from the United States Geological Survey. If this information is not available, planar terrain can be assumed, as in our simulation experiments. For details of ray tracing, refer to an introduction [5] or any standard computer graphics text.
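For the planar-terrain case used in our simulations, TerrainProjection reduces to a ray-plane intersection. A sketch under that assumption follows; a DEM-based version would instead march along the ray until it drops below the terrain height.

import numpy as np

def terrain_projection(camera_center, image_point_world, ground_z=0.0):
    """Intersect the ray from the camera center through the transformed
    image point with the plane z = ground_z (Equation 3, flat terrain)."""
    c = np.asarray(camera_center, float)
    p = np.asarray(image_point_world, float)
    d = p - c                          # ray direction
    if abs(d[2]) < 1e-9:
        raise ValueError("ray is parallel to the ground plane")
    s = (ground_z - c[2]) / d[2]       # ray parameter at the plane
    if s <= 0:
        raise ValueError("ground plane is behind the camera")
    return c + s * d                   # X_w, world coordinates of the target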

2.3.2. Target Detection and Tracking

Detection and tracking of the target in the image is not addressed as part of this work; for some detection methods, refer to Lipton [6] and Meer [7]. The job of the camera is to keep track of the target regardless of aircraft position and orientation. This requires additional control for the camera gimbal that we must calculate in real time. This camera control is, in a closed loop, tied to the world model, geographically tracking the target in the reference frame of the earth. The camera mounted on the vehicle provides input video frame by frame, and the target is detected in each frame. At that point, we have the telemetry of the aircraft as well as of the camera gimbal available to us. Using this information, the target is projected onto the plane of the earth and its geographical location is found. This geographical position of the target, together with the location and orientation of the aircraft, enables us to calculate the control angles of the camera gimbal so that it looks directly at the target. The problem with the above method is that it requires the camera to already be looking at the target in order to calculate the angles to look at the target again! To solve this dilemma, we employ a Kalman filter to estimate the future telemetry of the aircraft as well as of the target, generating camera controls that will be looking at the target in advance. Once we know the telemetry estimates for the target and the aircraft, we can use them to estimate the camera angles (θ and φ, as described in Section 2.2) to look at the target at the next time step.
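The paper does not give the Kalman filter's state model; a common choice, shown here purely as an assumption, is a constant-velocity model whose predict step supplies the look-ahead positions used to aim the gimbal one time step early.

import numpy as np

def kalman_predict(x, P, dt, q=1.0):
    """Predict step of a constant-velocity Kalman filter.
    State x = [px, py, vx, vy]; q scales the process noise (a tuning
    assumption, not a value from the paper)."""
    F = np.array([[1.0, 0.0, dt, 0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    Q = q * np.eye(4)                # simplistic process-noise covariance
    return F @ x, F @ P @ F.T + Q    # predicted state and covariance

The predicted target position and aircraft pose are then fed to the gimbal-angle computation of Section 2.2, so the camera is already pointing at the target when the next frame arrives.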

2.4. Navigation

Figure 5. Navigation strategy (topographical view: the UAV with direction vector d, the circle of radius r around the target, and the two tangents t_1 and t_2 at angles τ_1 and τ_2 from d).

The navigation algorithm presented is simple and has very few parameters to adjust; in fact, just one. We ideally make a perfect circle around a static target, and the only parameter of the navigation algorithm is the radius of that circle.

τ = min(τ_1, τ_2)   (4)

In Figure 5, the UAV and target are shown in a topographical view. The direction vector of the UAV is d, and the circle of radius r around the target is the goal trajectory we are trying to achieve. The direction vector d makes angles τ_1 and τ_2 with the two tangents t_1 and t_2. The goal is to take the direction of the nearest possible tangent; hence we take the smaller of τ_1 and τ_2 and tell the UAV to change its direction d by τ, as shown in Equation 4; in other words, to turn. To calculate τ_1 and τ_2, we first take a vector T from the UAV to the target and calculate τ as

τ = arcsin(r / |T|).   (5)

Then the angle δ between d and T is

δ = arccos(d̂ · T̂),   (6)

where d̂ and T̂ are unit vectors. τ_1 and τ_2 are:

τ_1 = δ - τ   (7)

τ_2 = δ + τ   (8)

There are two cases other than the ones shown in Figure 5 to consider. First, d may lie between t_1 and t_2; we again do the same, heading slightly away from the target. This results in better overall tracking of the target, because if the UAV passes directly over the target it has to loop around again, which might take the aircraft farther away from the target. Second, the UAV may be within the circle of radius r around the target; in this case we simply allow the aircraft to go straight ahead, because the goal of staying close to the target is already being achieved. Whenever the aircraft goes beyond a distance r from the target, the algorithm will adjust the heading of the aircraft to start following a circular pattern around the target. The only parameter of the algorithm, the distance r, can be set to any desired value depending on the capability of the camera to track a target from the aircraft, and also on the maneuverability and speed of the aircraft; for example, it might not be practical to keep a very high speed aircraft very close to the target.

2.5. Overall Loop

The overall closed-loop algorithm is summarized below; a code sketch of the navigation rule used in step 4 follows the list.

1. Detect the target in the camera input image (known from telemetry for the scope of this paper).
2. Project the target from the image onto the earth plane and find the geographical position of the target.
3. Predict the positions of the target and the aircraft.
4. Calculate the required turn rate of the aircraft (Equation 4).
5. Calculate the tracking controls for the camera.
6. Apply the turn rate and camera controls in simulation and move to the next iteration.
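A minimal sketch of the navigation rule of Equations (4)-(8), including the inside-the-circle special case. Sign conventions for turning left versus right, and the conversion of this turn angle into a rate-limited turn-rate command, are airframe-specific and omitted here.

import numpy as np

def turn_angle(uav_pos, uav_dir, target_pos, r):
    """Turn toward the nearest tangent of the circle of radius r
    around the target (topographical view, 2D vectors)."""
    T = np.asarray(target_pos, float) - np.asarray(uav_pos, float)
    dist = np.linalg.norm(T)
    if dist <= r:
        return 0.0                      # inside the circle: fly straight ahead
    d_hat = np.asarray(uav_dir, float) / np.linalg.norm(uav_dir)
    T_hat = T / dist
    tau = np.arcsin(r / dist)                              # Eq. (5)
    delta = np.arccos(np.clip(d_hat @ T_hat, -1.0, 1.0))   # Eq. (6)
    tau1, tau2 = delta - tau, delta + tau                  # Eqs. (7), (8)
    return min(tau1, tau2)                                 # Eq. (4)

# Example: UAV 1 km south of the target flying north, orbit radius 300 m;
# the command is about -0.305 rad, steering onto the nearer tangent.
print(turn_angle((0.0, -1000.0), (0.0, 1.0), (0.0, 0.0), 300.0))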

3. RESULTS

To test our ideas, we used the simulation tools provided with Piccolo [8], which use real-world flight dynamics for realistic evaluation of our methods. The turn rate of the aircraft is capped at a maximum depending on the type of aircraft as well as on the time it takes for the entire algorithm to complete one iteration; the cap on the turn rate is inversely proportional to that time. If we can update the turn rate very frequently, we can afford to ask the aircraft to turn very quickly; if, however, it takes some time for the system to iterate through the whole process, a high turn rate may result in the aircraft overshooting the desired heading. In Figures 6(a), 6(b) and 6(c), the target follows a straight line at varying speeds, and the aircraft is able to follow the target quite closely in each case; the experiment in Figure 6 was performed to ascertain the ability of the navigation strategy to work at different target speeds. Figure 8 is the topographical plot of a target moving with changing directions and the UAV following it with reasonable proximity. Note that we do not plan a particular path for the UAV; rather, the path is a consequence of our control decisions at different times. Several experiments were performed with the target moving in different directions, making turns, varying speed and even stopping; in all cases, the aircraft is able to closely follow the target. Figure 7 shows a simulation of the target with varying speeds and patterns of movement, with sudden changes in direction and/or speed. At all times, the aircraft satisfactorily follows the target.

Figure 6. Aircraft trajectory in red, target in blue; panels (a), (b) and (c) show a target moving in a straight line at different speeds. Scale is in meters with respect to an arbitrary origin.

4. CONCLUSION

The work presented here is part of an ongoing project to achieve complete autonomous control of an aerial vehicle following a moving target on the ground. This would enable us to automate a task that currently requires continuous human intervention. Automation of such simple tasks can free up human resources for more complex strategic decisions, as well as reducing time and cost and increasing situational awareness over a large area of interest.

REFERENCES

1. M. Quigley, M. A. Goodrich, S. Griffiths, A. Eldredge, and R. Beard, "Target acquisition, localization, and surveillance using a fixed-wing, mini-UAV and gimbaled camera," in International Conference on Robotics and Automation, 2005.
2. J. Lee, R. Huang, A. Vaughn, X. Xiao, J. K. Hedrich, M. Zennaro, and R. Sengupta, "Strategies of path-planning for a UAV to track a ground vehicle," in AINS Conference, 2003.
3. C. R. Husby, "Path generation tactics for a UAV following a moving target," Master's thesis, Air Force Institute of Technology, Wright-Patterson Air Force Base, OH, 2005.
4. H. Shim, T. J. Koo, F. Hoffmann, and S. Sastry, "A comprehensive study of control design for an autonomous helicopter," in 37th IEEE Conference on Decision and Control, 4, pp. 3653-3658, 1998.
5. A. S. Glassner, An Introduction to Ray Tracing, Morgan Kaufmann Publishers, Inc., San Diego, 1989.
6. A. J. Lipton, H. Fujiyoshi, and R. S. Patil, "Moving target classification and tracking from real-time video," in 4th IEEE Workshop on Applications of Computer Vision (WACV 98), p. 8, 1998.
7. D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-based object tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003.
8. B. Vaglienti, R. Hoag, and M. Niculescu, Piccolo System User's Guide, Cloud Cap Technologies, http://www.cloudcaptech.com, 2001-05.
9. C. Olson, The FlightGear Project, FlightGear.org, 2001.

Figure 7. Trajectories of the UAV (red) and target (blue), following a varied and complex pattern with the target moving at different speeds. It is clear from the illustration that a single following strategy works regardless of the movement pattern of the target.

Figure 8. Panels 8(a) and 8(b) show the target moving at different speeds and how the UAV makes different paths to adjust accordingly.