MAV Stabilization using Machine Learning and Onboard Sensors




CS678 Final Report
Cooper Bills and Jason Yosinski
{csb88,jy495}@cornell.edu
December 17, 2010

Abstract

In many situations, Miniature Aerial Vehicles (MAVs) are limited to using only onboard sensors for navigation. This limits the data available to algorithms used for stabilization and localization, and current control methods are often insufficient to allow reliable hovering in place or trajectory following. In this research, we explore using machine learning to predict the drift (flight path errors) of an MAV while executing a desired flight path. This predicted drift will allow the MAV to adjust its flight path to maintain the desired course.

1 Introduction

Past automation work with miniature aerial vehicles (MAVs) at Cornell has produced interesting results [1] and presented additional challenges. During past projects, results have often been limited not by insufficiencies in planning algorithms, but by navigation errors stemming from inadequate control in the face of realistic, breezy operating environments. In many cases the MAVs simply drift off the desired path (Figure 1). Thus, this project focuses on refining the basic motion of the same platform, and in particular on minimizing its drift. Our work focuses on the reduction of low-frequency drift in GPS-denied environments. Similar work has been done using neural networks [4] and adaptive-fuzzy control methods [5] to stabilize a quadrotor. Though this research has produced promising results, these methods were demonstrated only in simulation, not via live testing.

Figure 1: Desired path vs. actual path due to drift.

2 Platform

We are using the Parrot AR.Drone quadrotor (Figure 2), a toy recently made available to the general public. Commands and images are exchanged over an ad-hoc WiFi connection between a host machine and the AR.Drone. All of our control code runs on the host machine, which sends commands to the drone. The AR.Drone has a multitude of built-in sensors (accelerometers, gyroscopes, sonar, and more), which we utilize for learning.

Figure 2: Our Parrot AR.Drone platform.

3 Approach

Given the large number of sensors and the navigation data available on our quadrotor platform, we suspected that there are unexploited correlations between these sensor values and the drift of the quadrotor. It is difficult for a human to recognize any solid correlation by looking at the data directly, but by using supervised learning methods we have been able to discover relationships between sensor values and quadrotor behavior, and to use these to predict a large portion of the drift. To do this we recorded the onboard sensor data while simultaneously collecting ground truth locations over time. We collected the ground truth values using the tracking system

described in Section 3.1. After applying the post-processing described in Section 3.2 to the collected data, we used supervised learning methods to learn to predict the drone's drift.

For our supervised learning, we tested two support vector machine (SVM) libraries that support regression. The first algorithm we tested was the LibSVM algorithm [3] in the Shogun machine learning toolbox [2]. The second was the SVM-Light regression algorithm [6], which produced similar results. Our system only records and trains on drift in the horizontal plane. We trained each axis (defined as X and Y) separately, resulting in two separately trained systems.

3.1 Tracking System

To collect ground truth data, we created a custom tracking system that tracks MAVs in the horizontal plane. This system uses an infrared LED on the drone and a Nintendo Wii remote (connected to the computer through Bluetooth) to track the MAV. During data collection, we record time, position from the Wii remote (in pixels), and 60 dimensions of data for prediction. This data comes from the onboard sensors as well as the navigation computer of the MAV. To test our tracking system and to assist with data collection, we constructed a simple PD controller to center the drone in real time. This system works well, and is demonstrated here: http://youtu.be/ptj6e7jw2ly

Figure 3: The tracking system developed to collect data for our project (Section 3.1). Left: the tracking hardware. Right: MAV being tracked.
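The PD centering controller mentioned in Section 3.1 can be sketched as follows. This is a minimal illustration, not our actual controller: the gains and the single-axis interface are hypothetical placeholders, and in practice one such loop runs per horizontal axis on the pixel error reported by the Wii tracker.

```python
# Minimal PD controller sketch for centering the drone over a target pixel.
# KP and KD are hypothetical placeholder gains, not the values we used.

KP = 0.004   # proportional gain: command per pixel of error
KD = 0.002   # derivative gain: command per (pixel/second) of error rate

def pd_command(error, prev_error, dt):
    """Return a tilt command from the current and previous pixel error.

    error, prev_error: pixel offsets from the desired position
    dt: time between the two measurements, in seconds
    """
    derivative = (error - prev_error) / dt if dt > 0 else 0.0
    return KP * error + KD * derivative
```

One instance of this function would drive roll from the x pixel error and another drive pitch from the y pixel error, with commands clamped to the drone's allowed range.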

3.2 Training Data

The training data we collected is divisible into three types: time, external data available from the tracking system, and onboard data available from the quadrotor. For training, we learn a mapping from the onboard data to the position derivatives (pixels per second in each of x and y) computed using the time and the external data from the Wii tracker. During the testing or prediction phase, we predict the derivatives from the onboard data alone. The fields of data we collected, including all 60 dimensions of onboard data, are shown below:

Data fields collected and used for training and prediction
Time: time
Wii tracking system: wii_x, wii_y, wii_xd, wii_yd, wii_age, wii_staleness
Onboard data: altitude, roll, pitch, yaw, vx, vy, vz, acc_x, acc_y, acc_z, controlstate, vbat, vphi_trim, vtheta_trim, vstate, vmisc, vdelta_phi, vdelta_theta, vdelta_psi, vbat_raw, ref_theta, ref_phi, ref_theta_I, ref_phi_I, ref_pitch, ref_roll, ref_yaw, ref_psi, rc_ref_pitch, rc_ref_roll, rc_ref_yaw, rc_ref_gaz, rc_ref_ag, euler_theta, euler_phi, pwm_motor1, pwm_motor2, pwm_motor3, pwm_motor4, pwm_sat_motor1, pwm_sat_motor2, pwm_sat_motor3, pwm_sat_motor4, pwm_u_pitch, pwm_u_roll, pwm_u_yaw, pwm_yaw_u_I, pwm_u_pitch_planif, pwm_u_roll_planif, pwm_u_yaw_planif, pwm_current_motor1, pwm_current_motor2, pwm_current_motor3, pwm_current_motor4, gyros_offsetx, gyros_offsety, gyros_offsetz, trim_angular_rates, trim_theta, trim_phi

Over the course of this project, the following data points were collected: 11,934 hovering, 1,640 with a directional command, and 417 with gusts of wind, for a total of 13,991 data points. Plots of a few fields of data over five capture sessions are given in Figure 4.

Once the data is collected and logged to a file, we run post-processing to make it more suitable for digestion by the learning algorithms. First, we remove all duplicate entries and stale entries caused by network delays; then we compute the position derivatives (velocities) in pixels per second for both the X and Y directions, and it is this data that we learn on.
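The post-processing steps above (dropping stale samples, then differentiating position into pixels-per-second velocities) can be sketched roughly as follows; the array layout and the staleness encoding here are assumptions for illustration, not our exact log format.

```python
import numpy as np

def postprocess(t, wii_x, wii_y, staleness):
    """Drop stale samples, then compute pixel/sec velocities.

    t, wii_x, wii_y, staleness: 1-D arrays of equal length. 'staleness'
    marks samples repeated due to network delay (hypothetical encoding:
    0 means fresh). Returns (timestamps, x-velocity, y-velocity), one
    entry shorter than the fresh samples because of differencing.
    """
    t = np.asarray(t, dtype=float)
    x = np.asarray(wii_x, dtype=float)
    y = np.asarray(wii_y, dtype=float)
    fresh = np.asarray(staleness) == 0       # keep only fresh samples
    t, x, y = t[fresh], x[fresh], y[fresh]
    dt = np.diff(t)                          # seconds between samples
    xd = np.diff(x) / dt                     # pixels per second, X axis
    yd = np.diff(y) / dt                     # pixels per second, Y axis
    return t[1:], xd, yd
```

These velocity arrays are the regression targets; the matching rows of onboard sensor data are the inputs.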
Figure 5 shows the singular values of the data before and after normalization, to give an idea of how many dimensions of information are present in the data.
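The dimensionality check behind Figure 5 amounts to taking the singular values of the sensor matrix before and after per-column normalization; a minimal sketch (function name and the unit-variance normalization are our illustration, not a fixed recipe):

```python
import numpy as np

def singular_values(X, normalize):
    """Singular values of an (n_samples x n_features) sensor matrix.

    With normalize=True, each column is shifted to zero mean and scaled
    to unit variance first, so no single large-magnitude sensor (e.g. a
    raw battery voltage) dominates the spectrum.
    """
    X = np.asarray(X, dtype=float)
    if normalize:
        X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    # compute_uv=False returns only the singular values, sorted descending
    return np.linalg.svd(X, compute_uv=False)
```

A slowly decaying spectrum after normalization indicates many non-redundant sensor dimensions, which is what Figure 5 (right) shows.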

Figure 4: Some of the data collected from five manual runs. The first plot is time, the next six are external data from the Wii tracking system, and the rest represent a fraction of the 60 dimensions of internal data from the quadrotor's sensors. Note that these graphs are not meant to be analyzed, but are presented merely as an example subset of the available features over time.

4 Results

By training on data collected in the lab, we were able to predict quadrotor drift quite accurately. Figure 6 shows the actual drift velocity versus the drift velocity predicted by our learned model. The predictions are not perfect, but they match the actual drift quite well, certainly much better than the null hypothesis of no drift at all.

While Figure 6 shows results for hovering and drifting in normal indoor conditions, Figure 7 shows results obtained in an artificially windy environment. We created this environment by waving a large board back and forth, buffeting the quadrotor. A model was then trained on a combination of windy and non-windy data and tested on other windy runs. As the figure shows, the quadrotor drifts significantly in one direction, and the prediction follows quite well.
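The per-axis training described in Section 3 can be sketched as below, using scikit-learn's epsilon-SVR as a stand-in for the LibSVM and SVM-Light regressors we actually used (an assumption for illustration; the kernel and hyperparameters here are placeholders, not our tuned values).

```python
import numpy as np
from sklearn.svm import SVR

def train_drift_models(sensors, xd, yd):
    """Fit one SVM regressor per horizontal axis.

    sensors: (n_samples, n_features) matrix of onboard sensor readings
    xd, yd:  ground-truth drift velocities (pixels/sec) from the tracker
    """
    model_x = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(sensors, xd)
    model_y = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(sensors, yd)
    return model_x, model_y

def predict_drift(model_x, model_y, sensors):
    """Predict per-axis drift velocities from onboard sensors alone."""
    return model_x.predict(sensors), model_y.predict(sensors)
```

Keeping the two axes as independent regressors mirrors our setup: each model sees the full 60-dimensional sensor vector but predicts only one velocity component.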

Figure 5: (left) Singular values of the 60-dimensional training data before normalization. (right) Singular values of the 60-dimensional training data after normalization. As one can see, the sensor data set is fairly rich, offering quite a few dimensions of non-covariant information.

5 Future Work

Although our algorithm was able to accurately predict drift, we are still working on making the prediction code fast enough to control the quadrotor in live flight. The SVM regression code runs quickly, but we are currently facing difficulties with delays due to I/O as well as a suboptimal configuration. We integrated and tested a complete feedback loop, but delays caused it to be unstable.

6 Conclusion

We are making strong progress towards predicting and removing quadrotor drift. The drift is predicted fairly accurately from a 60-dimensional sensor data vector from the quadrotor, and we hope to soon demonstrate a controller enhanced by the predicted drift.

References

[1] Cooper Bills, Yu-hin Joyce Chen, and Ashutosh Saxena. Autonomous Vision-based MAV Flight in Common Indoor Environments. ICRA 2011. (Submitted)

[2] Soeren Sonnenburg, Gunnar Raetsch, Sebastian Henschel, Christian Widmer, Jonas Behr, Alexander Zien, Fabio de Bona, Alexander Binder, Christian Gehl, and Vojtech Franc. The SHOGUN Machine Learning Toolbox. Journal of Machine Learning Research, 11:1799-1802, June 2010.

[3] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm

[4] C. Nicol, C.J.B. Macnab, and A. Ramirez-Serrano. Robust neural network control of a quadrotor helicopter. CCECE 2008.

[5] C. Coza and C.J.B. Macnab. A New Robust Adaptive-Fuzzy Control Method Applied to Quadrotor Helicopter Stabilization. NAFIPS 2006.

[6] T. Joachims. Making large-scale SVM Learning Practical. In Advances in Kernel Methods - Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola (eds.), MIT Press, 1999.

Figure 6: Results for normal indoor conditions. Top left: predicted versus actual x drift at each timestep, in pixels per second. Top right: predicted versus actual y drift at each timestep, in pixels per second. Bottom: actual position versus position inferred from integrating predictions over time.

Figure 7: Results for an environment with artificial wind. This environment was created by waving a large board back and forth, blowing the quadrotor off to the side. The learned model was trained on a combination of windy and non-windy data and then tested on other windy runs. As one can see in the figure, the quadrotor drifts significantly in one direction, and the prediction follows quite well. Top left: predicted vs. actual x drift. Top right: predicted vs. actual y drift. Bottom: integrated position.