Crater detection with segmentation-based image processing algorithm



M. Spigai, S. Clerc (Thales Alenia Space, France); V. Simard-Bilodeau (U. Sherbrooke and NGC Aerospace, Canada)

Outline
- Context
- Reference scenarios, needs
- Quick state of the art
- Algorithm principle
- Performance, sensitivity
- Conclusion / perspective

Context: precision landing for interplanetary missions
- Mission to Mars: reach points of scientific interest on Mars
- Mission to the Moon: future human base on the Moon
- Asteroid exploration
The typical landing-precision requirement is 100-200 m.

Standard lander navigation
A standard lander navigation system:
- Takes an initial position knowledge initialized from ground tracking
- Propagates the position by measuring non-gravitational acceleration with an IMU
The initial position error is typically around 1 km, and the IMU integration error is around 600 m to 10 km. Given the landing precision required on the Moon, there is a clear need for terrain-relative sensors (altimeter, lidar and/or camera) to reduce the navigation error:
- Joint PhD thesis, NGC Aerospace (Canadian SME) / ESA / TAS-F, 2008-2011, V. Simard-Bilodeau: absolute position and surface-relative velocity from surface features during the proximity operations of a planetary mission (orbiting and landing phases)
- Internal TAS-F study on vision-based navigation to improve landing precision: crater detection and identification
Terrain-relative sensors are also needed for other purposes, such as fine control of terminal velocity and hazard detection and avoidance (TAS-I studies, Turin, Italy).

Lunar lander scenario: reference for the study
At a given time, the goal is to detect as many craters as possible in an image and to match them against a reference database in order to improve the current localisation of the lander. The crater position errors are fed to an Extended Kalman Filter used as the state estimator.
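The fusion step mentioned above can be sketched as a single EKF measurement update. This is a minimal, generic sketch (the interface, state layout and noise values are illustrative assumptions, not the filter actually used in the study):

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update: fuse an observed, matched crater
    position z with the predicted state x (hypothetical interface)."""
    y = z - h(x)                        # innovation: observed minus predicted position
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y                   # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new
```

With equal prior and measurement uncertainty, a 1-D update moves the estimate halfway towards the measurement and halves the covariance, which is the expected Kalman behaviour.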

Needs for the present image processing study
Reference scenario: lunar lander braking phase. The principle, on the current grey-level image:
- Detect as many craters as possible (not necessarily all of them)
- Model each detected crater by an ellipse: localisation, axes, orientation
The study focuses on the Pd/Pfa trade-off and the localisation precision. Expected crater detection algorithm properties:
- A low probability of false alarm with a reasonable probability of detection
- A mean localisation error lower than 3-4 pixels
- A relatively simple algorithm with real-time capability
- Robustness to changes of pose, resolution and illumination
No a priori information on potential crater positions is used in the study: this is the worst case.

Quick state of the art
Mainly drawn from TAS-I and TAS-F Cannes work and the PhD thesis of V. Simard-Bilodeau.
- Hough-transform based. Pros: robust against edge discontinuity. Cons: requires high computational power; sensitive to noise.
- Edge-based. Pros: low false-detection rate and good pixel-level crater localisation accuracy. Cons: not robust to the noisy edges of old craters (though more robust than most Hough-based algorithms); requires high computational power.
- Local low-level features (Harris, SIFT, SURF, ...). These features lead to algorithms ranging from very slow to very fast, with limited robustness to pose/scale/lighting conditions.
[Figure: three panels showing the input image, the filtered thin edges, and the autonomously detected ellipses]

Parallel field of research: target recognition in SAR imagery
Example of a database (MSTAR, non-confidential). A feature vector is computed in order to distinguish vehicle classes.

Algorithm principle: high-level surface features
Main hypothesis: a typical crater is composed of a shadow followed by a bright object representing the illuminated part of the crater.
A crater object is a set of pixels (x1 = columns, x2 = lines); the ellipse estimate is obtained from the eigenvalues/eigenvectors of the covariance matrix of (x1, x2):

\Sigma = \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{21} & \sigma_2^2 \end{pmatrix}
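The covariance-based ellipse estimate above can be sketched directly. This is a minimal NumPy version (the function name and the 2-sigma axis scaling are illustrative choices, not taken from the slides):

```python
import numpy as np

def fit_ellipse(pixels):
    """Fit an ellipse to a set of (column, line) crater pixels via the
    eigen-decomposition of their 2x2 covariance matrix."""
    pts = np.asarray(pixels, dtype=float)
    centre = pts.mean(axis=0)                 # ellipse centre = pixel centroid
    cov = np.cov(pts, rowvar=False)           # [[s1^2, s12], [s21, s2^2]]
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    # Semi-axes proportional to the square roots of the eigenvalues
    # (factor 2 gives a 2-sigma contour; an assumed scaling choice).
    semi_minor, semi_major = 2.0 * np.sqrt(eigvals)
    # Orientation = angle of the major-axis eigenvector, in degrees.
    vx, vy = eigvecs[:, 1]
    angle = np.degrees(np.arctan2(vy, vx))
    return centre, semi_major, semi_minor, angle
```

On an elongated pixel blob, the major axis aligns with the long direction of the blob, which is what makes the fit direct compared with edge-based ellipse fitting.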

Algorithm steps
1. Initial image
2. Unsupervised segmentation (example: K-means with N classes)
3. Selection of potential dark/bright objects, using prior information: rough sunlight angle and elevation, min/max size of the objects of interest
4. Ellipse characterization
5. Geometrical check
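The segmentation step (step 2) can be sketched as a 1-D K-means on grey levels; this is a simple stand-in for whatever implementation was actually used, with an even initialisation of the class centres as an assumed choice:

```python
import numpy as np

def kmeans_grey(image, n_classes=5, n_iter=20):
    """Unsupervised 1-D K-means on grey levels (n_classes = 5-6 gave
    good robustness per the sensitivity analysis)."""
    arr = np.asarray(image, dtype=float)
    pix = arr.ravel()
    # Initialise centres evenly across the observed grey-level range.
    centres = np.linspace(pix.min(), pix.max(), n_classes)
    for _ in range(n_iter):
        # Assign each pixel to its nearest centre.
        labels = np.argmin(np.abs(pix[:, None] - centres[None, :]), axis=1)
        # Move each centre to the mean of its assigned pixels.
        for k in range(n_classes):
            if np.any(labels == k):
                centres[k] = pix[labels == k].mean()
    return labels.reshape(arr.shape), centres
```

The darkest and brightest classes of the resulting label map are the natural candidates for the shadow/illuminated pairing in step 3.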

Data available for the study
- TAS-F Cannes synthetic images: generated with the ESA software PANGU; fine ground truth available, enabling quantitative results; nadir/slant views; high, low and very low Sun elevation.
- TAS-F / TAS-I database: real images from ESA and NASA, plus synthetic images (ESA software PANGU); no fine ground truth, so qualitative visual results only; Moon, Mars and asteroid scenes.

Qualitative results (real images)
Tests on real images: the same algorithm performs correctly on very different images, with a very low level of false alarms.

Qualitative results (real images)
But the algorithm is not always well adapted: for instance, detection is more difficult on highly eroded Martian craters. (Mars, Hourglass Crater, ESA; Mars, ice crater, NASA HiRISE)

Quantitative results (synthetic images)
Test images generated with PANGU: 512x512 grey-level images with an associated ground truth of crater locations and sizes. Three Sun elevations: 77.5°, 22.5° and 2.5° (approximately the lunar-pole case). Two different views: nadir and slant. "Raw" images: nadir view with high, low and very low Sun.
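With ground truth available, detection probability, false alarms and mean localisation error can be scored by matching detections to the nearest ground-truth crater. This is an illustrative scoring sketch (nearest-neighbour matching within a pixel tolerance), not the study's exact evaluation protocol:

```python
import numpy as np

def detection_stats(detected, truth, max_err=4.0):
    """Match detected crater centres to ground-truth centres and report
    (Pd, number of false alarms, mean localisation error in pixels)."""
    gt = [np.asarray(t, dtype=float) for t in truth]
    matched, errors = set(), []
    false_alarms = 0
    for d in (np.asarray(p, dtype=float) for p in detected):
        dists = [np.linalg.norm(d - t) for t in gt]
        j = int(np.argmin(dists)) if dists else -1
        if j >= 0 and dists[j] <= max_err and j not in matched:
            matched.add(j)          # one detection per ground-truth crater
            errors.append(dists[j])
        else:
            false_alarms += 1       # unmatched or too far: false alarm
    pd = len(matched) / len(gt) if gt else 0.0
    mean_err = float(np.mean(errors)) if errors else float("nan")
    return pd, false_alarms, mean_err
```

The 4-pixel tolerance mirrors the 3-4 pixel localisation requirement stated earlier.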

Quantitative results (synthetic images)
[Four figure-only slides: quantitative result plots on the synthetic images]

Sensitivity to algorithm parameters
The general sensitivity to parameters is acceptable. The two most critical parameters are:
- The number of clusters of the K-means segmentation algorithm. An interval of n = 5-6, set empirically, gave quite good robustness in practice. Linking this value to a physical description of the typical terrain of the landing area should bring further benefit.
- The maximum/minimum crater size (in pixels). This parameter is critical because it depends strongly on the scene, the phase of descent and the image resolution. Nevertheless, with knowledge of the scene it should be easy to set.
Concerning the other parameters, the algorithm is quite robust. Some enhancements should be possible by using a priori knowledge of the scene, the phase of descent, etc.

Elements for real-time capability
To give a feeling for the real-time capability, we give some values of the CPU time needed and identify which part of the algorithm is the most demanding. Note that the algorithm was coded in MATLAB without optimization, and that the tests were performed on a TAS laptop not designed for scientific computing, so the absolute performance can be improved.

Synthesis on the algorithm
Working with shapes rather than edges has some advantages:
- Bright/dark pairing is easier
- Ellipse fitting is direct
But also some drawbacks:
- Craters are not detected when the dark/bright object is connected to the background
CPU time and complexity seem compatible with our hypothesis, and probably faster than all the other algorithms. Robustness seems very good: the behavior is similar on real and synthetic images, with few parameters to adjust (essentially knowledge of the Sun direction). Precision seems compatible with our hypothesis; the crater size error seems only weakly correlated with crater size. The detection probability is lower than expected, but still manageable.

Conclusion/Perspective
Conclusion: the algorithm, based on segmentation of the scene and ellipse estimation from the pixel covariance, has been defined, and its performance and robustness have been studied for the lunar lander braking phase. It seems to be a good candidate for on-board navigation.
Perspective:
- Take into account prior information, coming from the database (including identification), from simple geometric considerations on craters, or from a priori knowledge of the phase of descent, the Sun position, the physical constitution of the ground, etc.
- Test possible enhancements: pre-processing (filtering, enhancement, etc.), other segmentation algorithms.