CS 534: Computer Vision 3D Model-based recognition
Ahmed Elgammal, Dept. of Computer Science

High-Level Vision: Object Recognition
What does object recognition mean? Two main recognition tasks:
- Categorization: what is the class of the object (face, car, motorbike, airplane, etc.)?
- Identification: John's face, my car.
Which of the two tasks is easier, and which comes first? The answers from neuroscience and computer vision are different:
- Computer vision: identification is relatively easy; categorization is very hard.
- Psychologists and neuroscientists: in biological visual systems, categorization seems to be the simpler and more immediate stage of the recognition process.
Two main schools:
- Object-centered approaches
- View-based (image-based) approaches
Outline
- Geometric model-based object recognition
- Choosing features
- Approaches for model-based vision:
  - Obtaining hypotheses by pose consistency
  - Obtaining hypotheses by pose clustering
  - Obtaining hypotheses by using invariants
- Geometric hashing
- Affine-invariant geometric hashing

Geometric Model-Based Object Recognition
[Figure: a modelbase of geometric models (Object 1 Model, Object 2 Model, ..., Object N Model) matched against an image.]
Problem definition: given geometric object models (a modelbase) and an image, find out which object(s) we see in the image.
Framework
[Figure: the pose recovery problem. An object model and the image are related by an unknown geometric transformation (object position and orientation). Features are extracted from both, correspondences are found or hypothesized, and the transformation is solved for (pose recovery). The model is then back-projected (rendered) into the image for verification.]
Verification: compare the rendering to the image; if the two are sufficiently similar, accept the hypothesis.
General idea
- Hypothesize object identity and pose
- Recover the camera
- Render the object in that camera (widely known as back-projection)
- Compare to the image (verification)
Issues:
- Where do the hypotheses come from?
- How do we compare to the image (verification)?

Choosing Features
Given a 3-D object, how do we decide which points from its surface to choose as its model?
- Choose points that will give rise to detectable features in images.
- For polyhedra, the images of the vertices are points in the image where two or more long lines meet; these can be detected by edge detection methods.
- Points in the interiors of regions, or along straight lines, are not easily identified in images.
Verification
We do not know anything about scene illumination (if we knew it, we could render close images), so verification should be robust to changes in illumination. Use object silhouettes rather than texture or intensities.
Edge score:
- Are there image edges near predicted object edges?
- Very unreliable; in texture, the answer is usually yes.
Oriented edge score (a small sketch of such a score follows the example images below):
- Are there image edges near predicted object edges with the right orientation?
- Better, but still hard to do well (see the example images).
No one has used texture:
- E.g., does the spanner have the same texture as the wood? This is a model selection problem; more on these later, though no one has framed verification this way.

Example images
[Figure: example verification images.]
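The oriented edge score can be made concrete with a short sketch. This is a minimal illustration, not the course's code: it assumes predicted model edges and detected image edges are given as (x, y, orientation) samples (hypothetical inputs chosen for the example) and counts how many predicted edge samples have a nearby image edge with a compatible orientation.

```python
import numpy as np

def oriented_edge_score(predicted, detected, dist_thresh=2.0, angle_thresh=np.deg2rad(15)):
    """Fraction of predicted edge samples supported by a nearby image edge
    with a compatible orientation.

    predicted, detected: arrays of shape (N, 3) with rows (x, y, theta),
    where theta is the edge orientation in radians.
    """
    if len(predicted) == 0:
        return 0.0
    supported = 0
    for x, y, theta in predicted:
        # distance from this predicted sample to every detected edge sample
        d = np.hypot(detected[:, 0] - x, detected[:, 1] - y)
        # orientation difference, wrapped to [0, pi/2] (edges are undirected)
        dtheta = np.abs(detected[:, 2] - theta) % np.pi
        dtheta = np.minimum(dtheta, np.pi - dtheta)
        if np.any((d < dist_thresh) & (dtheta < angle_thresh)):
            supported += 1
    return supported / len(predicted)

# Toy usage: predicted samples along one horizontal edge vs. noisy detections.
rng = np.random.default_rng(0)
pred = np.array([[i, 0.0, 0.0] for i in range(10)])
det = pred.copy()
det[:, :2] += rng.normal(scale=0.5, size=(10, 2))   # jitter positions
det[:, 2] += rng.normal(scale=0.1, size=10)         # jitter orientations
print(oriented_edge_score(pred, det))                # close to 1.0
```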
Choosing Features
Example: why not choose the midpoints of the edges of a polyhedron as features?
- Midpoints of projections of line segments are not, in general, the projections of the midpoints of those line segments.
- If the entire line segment is not identified in the image, we introduce error in locating the midpoint.

Obtaining Hypotheses
With M geometric features in the image, N geometric features on each object, and L objects, the brute-force algorithm (enumerate all possible correspondences) is O(L M^N)! Instead, utilize geometric constraints: only a small number of correspondences is needed to obtain the projection parameters, and once these parameters are known, the positions of all the other features are known too.
Obtaining Hypotheses
Three basic approaches:
- Obtaining hypotheses by pose consistency
- Obtaining hypotheses by pose clustering
- Obtaining hypotheses by using invariants

Pose Consistency
[Figure: the pose recovery framework again, but correspondences are hypothesized using only a small number of features (a frame group); the transformation is solved for, the model is back-projected, and the hypothesis is verified.]
The rest of the features have to be consistent with the recovered pose.
Pose consistency
Also called alignment: the object is being aligned to the image. Correspondences between image features and model features are not independent; geometric constraints apply. A small number of correspondences yields the missing camera parameters, and the others must be consistent with this. Typically the intrinsic camera parameters are assumed to be known.
Strategy:
- Generate hypotheses using small numbers of correspondences (e.g., triples of points for a calibrated perspective camera).
- Back-project and verify.
Frame group: a group of features that can be used to yield a camera hypothesis. Examples for a perspective camera:
- three points;
- three directions (trihedral vertex) and a point (necessary to establish scale);
- two directions emanating from a shared origin (dihedral vertex) and a point.
A sketch of the alignment loop appears below.
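The following is a minimal sketch of the alignment loop, not the lecture's implementation. Instead of the calibrated three-point perspective pose solver, it uses a 2D affine camera as a stand-in, so a frame group of three model-image point correspondences determines the transformation exactly; the point sets, tolerance, and scoring rule are illustrative assumptions.

```python
import itertools
import numpy as np

def affine_from_triplet(model_pts, image_pts):
    """Solve for A (2x2) and t (2,) with image ~ A @ model + t, from three
    point correspondences (exact for an affine camera model)."""
    M = np.zeros((6, 6))
    b = np.zeros(6)
    for i, ((X, Y), (x, y)) in enumerate(zip(model_pts, image_pts)):
        M[2 * i]     = [X, Y, 1, 0, 0, 0]
        M[2 * i + 1] = [0, 0, 0, X, Y, 1]
        b[2 * i], b[2 * i + 1] = x, y
    a, b_, tx, c, d, ty = np.linalg.solve(M, b)
    return np.array([[a, b_], [c, d]]), np.array([tx, ty])

def align(model, image, tol=1.0):
    """Alignment loop: hypothesize a pose from each model/image frame group
    (point triplet), back-project all model points, and keep the hypothesis
    that explains the most image points."""
    best = None
    for m_idx in itertools.combinations(range(len(model)), 3):
        mp = model[list(m_idx)]
        u, v = mp[1] - mp[0], mp[2] - mp[0]
        if abs(u[0] * v[1] - u[1] * v[0]) < 1e-6:   # skip collinear frame groups
            continue
        for i_idx in itertools.permutations(range(len(image)), 3):
            A, t = affine_from_triplet(mp, image[list(i_idx)])
            proj = model @ A.T + t                  # back-project all model points
            dists = np.linalg.norm(proj[:, None, :] - image[None, :, :], axis=2)
            score = int(np.sum(dists.min(axis=1) < tol))   # verification score
            if best is None or score > best[0]:
                best = (score, A, t)
    return best

model = np.array([[0, 0], [10, 0], [10, 10], [0, 10], [5, 5]], dtype=float)
A_true, t_true = np.array([[0.9, -0.2], [0.2, 0.9]]), np.array([3.0, 7.0])
image = model @ A_true.T + t_true
print(align(model, image)[0])   # expect 5: all model points explained
```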
[Figure from "Object recognition using alignment", D.P. Huttenlocher and S. Ullman, Proc. Int. Conf. Computer Vision, 1986. Copyright 1986 IEEE.]

RANSAC (RANdom SAmple Consensus)
Search for a random sample that leads to a fit on which many of the data points agree. An extremely useful concept: it can fit models even if up to 50% of the points are outliers.
Repeat:
- Choose a subset of points randomly.
- Fit the model to this subset.
- See how many points agree with this model (how many points fit it).
- Use only the points which agree to re-fit a better model.
Finally, choose the best fit.
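A minimal RANSAC sketch for 2-D line fitting, following the loop just described; the parameter values, the synthetic data, and the least-squares refit are illustrative choices, not the course's code.

```python
import numpy as np

def ransac_line(points, k=100, t=1.0, d=10, rng=None):
    """Fit a line y = m*x + c to points (N, 2) with RANSAC.
    k: iterations, t: inlier distance threshold, d: inliers needed for a good fit."""
    rng = np.random.default_rng(rng)
    best_fit, best_count = None, -1
    for _ in range(k):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-9:            # skip (near-)vertical sample pairs
            continue
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # residuals of all points with respect to the candidate line
        resid = np.abs(points[:, 1] - (m * points[:, 0] + c))
        inliers = points[resid < t]
        if len(inliers) >= d and len(inliers) > best_count:
            # refit using all inliers (least squares)
            m, c = np.polyfit(inliers[:, 0], inliers[:, 1], deg=1)
            best_fit, best_count = (m, c), len(inliers)
    return best_fit, best_count

# Toy data: a line with 30% gross outliers.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.2, size=x.size)
y[:30] = rng.uniform(0, 25, size=30)       # outliers
print(ransac_line(np.column_stack([x, y]), rng=0))   # roughly (2.0, 1.0)
```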
RANSAC: four parameters
- n: the smallest number of points required to fit the model
- k: the number of iterations required
- t: the threshold used to identify a point that fits well
- d: the number of nearby points required to assert the model fits well
Until k iterations have occurred:
  Pick n sample points uniformly at random.
  Fit the model to that set of n points.
  For each data point outside the sample: test the distance; if the distance < t, the point is close.
  If there are d or more close points, this is a good fit; refit the model using all these points.
End; use the best fit.

Pose Clustering
Most objects have many frame groups, so there should be many correspondences between object and image frame groups that verify satisfactorily. Each of these correspondences yields approximately the same estimate for the position and orientation of the object w.r.t. the camera, while wrong correspondences yield uncorrelated estimates. So cluster the hypotheses before verification: vote for the pose, using an accumulator array that represents pose space for each object. A voting sketch follows below.
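A minimal sketch of pose clustering for a 2-D translation-only pose space: every hypothesized model-image point correspondence votes for a translation in a coarse accumulator array, and the bin with the most votes becomes the pose hypothesis to verify. The bin size, ranges, and toy data are illustrative assumptions.

```python
import numpy as np

def cluster_translations(model_pts, image_pts, bin_size=2.0, range_=(-50, 50)):
    """Vote in a quantized 2-D translation space: every (model point, image point)
    pairing is a pose hypothesis t = image - model. Correct pairings pile up in
    one bin; wrong pairings scatter."""
    n_bins = int((range_[1] - range_[0]) / bin_size)
    acc = np.zeros((n_bins, n_bins), dtype=int)
    for P in model_pts:
        for q in image_pts:
            t = q - P
            ix, iy = ((t - range_[0]) // bin_size).astype(int)
            if 0 <= ix < n_bins and 0 <= iy < n_bins:
                acc[ix, iy] += 1
    ix, iy = np.unravel_index(acc.argmax(), acc.shape)
    # centre of the winning bin = clustered pose estimate (still to be verified)
    t_est = np.array([ix, iy]) * bin_size + range_[0] + bin_size / 2
    return t_est, acc.max()

rng = np.random.default_rng(0)
model = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
image = model + np.array([12.0, -7.0])                 # true translation
clutter = rng.uniform(-40, 40, size=(20, 2))           # spurious features
print(cluster_translations(model, np.vstack([image, clutter])))
```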
[Figures from "The evolution and testing of a model-based object recognition system", J.L. Mundy and A. Heller, Proc. Int. Conf. Computer Vision, 1990. Copyright 1990 IEEE.]
Problems
- Correspondences should not be allowed to vote for poses from which they would be invisible.
- Images with noise or texture generate many spurious frame groups, so the number of votes in the pose array corresponding to real objects may be smaller than the number of spurious votes.
- Choosing the size of the buckets in the pose array is difficult: too small means not much accumulation; too large means too many buckets will have enough votes.

Invariants
There are geometric properties that are invariant to camera transformations.
Geometric Hashing
A technique originally developed in computer vision for matching geometric features against a database of such features. Widely used for pattern matching, CAD/CAM, and medical imaging; also used in other domains such as molecular biology and medicinal chemistry.
- Matching is possible under transformations.
- Matching is possible from partial information.
- Efficient.

Geometric hashing: basic idea
Consider the following simple 2-D recognition problem:
- We are given a set of object models M_i; each model is represented by a set of points in the plane, M_i = {P_i,1, ..., P_i,n_i}.
- We want to recognize instances of these point patterns in images from which point features (junctions, etc.) have been identified. So our input image B is a binary image where the 1s are the feature points.
- We only allow the position of the instances of the M_i in B to vary; orientation is fixed (translation only).
- We want the approach to work even if some points from the model are not detected in B.
- We will search for all possible models simultaneously.
[Figure: a model point pattern related to the image by a translation.]
Geometric hashing
Consider two models:
- M_1 = {(0,0), (10,0), (10,10), (0,10)}
- M_2 = {(0,0), (4,4), (4,-4), (-4,-4), (-4,4)}
We build a table containing, for each model, all of the relative coordinates of its points given that one of the model points is chosen as the origin of its coordinate system.
- This point is called a basis, because choosing it completely determines the coordinates of the remaining points.
- Examples for M_1: choose (0,0) as basis and obtain (10,0), (10,10), (0,10); choose (10,0) as basis and obtain (-10,0), (0,10), (-10,10).

Hash table
[Figure: the hash table as a 2-D grid indexed by relative coordinates; each occupied cell stores (model, basis) pairs such as (M_1, 1), (M_1, 2), (M_2, 1), (M_2, 2).]
Hash table creation
How many entries do we need to make in the hash table?
- Model M_i has n_i points; each has to be chosen as the basis point, the coordinates of the remaining n_i - 1 points are computed with respect to that basis point, and an entry (M_i, basis) is entered into the hash table for each of those coordinates.
- This has to be done for each of the m models.
- So the complexity is O(m n^2) to build the table. But the table is built only once, and then used many times.

Using the table during recognition
- Pick a feature point from the image as the basis (the algorithm may have to consider all possible image points as the basis).
- Compute the coordinates of all of the remaining image feature points with respect to that basis.
- Use each of these coordinates as an index into the hash table. At that location of the hash table we find a list of (M_i, p_j) pairs: model-basis pairs for which some point of M_i gets these coordinates when the j-th point of M_i is chosen as the basis.
- Keep track of the score for each (M_i, p_j) encountered; models that obtain high scores for some basis are recorded as possible detections.
A sketch of this translation-only scheme follows below.
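A minimal sketch of the translation-only scheme just described, using the two example models M_1 and M_2 from above; the dictionary-based hash table, the exact-integer keys, and the scoring rule are illustrative choices, not the course's code.

```python
from collections import defaultdict

M1 = [(0, 0), (10, 0), (10, 10), (0, 10)]
M2 = [(0, 0), (4, 4), (4, -4), (-4, -4), (-4, 4)]
models = {"M1": M1, "M2": M2}

def build_table(models):
    """For every model and every choice of basis point, store (model, basis index)
    under the relative coordinates of each remaining point: O(m n^2) entries."""
    table = defaultdict(list)
    for name, pts in models.items():
        for b, (bx, by) in enumerate(pts):
            for j, (x, y) in enumerate(pts):
                if j != b:
                    table[(x - bx, y - by)].append((name, b))
    return table

def recognize(table, scene_pts):
    """Try every scene point as the basis; vote for the (model, basis) pairs
    found at the relative coordinates of the remaining scene points."""
    best = {}
    for b, (bx, by) in enumerate(scene_pts):
        votes = defaultdict(int)
        for j, (x, y) in enumerate(scene_pts):
            if j != b:
                for entry in table.get((x - bx, y - by), []):
                    votes[entry] += 1
        for entry, v in votes.items():
            best[entry] = max(best.get(entry, 0), v)
    return sorted(best.items(), key=lambda kv: -kv[1])

table = build_table(models)
# Scene: M1 translated by (3, 5), one point missing, plus a clutter point.
scene = [(3, 5), (13, 5), (13, 15), (50, 50)]
print(recognize(table, scene)[:3])   # (M1, basis) hypotheses score highest
```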
Some observations
If the image contains n points from some model M_i, then we will detect it n times:
- each of the n points can serve as a basis;
- for each choice, the remaining n-1 points will result in table indices that contain (M_i, basis).
If the image contains s feature points, what is the complexity of the recognition component of the geometric hashing algorithm?
- For each of the s points we compute the new coordinates of the remaining s-1 points, and we keep track of the (model, basis) pairs retrieved from the table based on those coordinates.
- So the algorithm has complexity O(s^2), and is independent of the number of models in the database.

Geometric Hashing
Consider a more general case: translation, scaling and rotation. One point is not a sufficient basis, but two points are.
[Figure: a model related to the image by translation, rotation and scaling. From Haim J. Wolfson, "Geometric Hashing: An Overview", IEEE Computational Science and Engineering.]
[Figure from Haim J. Wolfson, "Geometric Hashing: An Overview", IEEE Computational Science and Engineering.]

Revised geometric hashing
Table construction:
- Consider all pairs of points from each model; for each pair, construct the coordinates of the remaining n-2 points using those two points as a basis, and add an entry for (model, basis-pair) in the hash table.
- The complexity is now O(m n^3).
Recognition:
- Pick a pair of points from the image (cycling through all pairs).
- Compute the coordinates of the remaining points using this pair as a basis.
- Look up (model, basis-pair) in the table and tally votes.
A sketch of the two-point basis coordinates follows below.
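One way to realize the two-point basis is to map the basis pair to a canonical frame (the segment from (0,0) to (1,0)) and express every other point in that frame; the resulting coordinates are invariant to translation, rotation, and uniform scale. A small sketch, where the canonicalization is an illustrative choice:

```python
import numpy as np

def two_point_coords(p, b1, b2):
    """Coordinates of point p in the frame where basis point b1 maps to (0,0)
    and b2 maps to (1,0). Invariant to translation, rotation and uniform scale."""
    e1 = np.asarray(b2, float) - np.asarray(b1, float)   # new x-axis
    e2 = np.array([-e1[1], e1[0]])                        # perpendicular axis
    d = np.asarray(p, float) - np.asarray(b1, float)
    s2 = e1 @ e1                                          # squared basis length
    return np.array([d @ e1 / s2, d @ e2 / s2])

# Check invariance: apply a similarity transform (rotation + scale + translation).
rng = np.random.default_rng(1)
p, b1, b2 = rng.random((3, 2))
theta, s, t = 0.7, 2.5, np.array([4.0, -1.0])
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
transform = lambda q: s * (R @ q) + t
print(two_point_coords(p, b1, b2))
print(two_point_coords(transform(p), transform(b1), transform(b2)))  # same values
```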
Recall: Affine projection models: weak perspective projection
Also called scaled orthographic projection (SOP): x = m X, y = m Y, where m is the magnification (scaling factor). When the scene depth is small compared to its distance from the camera, we can assume everything lies on one plane and m can be taken to be constant: this is weak perspective projection, also called scaled orthography (everything is a scaled version of the scene).

Affine combinations of points
Let P_1 and P_2 be points, and consider the expression P = P_1 + t(P_2 - P_1).
- P represents a point on the line joining P_1 and P_2.
- If 0 <= t <= 1, then P lies between P_1 and P_2.
- We can rewrite the expression as P = (1-t)P_1 + tP_2.
Define an affine combination of two points to be a_1 P_1 + a_2 P_2, where a_1 + a_2 = 1. Then P = (1-t)P_1 + tP_2 is an affine combination with a_2 = t.
[Figure: the point P_1 + t(P_2 - P_1) on the segment from P_1 to P_2.]
Affine combinations
Generally, if P_1, ..., P_n is a set of points and a_1 + ... + a_n = 1, then a_1 P_1 + ... + a_n P_n is the point P_1 + a_2 (P_2 - P_1) + ... + a_n (P_n - P_1).
Let's look at affine combinations of three points. These are the points
- P = a_1 P_1 + a_2 P_2 + a_3 P_3 = P_1 + a_2 (P_2 - P_1) + a_3 (P_3 - P_1), where a_1 + a_2 + a_3 = 1.
- If 0 <= a_1, a_2, a_3 <= 1 then P falls inside the triangle P_1 P_2 P_3, otherwise outside.
- (a_2, a_3) are the affine coordinates of P; the homogeneous representation is (1, a_2, a_3).
- P_1, P_2, P_3 is called the affine basis.

Affine combinations
Given any two points P = (a_1, a_2) and Q = (a_1', a_2') in affine coordinates:
- Q - P is a vector; its affine coordinates are (a_1' - a_1, a_2' - a_2).
- Note that the affine coordinates of a point sum to 1, while the affine coordinates of a vector sum to 0.
A sketch computing affine coordinates follows below.
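Computing the affine coordinates (a_2, a_3) of a point P in the basis P_1, P_2, P_3 amounts to solving the 2x2 linear system P - P_1 = a_2 (P_2 - P_1) + a_3 (P_3 - P_1). A minimal sketch, with made-up example points:

```python
import numpy as np

def affine_coords(P, P1, P2, P3):
    """Return (a2, a3) with P = P1 + a2*(P2-P1) + a3*(P3-P1).
    Requires the basis P1, P2, P3 to be non-collinear."""
    A = np.column_stack([np.subtract(P2, P1), np.subtract(P3, P1)])  # 2x2 system
    return np.linalg.solve(A, np.subtract(P, P1))

P1, P2, P3 = (0.0, 0.0), (4.0, 0.0), (0.0, 2.0)
print(affine_coords((2.0, 1.0), P1, P2, P3))   # (0.5, 0.5): inside the triangle
print(affine_coords((8.0, 0.0), P1, P2, P3))   # (2.0, 0.0): outside
```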
Affine transformations
Rigid transformations are of the form
  x' = a x - b y + t_x
  y' = b x + a y + t_y
where a^2 + b^2 = 1.
An affine transformation is of the form
  x' = a x + b y + t_x
  y' = c x + d y + t_y
for arbitrary a, b, c, d, and is determined by the transformation of three points.

Representation in homogeneous coordinates
In homogeneous coordinates, this transformation is represented as
  [x']   [a  b  t_x] [x]
  [y'] = [c  d  t_y] [y]
  [1 ]   [0  0   1 ] [1]
Affine coordinates and affine transformations
The affine coordinates of a point are unchanged if the point and the affine basis are subjected to the same affine transformation. This is based on simple properties of affine transformations. Let T be an affine transformation; then
- T(P_1 - P_2) = T P_1 - T P_2
- T(aP) = a T P, for any scalar a (applied to vectors).

Proof
Let P_1 = (x, y, 1) and P_2 = (u, v, 1); note that P_1 - P_2 is a vector.
  T P_1 = (ax + by + t_x, cx + dy + t_y, 1)
  T P_2 = (au + bv + t_x, cu + dv + t_y, 1)
  T P_1 - T P_2 = (a(x-u) + b(y-v), c(x-u) + d(y-v), 0)
  P_1 - P_2 = (x-u, y-v, 0)
  T(P_1 - P_2) = (a(x-u) + b(y-v), c(x-u) + d(y-v), 0)
so T(P_1 - P_2) = T P_1 - T P_2.
Invariance for geometric hashing
Affine coordinates are an invariant under affine transformation, so they can be used for geometric hashing. Let P_1, P_2, P_3 be an ordered affine basis triplet in the plane. Then the affine coordinates (α, β) of a point P satisfy
  P = α (P_2 - P_1) + β (P_3 - P_1) + P_1.
Applying any affine transformation T transforms it to
  T P = α (T P_2 - T P_1) + β (T P_3 - T P_1) + T P_1,
so T P has the same coordinates (α, β) in the transformed basis triplet as it did originally. (A small numeric check follows below.)

What do affine transformations have to do with 3-D recognition?
Suppose our point pattern is a planar pattern, i.e., all of the points lie on the same plane.
- We construct our hash table using these affine coordinates, choosing three points at a time as a basis.
- We position and orient this planar point pattern in space far from the camera (so SOP is a good model) and take its image.
- The transformation of the model to the image is then an affine transformation, so the affine coordinates of the points in any given basis are the same in the original 3-D planar model as they are in the image.
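A quick numeric check of this invariance claim, with random points and a random affine map (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
P1, P2, P3, P = rng.random((4, 2))

def coords(Q, B1, B2, B3):
    # solve Q - B1 = alpha*(B2-B1) + beta*(B3-B1) for (alpha, beta)
    return np.linalg.solve(np.column_stack([B2 - B1, B3 - B1]), Q - B1)

A = rng.random((2, 2)) + np.eye(2)   # affine part (kept well-conditioned)
t = rng.random(2)
T = lambda Q: A @ Q + t

print(coords(P, P1, P2, P3))
print(coords(T(P), T(P1), T(P2), T(P3)))   # identical (alpha, beta)
```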
Preprocessing
Suppose we have a model containing m feature points. For each ordered noncollinear triplet of model points:
- compute the affine coordinates of the remaining m-3 points using that triplet as a basis;
- each such coordinate is used as an index into a hash table, where we record the (basis triplet, model) pair at which this coordinate was obtained.
The complexity is O(m^4) per model.
[Figure: hash table cells holding lists of (p1, p2, p3, M) entries.]

Recognition
Given a scene with n interest points, choose an ordered triplet from the scene:
- compute the affine coordinates of the remaining n-3 points in this basis;
- for each coordinate, check the hash table, and for each entry found tally a vote for that (basis triplet, model);
- if the triplet scores high enough, verify the match.
If the image does not contain an instance of any model, we only discover this after looking at all O(n^3) triplets.
A sketch of the affine-invariant scheme follows below.
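A minimal sketch of the affine-invariant scheme; the quantization of the affine coordinates, the non-collinearity test, the voting threshold, and the toy model are illustrative assumptions rather than the course's implementation.

```python
import itertools
from collections import defaultdict
import numpy as np

def affine_coords(P, B1, B2, B3):
    """(alpha, beta) of P in the ordered basis triplet B1, B2, B3."""
    return np.linalg.solve(np.column_stack([B2 - B1, B3 - B1]), P - B1)

def key(coords, q=0.05):
    """Quantize (alpha, beta) so nearby coordinates hash to the same bin."""
    return tuple(np.round(coords / q).astype(int))

def noncollinear(B1, B2, B3, eps=1e-6):
    u, v = B2 - B1, B3 - B1
    return abs(u[0] * v[1] - u[1] * v[0]) > eps

def preprocess(models):
    """O(m^4) per model: every ordered noncollinear triplet serves as a basis."""
    table = defaultdict(list)
    for name, pts in models.items():
        for i, j, k in itertools.permutations(range(len(pts)), 3):
            if not noncollinear(pts[i], pts[j], pts[k]):
                continue
            for l in range(len(pts)):
                if l not in (i, j, k):
                    c = affine_coords(pts[l], pts[i], pts[j], pts[k])
                    table[key(c)].append((name, (i, j, k)))
    return table

def recognize(table, scene, min_votes=3):
    """Cycle through ordered scene triplets; vote for (model, basis) entries."""
    for i, j, k in itertools.permutations(range(len(scene)), 3):
        if not noncollinear(scene[i], scene[j], scene[k]):
            continue
        votes = defaultdict(int)
        for l in range(len(scene)):
            if l not in (i, j, k):
                c = affine_coords(scene[l], scene[i], scene[j], scene[k])
                for entry in table.get(key(c), []):
                    votes[entry] += 1
        for entry, v in votes.items():
            if v >= min_votes:
                return entry, v, (i, j, k)   # candidate match, still to be verified
    return None

model = np.array([[0, 0], [10, 0], [10, 10], [0, 10], [5, 3], [2, 7]], float)
A_true, t_true = np.array([[1.2, 0.3], [-0.2, 0.8]]), np.array([4.0, 9.0])
scene = np.vstack([model @ A_true.T + t_true, [[40.0, 40.0]]])   # affine view + clutter
print(recognize(preprocess({"M": model}), scene))
```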
Pose Recovery Applications
In many applications, recovering pose is far more important than recognition: we know what we are looking at, but we need accurate measurements of its pose. Examples: medical applications.
Sources
- Forsyth and Ponce, Computer Vision: A Modern Approach, chapter 18.
- Slides by D., UC Berkeley.
- Slides by L.S., UMD.
- Haim J. Wolfson, "Geometric Hashing: An Overview", IEEE Computational Science and Engineering (posted on the class web page).