Vision 3D artificielle Session 1: Projective geometry, camera matrix, panorama
1 Vision 3D artificielle Session 1: Projective geometry, camera matrix, panorama Pascal Monasse IMAGINE, École des Ponts ParisTech
2 Contents Pinhole camera model Projective geometry Homographies Panorama Internal calibration Optimization techniques Conclusion Practical session
3 Contents Pinhole camera model Projective geometry Homographies Panorama Internal calibration Optimization techniques Conclusion Practical session
4 The pinhole camera model Projection (Source: Wikipedia) The pinhole camera (French: sténopé): an ideal model with an aperture reduced to a single point. It accounts neither for the blur of out-of-focus objects nor for the geometric distortion of the lens.
5 Central projection in camera coordinate frame Rays from C are identified: Cx = λ CX. In the camera coordinate frame CXYZ:
(x, y, f)^T = λ (X, Y, Z)^T
Thus λ = f/Z and (x, y) = (f X/Z, f Y/Z).
In pixel coordinates:
(u, v) = (α x + c_x, α y + c_y) = ((αf) X/Z + c_x, (αf) Y/Z + c_y)
αf: focal length in pixels, (c_x, c_y): position of the principal point P in pixels.
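The projection formula above can be checked numerically; a minimal sketch in NumPy (the function name `project` and the sample values are illustrative, not part of the course code):

```python
import numpy as np

def project(X, f, cx, cy):
    """Central projection of a 3D point X = (X, Y, Z) given in the camera
    frame, to pixel coordinates: u = f*X/Z + cx, v = f*Y/Z + cy.
    Here f is the focal length in pixels (the alpha*f of the slide)."""
    x, y, z = X
    return np.array([f * x / z + cx, f * y / z + cy])

# A point on the optical axis projects exactly to the principal point:
on_axis = project(np.array([0.0, 0.0, 2.0]), f=1000.0, cx=320.0, cy=240.0)
# Doubling the depth halves the offset from the principal point:
near = project(np.array([0.1, 0.0, 1.0]), f=1000.0, cx=320.0, cy=240.0)
far = project(np.array([0.1, 0.0, 2.0]), f=1000.0, cx=320.0, cy=240.0)
```

The second check illustrates why distant objects look smaller: the pixel offset from the principal point is inversely proportional to depth Z.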
6 Contents Pinhole camera model Projective geometry Homographies Panorama Internal calibration Optimization techniques Conclusion Practical session
7 Projective plane We identify points of R^3 on the same ray from the origin through the equivalence relation:
x R y ⟺ ∃ λ ≠ 0 : x = λ y
Projective plane: P^2 = (R^3 \ {O}) / R.
Point (x, y, z)^T ~ (x/z, y/z, 1)^T if z ≠ 0.
The point (x/ε, y/ε, 1)^T = (x, y, ε)^T is a point far away in the direction of the line of slope y/x. The limit value (x, y, 0)^T is the infinite point in this direction.
Given a plane of R^3 through O, of equation a x + b y + c z = 0: it corresponds to a line in P^2, represented in homogeneous coordinates by (a, b, c)^T. Its equation is (a, b, c)(X, Y, Z)^T = 0.
8 Projective plane Line through points x_1 and x_2: l = x_1 × x_2, since (x_1 × x_2)^T x_i = det(x_1 x_2 x_i) = 0.
Intersection of two lines l_1 and l_2: x = l_1 × l_2, since l_i^T (l_1 × l_2) = det(l_i l_1 l_2) = 0.
Line at infinity: l_∞ = (0, 0, 1)^T, since l_∞^T (x, y, 0)^T = 0.
Intersection of two parallel lines:
(a, b, c_1)^T × (a, b, c_2)^T = (c_2 − c_1)(b, −a, 0)^T, a point on l_∞.
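These cross-product formulas compute directly; a small NumPy check (the particular lines and points chosen are illustrative):

```python
import numpy as np

# Line through two points: l = x1 x x2.
# The points (0,0) and (1,1) give a line proportional to (-1, 1, 0), i.e. y = x:
l = np.cross([0.0, 0.0, 1.0], [1.0, 1.0, 1.0])

# Intersection of the two parallel vertical lines x = 1 and x = 2
# (line coordinates (1, 0, -1) and (1, 0, -2)):
p = np.cross([1.0, 0.0, -1.0], [1.0, 0.0, -2.0])
# p has last coordinate 0: the point at infinity in the vertical direction.
```

Both incidence checks l^T x_i = 0 hold by construction, and the parallel lines indeed meet at infinity, as the slide's formula predicts.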
9 Calibration matrix Let us get back to the projection equation:
(u, v) = (f X/Z + c_x, f Y/Z + c_y) = (1/Z)(f X + c_x Z, f Y + c_y Z)
(replacing αf by f). We rewrite:
Z (u, v, 1)^T =: x = [f 0 c_x; 0 f c_y; 0 0 1] (X, Y, Z)^T
When the 3D point is expressed in another orthonormal coordinate frame:
x = [f 0 c_x; 0 f c_y; 0 0 1] (R | T) (X, Y, Z, 1)^T
10 Calibration matrix The (internal) calibration matrix (3×3) is:
K = [f 0 c_x; 0 f c_y; 0 0 1]
The projection matrix (3×4) is:
P = K (R | T)
If pixels are skewed (non-rectangular), we can generalize K:
K = [f_x s c_x; 0 f_y c_y; 0 0 1] (with s = f_x cot θ)
Theorem: let P be a 3×4 matrix whose left 3×3 sub-matrix is invertible. There is a unique decomposition P = K (R | T).
Proof: Gram-Schmidt on the rows of the left sub-matrix of P, starting from the last row (RQ decomposition); then T = K^{-1} P_4.
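The RQ decomposition in the proof can be sketched with NumPy's QR routine; the `rq` helper below is the standard row/column flip trick, not a library function, and the sample K, R, T are made up:

```python
import numpy as np

def rq(M):
    """RQ decomposition of a square matrix via QR on the flipped matrix:
    M = R @ Q with R upper triangular and Q orthogonal."""
    q, r = np.linalg.qr(np.flipud(M).T)
    R = np.flipud(r.T)[:, ::-1]
    Q = np.flipud(q.T)
    return R, Q

def decompose_P(P):
    """Split P = K (R | T); signs are fixed so that diag(K) > 0, and K is
    normalized so that K[2,2] = 1."""
    K, R = rq(P[:, :3])
    S = np.diag(np.sign(np.diag(K)))   # make the diagonal of K positive
    K, R = K @ S, S @ R                # S @ S = I, so the product K R is unchanged
    T = np.linalg.solve(K, P[:, 3])    # T = K^-1 P_4, as in the proof
    return K / K[2, 2], R, T

# Round-trip check on a synthetic camera:
K0 = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
c, s = np.cos(0.3), np.sin(0.3)
R0 = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # rotation about the optical axis
T0 = np.array([1.0, 2.0, 3.0])
P = K0 @ np.column_stack([R0, T0])
K, R, T = decompose_P(P)
```

By the uniqueness stated in the theorem, the decomposition recovers K0, R0 and T0 exactly (up to floating-point error).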
11 Contents Pinhole camera model Projective geometry Homographies Panorama Internal calibration Optimization techniques Conclusion Practical session
12 Homographies Let us see what happens when we take two pictures in the following particular cases:
1. Rotation around the optical center (and maybe change of internal parameters):
x' = K' R K^{-1} x =: H x
2. The world is flat; we observe the plane Z = 0:
x = K (R_1 R_2 R_3 T)(X, Y, 0, 1)^T = K (R_1 R_2 T)(X, Y, 1)^T =: H x̃
In both cases, we deal with a 3×3 invertible matrix H, a homography.
Property: a homography preserves alignment. If x_1, x_2, x_3 are aligned, then
det(Hx_1 Hx_2 Hx_3) = det(H) det(x_1 x_2 x_3) = 0
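Case 1 can be verified numerically: build H = K R K^{-1} for a synthetic rotation and check that it maps projections in the first image to projections in the second (K, R and the test point are made-up values; the internal parameters are kept identical between the two views here):

```python
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
theta = np.deg2rad(10)      # pan the camera 10 degrees about its vertical axis
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
H = K @ R @ np.linalg.inv(K)   # homography between the two images

X = np.array([0.5, 0.2, 4.0])  # a 3D point in the first camera's frame
x1 = K @ X                     # homogeneous projection in image 1
x2 = K @ (R @ X)               # homogeneous projection in image 2
x1_mapped = H @ x1             # should be proportional to x2
```

Note that H never sees the depth of X: under pure rotation, the mapping between images is the same for every scene point, which is exactly why one homography suffices for a panorama.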
13 Contents Pinhole camera model Projective geometry Homographies Panorama Internal calibration Optimization techniques Conclusion Practical session
14 Panorama construction We stitch images together by correcting homographies. This assumes that the scene is flat or that we are rotating the camera around its optical center.
Homography estimation: λ x' = H x ⟺ x' × (H x) = 0, which amounts to 2 independent linear equations per correspondence (x, x'). 4 correspondences are enough to estimate H (but more can be used, through least-squares minimization).
[Figure: panorama from 14 photos]
15 Algebraic error minimization x'_i × (H x_i) = 0 is a system of three linear equations in H. We gather the unknown coefficients of H in a vector of 9 rows:
h = (H_11 H_12 ... H_33)^T
We write the equations as A_i h = 0 with (writing x_i = (x_i, y_i, 1)^T and x'_i = (x'_i, y'_i, 1)^T):
A_i = [ 0  0  0  −x_i  −y_i  −1  y'_i x_i  y'_i y_i  y'_i ;
        x_i  y_i  1  0  0  0  −x'_i x_i  −x'_i y_i  −x'_i ;
        −y'_i x_i  −y'_i y_i  −y'_i  x'_i x_i  x'_i y_i  x'_i  0  0  0 ]
We can discard the third line (a linear combination of the first two) and stack the different A_i in A. h is a vector of the kernel of A (an 8×9 matrix for 4 correspondences).
We can also suppose H_33 = h_9 = 1 and solve A_{:,1:8} h_{1:8} = −A_{:,9}.
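This stacking is the classical DLT; a sketch that solves for the kernel vector with an SVD (the function name and test data are illustrative, and only the first two rows of each A_i are used, as the slide suggests):

```python
import numpy as np

def homography_dlt(pts1, pts2):
    """Estimate H such that x' ~ H x from n >= 4 correspondences.
    pts1, pts2: (n, 2) arrays of matching pixel coordinates."""
    rows = []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        rows.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
    A = np.asarray(rows, float)
    # h is a kernel vector of A: the right singular vector of least singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)

# Exact correspondences generated from a known homography are recovered:
H_true = np.array([[1.0, 0.1, 5], [0.05, 1.2, -3], [0.001, 0.002, 1]])
pts1 = np.array([[0.0, 0], [100, 0], [0, 100], [100, 100], [50, 30]])
pts2 = np.array([(H_true @ [x, y, 1])[:2] / (H_true @ [x, y, 1])[2] for x, y in pts1])
H = homography_dlt(pts1, pts2)
```

Since h is only defined up to scale, the estimate is compared after normalizing by H[2,2].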
16 Geometric error When we have more than 4 correspondences, we minimize the algebraic error
ε = Σ_i ||x'_i × (H x_i)||²
but it has no geometric meaning. A more significant error is geometric:
[Figure: correspondence (x, x'); Hx lies at distance d' from x' in image 2, H^{-1}x' at distance d from x in image 1]
Either d'² = d(x', Hx)² (transfer error), or d² + d'² = d(x, H^{-1}x')² + d(x', Hx)² (symmetric transfer error).
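Both geometric errors can be sketched for a single correspondence (the helper names below are mine, not from the slides):

```python
import numpy as np

def to_pixel(M, p):
    """Apply a 3x3 homography M to an inhomogeneous 2D point p."""
    q = M @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def transfer_errors(H, x, xp):
    """Return (d'^2, d^2): squared distances d(x', Hx)^2 and d(x, H^-1 x')^2.
    The symmetric transfer error is their sum."""
    d_fwd = np.sum((np.asarray(xp) - to_pixel(H, x)) ** 2)
    d_bwd = np.sum((np.asarray(x) - to_pixel(np.linalg.inv(H), xp)) ** 2)
    return d_fwd, d_bwd

# With H = identity, a 1-pixel horizontal mismatch costs 1 in each direction,
# hence a symmetric transfer error of 2:
fwd, bwd = transfer_errors(np.eye(3), (10.0, 20.0), (11.0, 20.0))
```

Unlike the algebraic error, these quantities are in squared pixels, so they are directly comparable across correspondences and across homographies.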
17 Gold standard error Actually, we can consider x and x' as noisy observations of ground-truth positions x̂ and x̂' = H x̂.
[Figure: x̂ in image 1 is mapped by H to x̂' in image 2, at distances d and d' from the observations x and x']
ε(H, x̂) = d(x, x̂)² + d(x', H x̂)²
Problem: this has a lot of parameters: H, {x̂_i}_{i=1...n}.
18 Sampson error A method that linearizes the dependency on x̂ in the gold standard error so as to eliminate these unknowns:
0 = ε(H, x̂) ≈ ε(H, x) + J (x̂ − x), with J = ∂ε/∂x (H, x)
Find x̂ minimizing ||x − x̂||² subject to J (x − x̂) = ε.
Solution: x − x̂ = J^T (J J^T)^{-1} ε, and thus:
||x − x̂||² = ε^T (J J^T)^{-1} ε   (1)
Here, ε_i = A_i h = x'_i × (H x_i) is a 3-vector. For each i, there are 4 variables (x_i, x'_i), so J is 3×4.
This is almost the algebraic error ε^T ε, but with an adapted scalar product. The resolution, through an iterative method, must be initialized with the algebraic minimization.
19 Applying homography to image Two methods:
1. Push: send each pixel of the original image to the transformed image and round to the nearest pixel center.
2. Pull: for each pixel of the transformed image, fetch its value from the original image through H^{-1}, with interpolation.
[Figure: Image 1 pushed through H to H(Image 1); pulled back through H^{-1}]
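The pull method is the one usually implemented; a minimal sketch using nearest-neighbor lookup instead of interpolation for brevity (illustrative, not the course code):

```python
import numpy as np

def warp_pull(img, H, out_shape):
    """Warp img by the homography H ('pull' method): each output pixel (u, v)
    fetches its value at H^-1 (u, v, 1), rounded to the nearest pixel."""
    Hinv = np.linalg.inv(H)
    out = np.zeros(out_shape, img.dtype)
    h, w = img.shape[:2]
    for v in range(out_shape[0]):
        for u in range(out_shape[1]):
            x, y, z = Hinv @ np.array([u, v, 1.0])
            i, j = int(round(y / z)), int(round(x / z))
            if 0 <= i < h and 0 <= j < w:   # outside the source image: leave black
                out[v, u] = img[i, j]
    return out

# A pure translation H shifts the image content by (2, 1):
img = np.arange(12.0).reshape(3, 4)
H = np.array([[1.0, 0, 2], [0, 1, 1], [0, 0, 1]])
shifted = warp_pull(img, H, (3, 4))
```

Pulling guarantees that every output pixel is assigned exactly once, whereas pushing can leave holes or assign several source pixels to the same target.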
20 Contents Pinhole camera model Projective geometry Homographies Panorama Internal calibration Optimization techniques Conclusion Practical session
21 Camera calibration by resection [R. Y. Tsai, An efficient and accurate camera calibration technique for 3D machine vision, CVPR'86]
We estimate the camera internal parameters from a known rig, composed of 3D points whose coordinates are known. We have points X_i and their projections x_i in an image. In homogeneous coordinates: x_i = P X_i, or the 3 equations (only 2 of which are independent)
x_i × (P X_i) = 0
Linear system in the unknown P. There are 12 parameters in P; we need 6 points in general (actually only 5.5). The decomposition of P then allows finding K.
Restriction: the 6 points cannot lie on a plane, otherwise we have a degenerate situation; in that case, 4 points define the homography and the two extra points yield no additional constraint.
22 Calibration with planar rig [Z. Zhang, A flexible new technique for camera calibration, 2000]
Problem: one picture is not enough to find K.
Solution: several snapshots are used. For each one, we determine the homography H between the rig and the image. The homography being computed up to an arbitrary multiplicative factor, we write:
λ H = K (R_1 R_2 T)
We rewrite (H_j denoting the j-th column of H):
λ K^{-1} H = λ (K^{-1}H_1 K^{-1}H_2 K^{-1}H_3) = (R_1 R_2 T)
2 equations express the orthonormality of R_1 and R_2:
H_1^T (K^{-T} K^{-1}) H_1 = H_2^T (K^{-T} K^{-1}) H_2
H_1^T (K^{-T} K^{-1}) H_2 = 0
With 3 views, we have 6 equations for the 5 parameters of K^{-T} K^{-1}; K is then recovered by Cholesky decomposition.
23 The problem of geometric distortion At small or moderate focal length, we cannot ignore the geometric distortion due to lens curvature, especially away from the image center. This is observable in the non-straightness of certain lines:
[Figure: photo in which straight lines deviate by up to 30 pixels]
The classical model of distortion is radial polynomial:
(x_d − d_x, y_d − d_y) = (1 + a_1 r² + a_2 r⁴ + ...) (x − d_x, y − d_y)
where (d_x, d_y) is the distortion center and r is the distance of (x, y) to it.
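The radial model applies directly; a small sketch (the distortion center and coefficients below are made-up values):

```python
import numpy as np

def distort(p, center, a1, a2):
    """Radial polynomial distortion: the offset of p from the distortion
    center is scaled by 1 + a1*r^2 + a2*r^4."""
    d = np.asarray(p, float) - center
    r2 = np.sum(d ** 2)
    return center + (1 + a1 * r2 + a2 * r2 ** 2) * d

center = np.array([320.0, 240.0])
near = distort([330.0, 240.0], center, a1=1e-7, a2=0.0)   # 10 px from center
far = distort([620.0, 240.0], center, a1=1e-7, a2=0.0)    # 300 px from center
```

The far point is displaced by a few pixels while the near one barely moves, which is why distortion shows up mostly away from the image center.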
24 Estimation of geometric distortion If we include the distortion coefficients as unknowns, there is no longer a closed formula estimating K. We have a non-linear minimization problem, which can be solved by an iterative method. To initialize the minimization, we assume no distortion (a_1 = a_2 = 0) and estimate K with the previous linear procedure.
25 Contents Pinhole camera model Projective geometry Homographies Panorama Internal calibration Optimization techniques Conclusion Practical session
26 Linear least squares problem For example, when we have more than 4 point correspondences in homography estimation: A h = B with A of size m×8, m > 8.
In the case of an overdetermined linear system, we minimize
ε(X) = ||A X − B||² = ||f(X)||²
The gradient of ε is easily computed: ∇ε(X) = 2 (A^T A X − A^T B)
The solution is obtained by equating the gradient to 0: X = (A^T A)^{-1} A^T B
Remark 1: this is correct only if A^T A is invertible, that is, A has full rank.
Remark 2: if A is square, it is the standard solution X = A^{-1} B.
Remark 3: A^{(-1)} = (A^T A)^{-1} A^T is called the pseudo-inverse of A, because A^{(-1)} A = I_n.
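The normal equations above take two lines of NumPy; a toy line-fitting system stands in for the homography one:

```python
import numpy as np

# Overdetermined system: fit y = a*x + b to three points lying on y = 2x + 1.
A = np.array([[0.0, 1], [1, 1], [2, 1]])   # rows (x_i, 1), so m = 3 > n = 2
B = np.array([1.0, 3.0, 5.0])
X = np.linalg.solve(A.T @ A, A.T @ B)      # normal equations (A^T A) X = A^T B
```

In practice `np.linalg.lstsq(A, B)` solves the same minimization through an orthogonal factorization, which is better conditioned than explicitly forming A^T A.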
27 Non-linear least squares problem We would like to solve as best we can f(X) = 0 with f non-linear. We thus minimize
ε(X) = ||f(X)||²
Let us compute the gradient of ε: ∇ε(X) = 2 J^T f(X), with J_ij = ∂f_i/∂x_j
Gradient descent: we iterate until convergence
ΔX = −α J^T f(X), α > 0
When we are close to the minimum, a faster convergence is obtained by Newton's method:
ε(X + ΔX) ≈ ε(X) + ∇ε(X)^T ΔX + ½ ΔX^T (∇²ε) ΔX
and the minimum is for ΔX = −(∇²ε)^{-1} ∇ε.
28 Levenberg-Marquardt algorithm This is a mix of gradient descent and a quasi-Newton method (quasi since we do not compute the Hessian matrix explicitly, but approximate it). The gradient of ε is ∇ε(X) = 2 J^T f(X), so the Hessian matrix of ε is composed of sums of two terms:
1. products of first derivatives of f;
2. products of f and second derivatives of f.
The idea is to ignore the second terms, as they should be small when we are close to the minimum (f ≈ 0). The Hessian is thus approximated by H = 2 J^T J.
Levenberg-Marquardt iteration: ΔX = −(J^T J + λ I)^{-1} J^T f(X), λ > 0
29 Levenberg-Marquardt algorithm Principle: gradient descent when we are far from the solution (λ large) and Newton step when we are close (λ small).
1. Start from an initial X and a small λ (e.g. 10^{-3}).
2. Compute ΔX = −(J^T J + λ I)^{-1} J^T f(X).
3. Compare ε(X + ΔX) and ε(X):
3a. If ε(X + ΔX) ≈ ε(X), finish.
3b. If ε(X + ΔX) < ε(X): X ← X + ΔX, λ ← λ/10.
3c. If ε(X + ΔX) > ε(X): λ ← 10 λ.
4. Go to step 2.
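The scheme above fits in a few lines; a sketch under the stated λ-update rule, exercised on a made-up 1-D exponential fit (the test problem and tolerances are mine):

```python
import numpy as np

def levenberg_marquardt(f, jac, X, lam=1e-3, iters=100):
    """Minimize ||f(X)||^2 with the lambda schedule of the slide:
    accept a step and divide lambda by 10, or reject and multiply it by 10."""
    cost = np.sum(f(X) ** 2)
    for _ in range(iters):
        J = jac(X)
        dX = -np.linalg.solve(J.T @ J + lam * np.eye(len(X)), J.T @ f(X))
        new_cost = np.sum(f(X + dX) ** 2)
        if abs(new_cost - cost) < 1e-14:   # step 3a: no more progress, finish
            break
        if new_cost < cost:                # step 3b: accept, move toward Newton
            X, cost, lam = X + dX, new_cost, lam / 10
        else:                              # step 3c: reject, move toward gradient descent
            lam *= 10
    return X

# Recover the rate of y = exp(0.5 t) from samples, starting far away at X = 2:
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.5 * t)
f = lambda X: np.exp(X[0] * t) - y
jac = lambda X: (t * np.exp(X[0] * t)).reshape(-1, 1)
x_hat = levenberg_marquardt(f, jac, np.array([2.0]))
```

As the slide notes for homography and calibration problems, the starting point matters: here the algebraic (or any rough) estimate plays the role of the initial X.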
30 Example of distortion correction Results of Zhang: [Figures: snapshot 1 and snapshot 2]
31 Example of distortion correction Results of Zhang: [Figures: corrected image 1 and corrected image 2]
32 Contents Pinhole camera model Projective geometry Homographies Panorama Internal calibration Optimization techniques Conclusion Practical session
33 Conclusion Camera calibration matrix K (3×3) depends only on the internal parameters of the camera. Projection matrix P (3×4) depends on K and on the position/orientation of the camera. Homogeneous coordinates are convenient as they linearize the equations. A homography between two images arises when the observed scene is flat or when the optical center is fixed (pure rotation). 4 or more correspondences are enough to estimate a homography (in general).
34 Contents Pinhole camera model Projective geometry Homographies Panorama Internal calibration Optimization techniques Conclusion Practical session
35 Practical session: panorama construction Objective: the user clicks 4 or more corresponding points in the left and right images. After a right-button click, the program computes the homography and shows the resulting panorama in a new window.
Install Imagine++ on your machine.
Let the user click the matching points.
Build the linear system to solve Ax = b; use linsolve (in LinAlg) to find the solution x.
Compute the bounding box of the panorama.
Stitch the images: on the overlapping area, take the average of the colors at corresponding pixels in both images.
More informationMath 4310 Handout - Quotient Vector Spaces
Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable
More informationMATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.
MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α
More informationx1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.
Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability
More informationLeast-Squares Intersection of Lines
Least-Squares Intersection of Lines Johannes Traa - UIUC 2013 This write-up derives the least-squares solution for the intersection of lines. In the general case, a set of lines will not intersect at a
More informationContinued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
More informationAlgebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions.
Chapter 1 Vocabulary identity - A statement that equates two equivalent expressions. verbal model- A word equation that represents a real-life problem. algebraic expression - An expression with variables.
More information15. Symmetric polynomials
15. Symmetric polynomials 15.1 The theorem 15.2 First examples 15.3 A variant: discriminants 1. The theorem Let S n be the group of permutations of {1,, n}, also called the symmetric group on n things.
More informationAn Iterative Image Registration Technique with an Application to Stereo Vision
An Iterative Image Registration Technique with an Application to Stereo Vision Bruce D. Lucas Takeo Kanade Computer Science Department Carnegie-Mellon University Pittsburgh, Pennsylvania 15213 Abstract
More informationPrinciples of Scientific Computing Nonlinear Equations and Optimization
Principles of Scientific Computing Nonlinear Equations and Optimization David Bindel and Jonathan Goodman last revised March 6, 2006, printed March 6, 2009 1 1 Introduction This chapter discusses two related
More information8. Linear least-squares
8. Linear least-squares EE13 (Fall 211-12) definition examples and applications solution of a least-squares problem, normal equations 8-1 Definition overdetermined linear equations if b range(a), cannot
More informationFinite cloud method: a true meshless technique based on a xed reproducing kernel approximation
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING Int. J. Numer. Meth. Engng 2001; 50:2373 2410 Finite cloud method: a true meshless technique based on a xed reproducing kernel approximation N.
More informationHomogeneous systems of algebraic equations. A homogeneous (ho-mo-geen -ius) system of linear algebraic equations is one in which
Homogeneous systems of algebraic equations A homogeneous (ho-mo-geen -ius) system of linear algebraic equations is one in which all the numbers on the right hand side are equal to : a x + + a n x n = a
More informationUniversity of Lille I PC first year list of exercises n 7. Review
University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients
More informationA Flexible New Technique for Camera Calibration
A Flexible New Technique for Camera Calibration Zhengyou Zhang December 2, 1998 (updated on December 14, 1998) (updated on March 25, 1999) (updated on Aug. 1, 22; a typo in Appendix B) (updated on Aug.
More informationLinear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices
MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two
More informationSolution of Linear Systems
Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start
More informationPlanar Curve Intersection
Chapter 7 Planar Curve Intersection Curve intersection involves finding the points at which two planar curves intersect. If the two curves are parametric, the solution also identifies the parameter values
More informationLecture 14: Section 3.3
Lecture 14: Section 3.3 Shuanglin Shao October 23, 2013 Definition. Two nonzero vectors u and v in R n are said to be orthogonal (or perpendicular) if u v = 0. We will also agree that the zero vector in
More informationAPPLICATIONS. are symmetric, but. are not.
CHAPTER III APPLICATIONS Real Symmetric Matrices The most common matrices we meet in applications are symmetric, that is, they are square matrices which are equal to their transposes In symbols, A t =
More information1 2 3 1 1 2 x = + x 2 + x 4 1 0 1
(d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which
More information