Minimization on the Lie Group SO(3) and Related Manifolds. Camillo J. Taylor and David J. Kriegman. Yale University. Technical Report No. 9405.



Minimization on the Lie Group SO(3) and Related Manifolds
Camillo J. Taylor and David J. Kriegman
Yale University, Technical Report No. 9405, April 1994

Minimization on the Lie Group SO(3) and Related Manifolds

Camillo J. Taylor and David J. Kriegman
Center for Systems Science, Department of Electrical Engineering
Yale University, New Haven, CT 06520-8267
taylor@trurl.eng.yale.edu, kriegman@cs.yale.edu

Abstract

This paper presents a novel approach to carrying out numerical minimization procedures on the Lie group SO(3) and related manifolds. The approach constructs a sequence of local parameterizations of the manifold SO(3) rather than relying on a single global parameterization such as Euler angles. Thus, the problems caused by the singularities in these global parameterizations are avoided.

This research was supported by the NSF under grant number NYI IRI-925799.

1 Introduction

A number of interesting problems in computer vision and robotics involve finding the "optimal" rotation matrix in the set of rigid rotations SO(3) = {R ∈ ℝ^{3×3} : RᵀR = I, det(R) = 1} [GO89, Hor90]. For example, the problem of determining the position of a camera with respect to a known constellation of feature points from image data can be cast in terms of a real-valued objective function

    O : SO(3) → ℝ    (1)

which measures how well the proposed rotation matrix R explains the observed image data. Most of these optimization problems do not have closed-form solutions, which means that numerical optimization procedures are usually applied in order to find the solutions.²

In most numerical minimization paradigms, the unknown parameters are assumed to lie in some vector space isomorphic to ℝⁿ. Therefore, in order to apply these optimization techniques, the user constructs some parameterization of the set of rotation matrices, like Euler angles, that maps the vectors in ℝ³ onto the three-dimensional manifold SO(3). Unfortunately, the Lie group SO(3) is not isomorphic to the vector space ℝ³ [Stu64]. This means that all of these global parameterizations will exhibit singularities or other anomalies at various points in the parameter space. These anomalies can cause serious problems for gradient-based minimization procedures.

There are several approaches to optimization problems on a manifold such as SO(3) that involve calculating incremental steps in the tangent space to the manifold. That is, given an objective function O : M → ℝ and a particular element p on the manifold M, we could compute the steepest-descent direction, a, in the tangent space to M at p. We would then have to decide on a step size, λ, to use in the update step p′ = p + λa. In general, the new vector p′ will not lie on the manifold, so some procedure will be required to rescale p′ back onto the manifold.
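For concreteness, the rescaling step mentioned above is often implemented by projecting the updated matrix back onto SO(3). The sketch below shows one common choice, not prescribed by this paper: the nearest rotation in the Frobenius sense, computed via the SVD. It assumes NumPy, and the function name is ours.

```python
import numpy as np

def project_to_so3(M):
    """Nearest rotation matrix to M in the Frobenius norm, via the SVD.
    (One common choice of the 'rescaling' step; the name is ours.)"""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0.0:        # enforce det(R) = +1, not a reflection
        U[:, -1] *= -1.0
        R = U @ Vt
    return R
```

The method presented in this paper sidesteps this projection entirely by stepping inside a local parameterization of the manifold itself.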
The technique presented in this paper avoids this rescaling problem entirely by constructing an actual local parameterization of the manifold at each stage rather than a simple linear approximation. This means that the parameterization will map any incremental step in its domain to a distinct point on the manifold. The method also provides a systematic technique for computing the size and direction of the incremental step at each iteration. Our solution is to forgo the global parameterization approach in favor of an atlas of local parameterizations. That is, at every point R on the manifold SO(3) we construct a continuous, invertible, differentiable mapping between a neighborhood of R on the manifold and an open set in ℝ³, as shown in equation (2):

    R(ω) = R exp{J(ω)},   ω ∈ ℝ³,  √(ωᵀω) < π    (2)

where ω = (ω_x, ω_y, ω_z)ᵀ, exp is the matrix exponential operator [Cur79], and J(ω) is the skew-symmetric operator J : ℝ³ → so(3) given by:

           [   0    -ω_z    ω_y ]
    J(ω) = [  ω_z     0    -ω_x ]
           [ -ω_y    ω_x     0  ]

Equation (2) provides a one-to-one mapping between vectors in the open ball √(ωᵀω) < π and a local region of the manifold SO(3) centered around the point R. The objective function O can be expressed in terms of this local parameterization, and a quadratic approximation for the objective function around the point R can be constructed as follows:

    O(R(ω)) ≈ O(R) + gᵀω + ½ ωᵀHω    (3)

g and H represent the gradient and Hessian of the function, respectively, evaluated at the point ω = 0, which corresponds to the rotation matrix R. This quadratic approximation can be used to construct an incremental step which leads to a lower point on the error surface:

    ω_s = -H⁻¹g    (4)

This incremental step can be applied to determine the new rotation matrix as follows: R′ = R exp{J(ω_s)}. Note that the user must ensure that each minimization step lies within the range of the local parameterization, i.e. √(ω_sᵀω_s) < π.

The updating step can be accomplished in a particularly elegant manner if the rotations are represented by unit quaternions. The rotation R can be represented by the unit quaternion q, and the incremental step exp{J(ω_s)} by the quaternion q_s = (cos(θ/2), (sin(θ/2)/θ) ω_s), where θ = √(ω_sᵀω_s). The product of these two rotations can be obtained by carrying out a standard quaternion multiplication q·q_s (see the Appendix for a definition of quaternion multiplication).

² A notable exception to this generalization is the work of Nadas [Nad78].
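The local parameterization (2), the Rodrigues form of exp{J(ω)}, and the quaternion form of the update step can be sketched as follows. This is our NumPy illustration, not the authors' code; function names are ours, and the quaternion-to-matrix map is the homomorphism written out in the Appendix.

```python
import numpy as np

def J(w):
    """Skew-symmetric operator J : R^3 -> so(3)."""
    x, y, z = w
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def exp_so3(w):
    """R(w) = exp{J(w)} = I + sin(t) J(w_hat) + (1 - cos(t)) J(w_hat)^2, t = |w|."""
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3)
    K = J(np.asarray(w) / t)
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

def qmul(q, p):
    """Quaternion product (u0, u)(v0, v) = (u0 v0 - u.v, u x v + u0 v + v0 u)."""
    u0, u = q[0], q[1:]
    v0, v = p[0], p[1:]
    return np.r_[u0 * v0 - u @ v, np.cross(u, v) + u0 * v + v0 * u]

def q_from_w(w):
    """Quaternion of the step w: (cos(t/2), (sin(t/2)/t) w), t = sqrt(w^T w)."""
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.r_[np.cos(t / 2.0), (np.sin(t / 2.0) / t) * np.asarray(w)]

def rot_from_q(q):
    """Rotation matrix of a unit quaternion (the homomorphism of the Appendix)."""
    u0, u1, u2, u3 = q
    return np.array([
        [2*(u0*u0 + u1*u1) - 1, 2*(u1*u2 - u0*u3),     2*(u1*u3 + u0*u2)],
        [2*(u1*u2 + u0*u3),     2*(u0*u0 + u2*u2) - 1, 2*(u2*u3 - u0*u1)],
        [2*(u1*u3 - u0*u2),     2*(u2*u3 + u0*u1),     2*(u0*u0 + u3*u3) - 1]])
```

Updating a rotation then amounts to either `R @ exp_so3(ws)` or `qmul(q, q_from_w(ws))`; the two agree through `rot_from_q`.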

The entire minimization procedure is outlined below in pseudo-code:

    Choose starting estimate R
    Loop
        Compute gradient g and Hessian H wrt local parameterization
        If local gradient is small enough
            end
        else
            Compute minimization step: ω_s ← -H⁻¹g
            Update R: R ← R exp{J(ω_s)}

Note that the process described above is almost identical to the procedure employed in standard Newtonian minimization algorithms [JS83]; the only real difference is that at each iteration of this procedure the quadratic approximation and the minimization step are expressed in terms of a local parameter system rather than a global one.

The general approach described above is identical to the one proposed recently by Steven Smith [Smi93] in his work on minimization on Riemannian manifolds. However, this paper deals specifically with the special case of SO(3) and closely related manifolds rather than the more general problem discussed in Smith's thesis. Thus, we are able to take advantage of the special relationship between SO(3) and the group of unit quaternions Sp(1) to construct computationally efficient algorithms for this class of problems.

It can be shown that in a Euclidean parameter space, standard Newtonian minimization algorithms exhibit quadratic convergence when provided with a starting point sufficiently close to a minimum [JS83]. Steven Smith was able to show that the algorithm described above will also exhibit quadratic convergence on Riemannian manifolds like SO(3) [Smi93].

2 An Example

Consider the objective function given in equation (5). The maximum of this objective function corresponds to the rotation matrix that brings the vectors u_i and v_i into their best alignment. Such a problem might arise in computer vision during pose estimation from range data, where the u_i are a set of model vectors and the v_i are measurements.

    O(R) = Σ_{i=1}^{n} (u_iᵀ R v_i)²,   u_i, v_i ∈ ℝ³, R ∈ SO(3)    (5)
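The pseudo-code above, specialized to the example objective (5), can be sketched as follows. This is our NumPy illustration, not the authors' code, and all names are ours; the gradient and Hessian terms use the derivative formulas R J(x̂) and (R/2)(J(x̂)J(ŷ) + J(ŷ)J(x̂)) derived in the next section.

```python
import numpy as np

def J(w):
    """Skew-symmetric operator J : R^3 -> so(3)."""
    x, y, z = w
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def exp_so3(w):
    """exp{J(w)} via the Rodrigues formula."""
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3)
    K = J(np.asarray(w) / t)
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

def grad_hess(R, U, V):
    """Gradient and Hessian of O(w) = sum_i (u_i^T R exp{J(w)} v_i)^2 at w = 0."""
    E = np.eye(3)
    Ja = [J(E[a]) for a in range(3)]
    g = np.zeros(3)
    H = np.zeros((3, 3))
    for u, v in zip(U, V):
        r = u @ R @ v                                          # u^T R v
        dr = np.array([u @ R @ Ja[a] @ v for a in range(3)])   # dR/dw_a|_0 = R J(e_a)
        g += 2.0 * r * dr
        for a in range(3):
            for b in range(3):
                # d^2 R / (dw_a dw_b)|_0 = (R/2)(J(e_a)J(e_b) + J(e_b)J(e_a))
                d2 = u @ (0.5 * R @ (Ja[a] @ Ja[b] + Ja[b] @ Ja[a])) @ v
                H[a, b] += 2.0 * (dr[a] * dr[b] + r * d2)
    return g, H

def align(U, V, R, iters=100, tol=1e-12):
    """Newton iteration of the pseudo-code: seeks a critical point of O.
    Started near the optimum, it converges to the best-aligning rotation."""
    for _ in range(iters):
        g, H = grad_hess(R, U, V)
        if np.linalg.norm(g) < tol:       # local gradient small enough: stop
            break
        ws = -np.linalg.solve(H, g)       # minimization step w_s = -H^{-1} g
        n = np.linalg.norm(ws)
        if n >= np.pi:                    # keep the step inside sqrt(w^T w) < pi
            ws *= 0.5 * np.pi / n
        R = R @ exp_so3(ws)               # update R <- R exp{J(w_s)}
    return R
```

Because each step is mapped through exp{J(ω_s)}, every iterate is exactly a rotation matrix; no rescaling back onto the manifold is ever needed.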

The method described in the previous section can be applied to find the critical points of this function with respect to the rotation matrix R. At each stage of this minimization process, the Jacobian and Hessian of the objective function can be calculated as follows:

    R(ω) = R exp{J(ω)}    (6)

    O(ω) = Σ_{i=1}^{n} (u_iᵀ R(ω) v_i)²    (7)

    ∂O/∂ω_x = 2 Σ_{i=1}^{n} (u_iᵀ R(ω) v_i)(u_iᵀ (∂/∂ω_x) R(ω) v_i)    (8)

    ∂²O/(∂ω_x ∂ω_y) = 2 Σ_{i=1}^{n} { (u_iᵀ (∂/∂ω_x) R(ω) v_i)(u_iᵀ (∂/∂ω_y) R(ω) v_i) + (u_iᵀ R(ω) v_i)(u_iᵀ (∂²/(∂ω_x ∂ω_y)) R(ω) v_i) }    (9)

Equations (8) and (9) refer to individual terms in the Jacobian and Hessian respectively. The derivatives of the rotation matrix can be computed quite easily, as the following examples indicate:

    (∂/∂ω_x)(R exp{J(ω)}) |_{ω=0} = R (∂/∂ω_x) [ Σ_{n=0}^{∞} (1/n!) (J(ω))ⁿ ] |_{ω=0} = R J(x̂)

    (∂²/(∂ω_x ∂ω_y))(R exp{J(ω)}) |_{ω=0} = R (∂²/(∂ω_x ∂ω_y)) [ Σ_{n=0}^{∞} (1/n!) (J(ω))ⁿ ] |_{ω=0} = (R/2) (J(x̂)J(ŷ) + J(ŷ)J(x̂))

Notice that these expressions are quite simple to compute, and they do not involve evaluating any transcendental functions as an Euler parameterization would.

3 Further Applications

The following subsections describe how the local parameterization of SO(3) described in the previous section can be used to construct local parameterizations of several other manifolds commonly used in vision and robotics. As in section 2, the Jacobian and Hessian of these parameterizations can be readily computed, and the optimization procedure described in section 1 can be used to minimize objective functions over these manifolds.

3.1 The Group of Rigid Transformations SE(3)

Elements of the group of rigid transformations SE(3) are generally denoted by a tuple S = ⟨R, T⟩ where R ∈ SO(3) and T ∈ ℝ³. Given a particular element of this group, we can construct a

local parameterization S : ℝ⁶ → SE(3) that maps elements from an open ball in ℝ⁶ onto a neighborhood around ⟨R, T⟩ on the manifold as follows:

    S(ω, t) = ⟨R exp{J(ω)}, T + t⟩,   ω, t ∈ ℝ³    (10)

where the tuple ⟨ω, t⟩ can be viewed as an element of an open region in ℝ⁶.

3.2 The Surface of a Sphere

The parameterization of SO(3) described in the previous section can also be used to parameterize various smooth manifolds in ℝ³. For example, the surface of the unit sphere in ℝ³ can be denoted as follows: S² = {v̂ : v̂ ∈ ℝ³, v̂ᵀv̂ = 1}. Any element of this set can be expressed in terms of a rotation matrix R ∈ SO(3): v̂ = Rẑ, ẑ = (0 0 1)ᵀ.³ Given a particular element on the manifold, v̂ = Rẑ, we can construct a local parameterization v̂ : ℝ² → S² that maps elements from an open region of ℝ² onto a neighborhood of v̂ on the manifold as follows:

    v̂(ω_x, ω_y) = R exp{J((ω_x, ω_y, 0)ᵀ)} ẑ,   ω_x² + ω_y² < π²    (11)

It is trivial to show that the Jacobian of this local parameterization has rank 2 for all points on the manifold. This parameterization can be used to carry out minimization operations over the manifold. The update step at each iteration can be performed using quaternion multiplication as described earlier.

3.3 The Set of Infinite Straight Lines

Every straight line in ℝ³ can be represented in terms of a unit vector v̂, which indicates the direction of the line, and a vector d, which designates the point on the line that is closest to the origin. In other words, the set of straight lines in ℝ³ can be represented by the set of tuples

    L = {⟨v̂, d⟩ ∈ ℝ³ × ℝ³ : v̂ᵀv̂ = 1, v̂ᵀd = 0}

This set of tuples defines an algebraic set which can be viewed as a 4-dimensional manifold embedded in ℝ⁶.⁴

³ Note that for a given v̂ there is a one-dimensional subset of SO(3) that will satisfy this equation.
⁴ A careful reader will notice that there is actually a two-to-one correspondence between points on this manifold and the set of infinite straight lines, since ⟨v̂, d⟩ and ⟨-v̂, d⟩ denote the same line.
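The sphere parameterization (11) is easy to exercise numerically. A small NumPy sketch (the names are ours):

```python
import numpy as np

def J(w):
    """Skew-symmetric operator J : R^3 -> so(3)."""
    x, y, z = w
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def exp_so3(w):
    """exp{J(w)} via the Rodrigues formula."""
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3)
    K = J(np.asarray(w) / t)
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

def sphere_param(R, wx, wy):
    """v(wx, wy) = R exp{J((wx, wy, 0))} z_hat, for wx^2 + wy^2 < pi^2 (eq. 11)."""
    z_hat = np.array([0.0, 0.0, 1.0])
    return R @ exp_so3(np.array([wx, wy, 0.0])) @ z_hat
```

Every output lies exactly on the unit sphere, and a finite-difference Jacobian confirms the rank-2 claim at the chart's center.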

Any point on the manifold L can be represented in terms of a rotation matrix R ∈ SO(3) and two scalars a, b ∈ ℝ as follows: v̂ = Rẑ, d = R(ax̂ + bŷ). We can construct a local parameterization around any point ⟨v̂, d⟩ ∈ L that maps an open region of ℝ⁴ onto a neighborhood of ⟨v̂, d⟩ on the manifold (i.e. L : ℝ⁴ → L) as follows:

    L(ω_x, ω_y, δa, δb) = ⟨ R exp{J((ω_x, ω_y, 0)ᵀ)} ẑ,  R exp{J((ω_x, ω_y, 0)ᵀ)} ((a + δa)x̂ + (b + δb)ŷ) ⟩,   ω_x² + ω_y² < π²    (12)

Given a particular line represented by ⟨R, a, b⟩ and a particular incremental step ⟨ω_x, ω_y, δa, δb⟩, the new line will be represented by ⟨R exp{J((ω_x, ω_y, 0)ᵀ)}, a + δa, b + δb⟩.

4 Conclusion

This paper describes a novel approach to carrying out numerical optimization procedures over the Lie group SO(3) and related manifolds. This technique exploits the fact that the group of rotations has a natural parameterization based on the exponential operator associated with the Lie group. This approach avoids the inevitable singularities associated with global parameterizations like Euler angles. It should also prove more effective than other techniques that approximate gradient descent on the tangent space to the manifold. It may be possible to apply similar techniques to other Lie groups by taking advantage of their exponential operators.

All of the parameterizations described in this paper have been successfully implemented as part of an actual structure from motion algorithm described in [TK94]. In this application, minimization based on "standard" global parameterizations failed to converge in some situations, whereas the presented method never encountered the singularity issues that plagued these global parameterizations.

Acknowledgments: The authors wish to thank Prof. Daniel Koditschek and Prof. Roger Brockett for several useful discussions which helped to crystallize the ideas presented in this monograph.
Appendix: Representations of SO(3)

A number of different representations of the orthogonal group SO(3) are used in this paper; this appendix describes how these representations are related to each other. The orthogonal group

SO(3) is defined as a matrix subgroup of the general linear group GL(3):

    SO(3) = {R ∈ ℝ^{3×3} : RᵀR = I, det(R) = 1}    (13)

Following [Cur79], the exponential map associated with this Lie group provides a surjective map from ℝ³ onto SO(3):

    R(ω) = exp{J(ω)} = Σ_{n=0}^{∞} (1/n!) (J(ω))ⁿ    (14)

J(ω) is the skew-symmetric operator J : ℝ³ → ℝ^{3×3} given by:

           [   0    -ω_z    ω_y ]
    J(ω) = [  ω_z     0    -ω_x ]
           [ -ω_y    ω_x     0  ]

If the vector ω is rewritten in terms of a unit vector ω̂ and a magnitude θ, we can explicitly compute the sum of this infinite series, as shown below:

    R(ω) = R(θω̂) = I + sin θ J(ω̂) + (1 - cos θ)(J(ω̂))²    (15)

This equation is usually referred to as the Rodrigues formula; the unit vector ω̂ can be thought of as the axis of rotation, while the scalar variable θ denotes the magnitude of the rotation in radians.

The group of unit quaternions Sp(1) is defined as a set of tuples

    Sp(1) = {(u₀, u) : u₀ ∈ ℝ, u ∈ ℝ³, u₀² + uᵀu = 1}

with the following group operation (quaternion multiplication):

    (u₀, u)(v₀, v) = (u₀v₀ - uᵀv, J(u)v + u₀v + v₀u)    (16)

Given a quaternion q = (u₀, u), its conjugate q̄ is given by q̄ = (u₀, -u). It is a straightforward matter to show that qq̄ = q̄q = (1, 0), where (1, 0) is the identity element of the group.

Sp(1) can also be thought of as the unit 3-sphere in ℝ⁴; in fact, we can define a surjective mapping from ℝ³ onto Sp(1) as follows:

    q(ω) = q(θω̂) = (cos(θ/2), sin(θ/2) ω̂)    (17)

Given the mappings defined in equations (15) and (17), it is possible to show that the following equation must hold for every v ∈ ℝ³:

    q(ω)(0, v)q̄(ω) = (0, R(ω)v)    (18)

This equation suggests a possible connection between SO(3) and Sp(1). In fact, we can use this expression to define a surjective homomorphism ρ : Sp(1) → SO(3) that links the two groups:

    ρ(q) = ρ((u₀, (u₁ u₂ u₃)ᵀ)) =

        [ 2(u₀² + u₁²) - 1    2(u₁u₂ - u₀u₃)      2(u₁u₃ + u₀u₂)   ]
        [ 2(u₁u₂ + u₀u₃)      2(u₀² + u₂²) - 1    2(u₂u₃ - u₀u₁)   ]    (19)
        [ 2(u₁u₃ - u₀u₂)      2(u₂u₃ + u₀u₁)      2(u₀² + u₃²) - 1 ]

Note that this is not an isomorphism, since the quaternions (u₀, u) and (-u₀, -u) are mapped onto the same rotation matrix. Since ρ is a homomorphism, we know that it must preserve the group operation, that is, ρ(qv) = ρ(q)ρ(v) for all q, v ∈ Sp(1). This means that we can represent elements of SO(3) by unit quaternions in all of our calculations and replace the matrix multiplication operations with quaternion multiplications as defined in equation (16).

References

[Cur79] Morton Curtis. Matrix Groups. Springer-Verlag, 1979.

[GO89] Chaya Gurwitz and Michael L. Overton. A globally convergent algorithm for minimizing over the rotation group of quadratic forms. IEEE Trans. Pattern Anal. Machine Intell., 11(11):1228-1232, November 1989.

[Hor90] Berthold K. P. Horn. Relative orientation. International Journal of Computer Vision, 4:59-78, 1990.

[JS83] J. E. Dennis Jr. and Robert B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, 1983.

[Nad78] Arthur Nadas. Least squares and maximum likelihood estimation of rigid motion. IBM Research Report RC6945, IBM, Yorktown Heights, 1978.

[Smi93] Steven Smith. Geometric Optimization Methods for Adaptive Filtering. PhD thesis, Harvard University, Division of Applied Sciences, Cambridge, MA, September 1993.

[Stu64] John Stuelpnagel. On the parameterization of the three-dimensional rotation group. SIAM Review, 6(4):422-430, October 1964.

[TK94] Camillo J. Taylor and David J. Kriegman. Structure and motion from line segments in multiple images. Center for Systems Science Technical Report 9402, Yale University, January 1994.

This equation suggests a possible connection between SO(3) and Sp(). In fact, we can use this expression to dene a surjective homomorphism : Sp()! SO(3) that links the two groups. (q) =((u (uu2u3) t )= 2(u 2 + u2 ) ; 2(u u2 ; uu3) 2(uu3 + uu2) 2(uu2 + uu3) 2(u 2 + u2 2 ) ; 2(u 2u3 ; uu) A (9) 2(uu3 ; uu2) 2(u2u3 + uu) 2(u 2 + u 2 3) ; Note that this is not an isomorphism since the quaternions (u u) and(;u ;u) are mapped onto the same rotation matrix. Since is a homomorphism we know that it must preserve the group operation, that is (qv) =(q)(v) 8q v 2 Sp(). This means that we can represent elements of SO(3) by unit quaternions in all of our calculations and replace the matrix multiplication operations with quaternion multiplications as dened in equation (6). References [ur79] Morton urtis. Matrix Groups. Springer-Verlag, 979. [GO89] haya Gurwitz and Michael L. Overton. A globally convergent algorithm for minimizing over the rotation group of quadratic forms. IEEE Trans. Pattern Anal. Machine Intell., ():228{232, November 989. [Hor9] erthold K.P. Horn. Relative orientation. International Journal of omputer Vision, 4:59{78, 99. [JS83] J.E. Dennis Jr. and Robert. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, 983. [Nad78] Arthur Nadas. Least squares and maximum likelihood estimation of rigid motion. IM Research Report R6945, IM, Yorktown Heights, 978. [Smi93] Steven Smith. Geometric Optimization Methods for Adaptive Filtering. PhD thesis, Harvard University, Division of Applied Sciences, ambridge MA, September 993. [Stu64] John Stuelpnagel. On the parameterization of the three-dimensional rotation group. SIAM Review, 6(4):422{43, October 964. [TK94] amillo J. Taylor and David J. Kriegman. Structure and motion from line segments in multiple images. enter for Systems Science 942, Yale University, January 994. 8