Modélisation et résolutions numérique et symbolique




Modélisation et résolutions numérique et symbolique via les logiciels Maple et Matlab
Jeremy Berthomieu, Mohab Safey El Din, Stef Graillat (Mohab.Safey@lip6.fr)

Outline: Previous course: partial review of what you should know; Linear Algebra: basic concepts; Linear Algebra: basic algorithms; Linear Algebra: Solving and CRT; Linear Algebra: how to survive in Maple.

Operations on integers Assuming β = size(n) ≥ size(m): Addition of n, m is in O(size(n)), and the size of the output is ≈ max(size(n), size(m)). Multiplication of n, m: M_int(β) ∈ O(β^2) (naive), M_int(β) ∈ O(β^1.59) (Karatsuba), M_int(β) ∈ O(β log β log log β) (Fast Fourier Transform), and the size of the output is ≈ size(n) + size(m).
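
As a quick illustration in Maple (a minimal sketch, not from the slides; "size" is taken here as the number of decimal digits, which is what Maple's length reports for an integer):

    n := 10^50 + 7:  m := 10^49 + 3:
    length(n), length(m);   # sizes of the operands: 51, 50
    length(n + m);          # about max(size(n), size(m)): 51
    length(n * m);          # about size(n) + size(m): 100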

GCD and Euclid's algorithm Input: A, B integers. Output: R and U, V such that R = GCD(A, B) and R = AU + BV. Q, RS, US, VS are local variables which store intermediate data. The equations R = A*U + B*V and R1 = A*U1 + B*V1 are loop invariants.

    ExtendedEuclid := proc(A, B)
      local R, R1, U, U1, V, V1, Q, RS, US, VS;
      # Initialize
      R := A;  R1 := B;  U := 1;  V := 0;  U1 := 0;  V1 := 1;
      while R1 <> 0 do
        Q := iquo(R, R1);
        RS := R;  US := U;  VS := V;
        R := R1;  U := U1;  V := V1;
        R1 := RS - Q*R1;  U1 := US - Q*U1;  V1 := VS - Q*V1;
      od;
      return R, U, V;
    end proc;

Complexity is in O(size(A)^2) (assuming A ≥ B). Proof is skipped.
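
To check the procedure above against Maple's built-ins (a quick sketch; ExtendedEuclid is the name chosen above, while igcd and igcdex are standard Maple functions):

    ExtendedEuclid(240, 46);    # returns the sequence 2, -9, 47, and indeed 240*(-9) + 46*47 = 2
    igcd(240, 46);              # built-in gcd: 2
    igcdex(240, 46, 'u', 'v');  # built-in extended gcd: returns 2 and assigns Bézout cofactors to u and v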

Chinese Remainder Theorem Let P_1, ..., P_k be pairwise coprime and A_1, ..., A_k be a sequence of integers such that A_i ∈ Z/P_i Z. There exists an integer X such that X = A_1 mod P_1, ..., X = A_k mod P_k. Moreover, all solutions are congruent modulo N = P_1 ··· P_k. Application: choose P_1, ..., P_k prime, of size smaller than the machine word.

Chinese Remainder Reconstruction Take P = P_1 ··· P_k. For 1 ≤ i ≤ k, remark that P/P_i and P_i are coprime. Take U_i as the inverse of P/P_i modulo P_i, i.e. U_i (P/P_i) + V_i P_i = 1. Construct X = A_1 (P/P_1) U_1 + ··· + A_k (P/P_k) U_k.
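
A minimal Maple sketch of this reconstruction (not from the slides; the moduli and residues are arbitrary, and the built-in chrem is used only as a cross-check):

    P := [3, 5, 7]:  A := [2, 3, 2]:
    N := mul(P[i], i = 1 .. nops(P)):                                    # N = 105
    X := add(A[i]*(N/P[i])*((N/P[i])^(-1) mod P[i]), i = 1 .. nops(P)):
    X mod N;      # 23, the unique solution modulo 105
    chrem(A, P);  # built-in Chinese remaindering, also 23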

Examples Solve X = 2 mod 3 and X = 3 mod 5. Solve X = 1 mod 2, X = 2 mod 3 and X = 3 mod 5.
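
Both systems can be checked directly with Maple's chrem (a quick sketch, not part of the slides):

    chrem([2, 3], [3, 5]);        # 8: 8 mod 3 = 2 and 8 mod 5 = 3, unique modulo 15
    chrem([1, 2, 3], [2, 3, 5]);  # 23: unique modulo 30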

Basic algorithms in Linear Algebra One of the most fundamental computational topics. Applications in: Graphs (think about adjacency matrices); Coding and error-correcting codes (linear codes); Artificial Intelligence (discrimination, SVD, learning algorithms, etc.); Vision and Computer Graphics; etc. It is everywhere. Bibliography: Modern Computer Algebra, von zur Gathen, Gerhard, Cambridge University Press. Méthodes matricielles: Introduction à la complexité algébrique, Abdeljaoued, Lombardi, Springer. Donald E. Knuth, Seminumerical Algorithms, The Art of Computer Programming, vol. 2. We will see very elementary notions and algorithms, with a focus on bit complexity and some tricks for implementations.

Vector spaces and linear applications Let K be a field. We say that E is a K-vector space if and only if: there exists 0 ∈ E such that for all u ∈ E, 0 + u = u; for all u ∈ E, there exists v such that u + v = 0; for all u, v in E, u + v ∈ E; for all λ ∈ K and v ∈ E, λv ∈ E; for 1 ∈ K and all u ∈ E, 1·u = u; addition (between elements of E) is associative and commutative. Some examples: K^n is a vector space; K[X] is a K-vector space.

Linear applications and finite dimensional vector spaces We will be interested in vector spaces in which we can compute! Basic operations are linear operations between vectors. Linear maps: Let K be a field, E and F be two K-vector spaces. We say that a map φ : E → F is linear if and only if: for all λ ∈ K and u ∈ E, φ(λu) = λφ(u); for all u, v in E, φ(u + v) = φ(u) + φ(v). Basis: We say that B ⊆ E is a basis of E if and only if it is a set of linearly independent elements of E and, for all u ∈ E, there exist b_1, ..., b_k in B and λ_1, ..., λ_k in K such that u = λ_1 b_1 + ··· + λ_k b_k. Example: 1, X, X^2, ..., X^k, ... is a basis of K[X].

Matrices and linear applications Let K be a field, E and F be K-vector spaces and n, m be in N. Let φ : E → F be a linear map. Let B_E, B_F be bases of E and F. Finite dimensional vector spaces: We say that E has dimension n if and only if there exists a basis B_E of E of cardinality n. These are the vector spaces in which one can compute (why?). Linear maps in finite dimensional vector spaces: The map φ is completely determined by the knowledge of φ(B_E) expressed in B_F.

Matrices and linear applications Let B_E = (e_1, ..., e_n) (resp. B_F = (f_1, ..., f_m)) be a basis of E (resp. F). Write
φ(e_1) = a_{1,1} f_1 + ··· + a_{m,1} f_m,
...,
φ(e_n) = a_{1,n} f_1 + ··· + a_{m,n} f_m.
The m × n matrix
    [ a_{1,1} ... a_{1,n} ]
    [   ...         ...   ]
    [ a_{m,1} ... a_{m,n} ]
is associated to φ. Taking a = λ_1 e_1 + ··· + λ_n e_n, the coordinates of φ(a) in B_F are given by the matrix-vector product
    [ a_{1,1} ... a_{1,n} ] [ λ_1 ]
    [   ...         ...   ] [ ... ]
    [ a_{m,1} ... a_{m,n} ] [ λ_n ].
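
As a concrete illustration (a small Maple sketch with a made-up 2 x 3 matrix, not taken from the slides):

    with(LinearAlgebra):
    A := Matrix([[1, 2, 0], [0, 1, 3]]):   # matrix of a map K^3 -> K^2 in the canonical bases
    v := Vector([1, 1, 1]):                # coordinates of a vector a in the source basis
    A . v;                                 # coordinates of phi(a) in the target basis: <3, 4>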

Data-structures and matrices Data-structure: basically, matrices are arrays on which we perform computations. Many data-structures can be used: bi-dimensional arrays; uni-dimensional arrays (storing the entries of the matrix row after row or column after column); dedicated structures for sparse matrices (less memory usage); dedicated structures for structured matrices (less memory usage).

Basic operations on matrices (should be known) Addition of matrices (of the same sizes) is the addition of maps (the map φ + ψ being defined by (φ + ψ)(u) = φ(u) + ψ(u)): O(nm). Scalar multiplication of a matrix (by λ ∈ K) is the multiplication of a map by λ: O(nm). The set of n × m matrices is a K-vector space. Matrix-vector multiplication provides the image of a vector by a map: O(nm). Matrix-matrix multiplication (with compatible sizes n × m and m × k) is the composition of linear maps: O(nmk). Exercises: review the algorithms and their complexities.
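
In Maple these operations are available directly on Matrix and Vector objects (a quick sketch with arbitrary small matrices, not from the slides):

    with(LinearAlgebra):
    A := Matrix([[1, 2], [3, 4]]):  B := Matrix([[0, 1], [1, 0]]):
    A + B;                 # matrix addition
    3 * A;                 # scalar multiplication
    A . Vector([1, 1]);    # matrix-vector product: <3, 7>
    A . B;                 # matrix-matrix product, i.e. composition of the maps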

Kernel and Images Let φ : E → F be a linear map. Kernel: ker(φ) = {u ∈ E | φ(u) = 0}; it is a K-vector subspace. Image: Im(φ) = {v ∈ F | there exists u ∈ E with φ(u) = v}; it is a K-vector subspace. Theorem (dimension formula): dim(Im(φ)) + dim(ker(φ)) = n, where n = dim(E). If ker(φ) is big, Im(φ) is small, and conversely. If ker(φ) = {0} then φ is injective; if n = m and ker(φ) = {0} (square matrix), φ is an isomorphism.
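
A small Maple check of the dimension formula on an arbitrary singular matrix (a sketch, not from the slides):

    with(LinearAlgebra):
    A := Matrix([[1, 2, 3], [2, 4, 6], [1, 1, 1]]):   # 3 x 3 matrix with linearly dependent rows
    nops(NullSpace(A));   # dim(ker) = 1
    Rank(A);              # dim(Im) = 2, and 1 + 2 = 3 = n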

The rank of a matrix The rank of a matrix A is dim(Im(A)). It measures how close Im(A) is to the target space. It is also the number of linearly independent rows (resp. columns) of A. Application: Let A be an n × n matrix and v be a vector. The system A·X = v has a (unique) solution if ker(A) = {0}, i.e. rank(A) = n! We need a measure of rank defects: determinants. This is relevant to computationally solve linear systems.
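
For instance (a quick Maple sketch with arbitrary matrices, not from the slides):

    with(LinearAlgebra):
    Rank(Matrix([[2, 1], [1, 3]]));   # 2 = n: A.X = v has a unique solution for every v
    Rank(Matrix([[1, 2], [2, 4]]));   # 1 < n: the kernel is nontrivial, no uniqueness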

Determinant Now we consider square matrices with entries in a field K. Formal definition: Let A = (a_{i,j})_{1 ≤ i,j ≤ n} be an n × n matrix. The determinant of A is
det(A) = Σ_{σ ∈ S_n} (-1)^{sign(σ)} a_{1,σ(1)} ··· a_{n,σ(n)}
where sign(σ) = #{1 ≤ i < j ≤ n | σ(i) > σ(j)} (the number of inversions of σ).
Geometric interpretation and properties: Determinants can be obtained by ring operations. Determinants and products of matrices: det(AB) = det(A)·det(B). Laplace expansion along row i: det(A) = Σ_{1 ≤ j ≤ n} (-1)^{i+j} a_{i,j} det(A_{i,j}), where A_{i,j} is the submatrix obtained by deleting row i and column j. Geometric interpretation: det(A) = 0 if and only if the rows/columns are linearly dependent.
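
These properties can be checked in Maple (a sketch with arbitrary small matrices, not from the slides):

    with(LinearAlgebra):
    A := Matrix([[1, 2], [3, 4]]):  B := Matrix([[0, 1], [1, 1]]):
    Determinant(A), Determinant(B);                          # -2, -1
    Determinant(A . B) = Determinant(A) * Determinant(B);    # both sides evaluate to 2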

Determinants and solving linear systems Now, we want to solve systems of the form A·X = B where A is an n × n matrix and B is an n × 1 vector. This linear system has a unique solution if det(A) ≠ 0. One can use Cramer's formula
x_i = det(B_i) / det(A)
with
    B_i = [ a_{0,0}   ... a_{0,i-1}    b_0      a_{0,i+1}   ... a_{0,n-1}   ]
          [   ...            ...       ...         ...             ...      ]
          [ a_{n-1,0} ... a_{n-1,i-1}  b_{n-1}  a_{n-1,i+1} ... a_{n-1,n-1} ],
i.e. B_i is A with its i-th column replaced by B. Without any smart algorithm for computing determinants, one uses the formula based on cofactors: O(n!). Such a complexity is prohibitive.
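
A direct transcription of Cramer's formula on a small arbitrary system (a Maple sketch, not from the slides; LinearSolve is used only as a cross-check):

    with(LinearAlgebra):
    A := Matrix([[1, 2], [3, 5]]):  b := Vector([5, 13]):
    B1 := Matrix([[5, 2], [13, 5]]):   # A with column 1 replaced by b
    B2 := Matrix([[1, 5], [3, 13]]):   # A with column 2 replaced by b
    Determinant(B1) / Determinant(A);  # x_1 = 1
    Determinant(B2) / Determinant(A);  # x_2 = 2
    LinearSolve(A, b);                 # cross-check: <1, 2>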

Hadamard's inequality Let A = (a_{i,j})_{1 ≤ i,j ≤ n} be an n × n matrix with entries in Z. Then
|det(A)| ≤ ∏_{j=1}^{n} sqrt( Σ_{1 ≤ i ≤ n} a_{i,j}^2 ).
Use in the scope of the Chinese Remainder Theorem?! We need to lift elements of Z (from their modular images)! Exercise: provide an upper bound on the size of det(A).
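
A quick Maple check of the bound on an arbitrary integer matrix (a sketch, not from the slides):

    with(LinearAlgebra):
    A := Matrix([[3, -1, 2], [0, 4, 1], [5, 2, -2]]):
    bound := mul(sqrt(add(A[i, j]^2, i = 1 .. 3)), j = 1 .. 3):
    evalf(bound);      # about 80.2: Hadamard's upper bound on |det(A)|
    Determinant(A);    # -75, consistent with the bound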

Eigenvalues, Eigenvectors Definitions: We say that λ is an eigenvalue of A if and only if there exists v ≠ 0 such that Av = λv; in this case v is an eigenvector. The eigenspace E_λ associated to λ is the set of vectors v such that Av = λv. E_λ is a vector subspace. λ is an eigenvalue of A if and only if it is a root of det(A − λ·Id). The ground field plays a particular role: if A has entries in K, eigenvalues may not lie in K but in a larger field (an algebraic closure). Eigenvalues play a crucial role in many areas of computer science: data mining (PageRank), decision theory, imagery, scientific computing, etc. BUT first we need good algorithms for solving linear systems (with polynomial bit complexity)!
Determinants
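
For completeness, eigenvalues and the characteristic polynomial are directly available in Maple's LinearAlgebra package (a sketch on an arbitrary symmetric matrix, not from the slides):

    with(LinearAlgebra):
    A := Matrix([[2, 1], [1, 2]]):
    CharacteristicPolynomial(A, lambda);   # lambda^2 - 4*lambda + 3, whose roots are the eigenvalues
    Eigenvalues(A);                        # 1 and 3
    Eigenvectors(A);                       # eigenvalues together with bases of the eigenspaces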