LINEAR SYSTEMS

Consider the following example of a linear system:

    x1 + 2x2 + 3x3 = 5
    -x1       + x3 = 3
    3x1 + x2 + 3x3 = 3

Its unique solution is

    x1 = -1,  x2 = 0,  x3 = 2

In general we want to solve n equations in n unknowns. For this, we need some simplifying notation. In particular we introduce arrays. We can think of these as a means for storing information about the linear system in a computer. In the above case, we introduce

    A = [  1  2  3 ]      b = [ 5 ]      x = [ -1 ]
        [ -1  0  1 ],         [ 3 ],         [  0 ]
        [  3  1  3 ]          [ 3 ]          [  2 ]
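The solution can be checked numerically. The notes use MATLAB later on; here is a minimal NumPy sketch (Python chosen purely for illustration) that solves the same 3-by-3 system:

```python
import numpy as np

# Coefficients of the 3x3 example above
A = np.array([[ 1.0, 2.0, 3.0],
              [-1.0, 0.0, 1.0],
              [ 3.0, 1.0, 3.0]])
b = np.array([5.0, 3.0, 3.0])

x = np.linalg.solve(A, b)  # direct solve (LU factorization with pivoting)
print(x)                   # the unique solution (-1, 0, 2)
```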

These arrays completely specify the linear system and its solution. We also know that we can give meaning to multiplication and addition of these quantities, calling them matrices and vectors. The linear system is then written as Ax = b, with Ax denoting a matrix-vector multiplication.

The general system is written as

    a11 x1 + ... + a1n xn = b1
      ...
    an1 x1 + ... + ann xn = bn

This is a system of n linear equations in the n unknowns x1, ..., xn. It can be written in matrix-vector notation as Ax = b, with

    A = [ a11 ... a1n ]      b = [ b1 ]      x = [ x1 ]
        [ ...     ... ],         [ .. ],         [ .. ]
        [ an1 ... ann ]          [ bn ]          [ xn ]

A TRIDIAGONAL SYSTEM

Consider the tridiagonal linear system

    3x1 - x2 = 2
    -x1 + 3x2 - x3 = 1
      ...
    -x(n-2) + 3x(n-1) - xn = 1
    -x(n-1) + 3xn = 2

The solution is

    x1 = x2 = ... = xn = 1

This has the associated arrays

    A = [  3 -1  0 ...  0 ]      b = [ 2 ]      x = [ 1 ]
        [ -1  3 -1 ...  0 ]          [ 1 ]          [ 1 ]
        [   ...  ...  ... ],         [...],         [...]
        [  0 ... -1  3 -1 ]          [ 1 ]          [ 1 ]
        [  0 ...  0 -1  3 ]          [ 2 ]          [ 1 ]
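As a quick sanity check, the tridiagonal system can be built and solved for any n; a NumPy sketch (Python used for illustration):

```python
import numpy as np

n = 10  # any n >= 2
# Tridiagonal matrix: 3 on the diagonal, -1 on both off-diagonals
A = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
b[0] = b[-1] = 2.0  # first and last right-hand sides are 2

x = np.linalg.solve(A, b)
print(x)  # all components equal 1
```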

SOLVING LINEAR SYSTEMS

Linear systems Ax = b occur widely in applied mathematics. They occur as direct formulations of real-world problems; but more often, they occur as a part of the numerical analysis of some other problem. As examples of the latter, we have the construction of spline functions, the numerical solution of systems of nonlinear equations, ordinary and partial differential equations, integral equations, and the solution of optimization problems.

There are many ways of classifying linear systems.

Size: Small, moderate, and large. This of course varies with the machine you are using. Most PCs are now being sold with a memory of 256 to 512 megabytes (MB), and my old HP workstation has 768 MB (it is over four years old).

For a matrix A of order n x n, it will take 8n^2 bytes to store it in double precision. Thus a matrix of order 8000 will need around 512 MB of storage. The latter would be too large for most present-day PCs, if the matrix were to be stored in the computer's memory, although one can easily expand a PC to contain much more memory than this.

Sparse vs. Dense: Many linear systems have a matrix A in which almost all the elements are zero. These matrices are said to be sparse. For example, it is quite common to work with tridiagonal matrices

    A = [ a1 c1  0  ...  0 ]
        [ b2 a2 c2  ...  0 ]
        [  0 b3 a3 c3 ...  ]
        [       ...    ... ]
        [  0  ...   bn  an ]

in which the order is 10^4 or much more. For such matrices, it does not make sense to store the zero elements, and the sparsity should be taken into account when solving the linear system Ax = b. Also, the sparsity need not be as regular as in this example.
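The storage arithmetic is easy to script. A small sketch comparing dense storage (8n^2 bytes in double precision) with storing only the three bands of a tridiagonal matrix (3n - 2 entries):

```python
def dense_bytes(n):
    """Bytes to store a full n x n matrix in double precision."""
    return 8 * n * n

def tridiagonal_bytes(n):
    """Bytes to store only the three bands: n diagonal entries
    plus 2*(n - 1) off-diagonal entries."""
    return 8 * (3 * n - 2)

print(dense_bytes(8000))        # 512,000,000 bytes: about 512 MB
print(tridiagonal_bytes(8000))  # 191,984 bytes: a tiny fraction
```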

BASIC DEFINITIONS & THEORY

A homogeneous linear system Ax = b is one for which the right-hand constants are all zero. Using vector notation, we say b is the zero vector for a homogeneous system. Otherwise the linear system is called non-homogeneous.

Theorem. The following are equivalent statements.
(1) For each b, there is exactly one solution x.
(2) For each b, there is a solution x.
(3) The homogeneous system Ax = 0 has only the solution x = 0.
(4) det(A) ≠ 0.
(5) The inverse A^(-1) exists.
[The matrix inverse and determinant are introduced in Section 6.2, but they belong as a part of this theorem.]
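Conditions (3)-(5) of the theorem can be observed together on a concrete nonsingular matrix; a NumPy sketch using the coefficient matrix of the opening 3-by-3 example:

```python
import numpy as np

A = np.array([[ 1.0, 2.0, 3.0],
              [-1.0, 0.0, 1.0],
              [ 3.0, 1.0, 3.0]])

d = np.linalg.det(A)                  # (4) det(A) != 0
Ainv = np.linalg.inv(A)               # (5) the inverse exists
x0 = np.linalg.solve(A, np.zeros(3))  # (3) Ax = 0 has only x = 0

print(d)                                 # nonzero (here det(A) = 8)
print(np.allclose(x0, 0.0))              # True
print(np.allclose(Ainv @ A, np.eye(3)))  # True
```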

EXAMPLE. Consider again the tridiagonal system

    3x1 - x2 = 2
    -x1 + 3x2 - x3 = 1
      ...
    -x(n-2) + 3x(n-1) - xn = 1
    -x(n-1) + 3xn = 2

The homogeneous version is simply

    3x1 - x2 = 0
    -x1 + 3x2 - x3 = 0
      ...
    -x(n-2) + 3x(n-1) - xn = 0
    -x(n-1) + 3xn = 0

Assume x ≠ 0, and therefore that x has at least one nonzero component. Let xk denote a component of maximum size:

    |xk| = max{ |xj| : 1 <= j <= n }

Consider now equation k, and assume 1 < k < n. Then

    -x(k-1) + 3xk - x(k+1) = 0
    xk = (x(k-1) + x(k+1)) / 3
    |xk| <= (|x(k-1)| + |x(k+1)|) / 3 <= (|xk| + |xk|) / 3 = (2/3) |xk|

Since |xk| <= (2/3)|xk| is possible only for xk = 0, and xk is a component of maximum size, this implies x = 0. A similar proof is valid if k = 1 or k = n, using the first or the last equation, respectively. Thus the original tridiagonal linear system Ax = b has a unique solution x for each right side b.
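The maximum-component argument above in fact proves a general result: any strictly diagonally dominant matrix (each diagonal entry exceeding in size the sum of the absolute values of the other entries in its row, here 3 > 1 + 1) is nonsingular. A NumPy sketch of the dominance check:

```python
import numpy as np

def strictly_diagonally_dominant(A):
    """True if |a_ii| > sum over j != i of |a_ij| in every row."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off))

for n in (2, 5, 50):
    A = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    assert strictly_diagonally_dominant(A)
    # nonsingular, so Ax = 0 has only the zero solution
    assert abs(np.linalg.det(A)) > 0
```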

METHODS OF SOLUTION

There are two general categories of numerical methods for solving Ax = b.

Direct Methods: These are methods with a finite number of steps, and they end with the exact solution x, provided that all arithmetic operations are exact. The most used of these methods is Gaussian elimination, which we begin with. There are other direct methods, but we do not study them here.

Iteration Methods: These are used in solving all types of linear systems, but they are most commonly used with large sparse systems, especially those produced by discretizing partial differential equations. This is an extremely active area of research.
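As one concrete instance of an iteration method, Jacobi's method (named here only as an illustration; it is not developed in this section) solves the i-th equation for x_i using the current values of the other unknowns, and repeats. A hedged NumPy sketch on the tridiagonal system from earlier, where diagonal dominance guarantees convergence:

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Plain Jacobi iteration: x_new = (b - R x) / d,
    where d is the diagonal of A and R = A - diag(d)."""
    A = np.asarray(A, dtype=float)
    d = np.diag(A)
    R = A - np.diag(d)  # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / d
    return x

n = 20
A = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
b[0] = b[-1] = 2.0

x = jacobi(A, b)
print(np.allclose(x, np.ones(n)))  # True: converges to the exact solution
```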

MATRICES in MATLAB

Consider the matrices

    A = [ 1 2 3 ]      b = [ 1 ]
        [ 2 2 3 ],         [ 1 ]
        [ 3 3 3 ]          [ 1 ]

In MATLAB, A can be created as follows:

    A = [1 2 3; 2 2 3; 3 3 3];
    A = [1, 2, 3; 2, 2, 3; 3, 3, 3];
    A = [1 2 3
         2 2 3
         3 3 3];

Commas can be used to replace the spaces. The vector b can be created by

    b = ones(3,1);

Consider setting up the matrices for the system Ax = b with

    A(i,j) = max{i, j},    b(i) = 1,    1 <= i, j <= n

One way to set up the matrix A is as follows:

    A = zeros(n,n);
    for i = 1:n
        A(i,1:i) = i;
        A(i,i+1:n) = i+1:n;
    end

and set up the vector b by

    b = ones(n,1);
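The same construction can be checked in Python; a NumPy sketch translating the MATLAB loop (note the shift to 0-based indexing) and verifying that every entry equals max{i, j}:

```python
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(1, n + 1):                   # MATLAB's  for i = 1:n
    A[i - 1, :i] = i                        # A(i,1:i) = i
    A[i - 1, i:] = np.arange(i + 1, n + 1)  # A(i,i+1:n) = i+1:n
b = np.ones(n)                              # b = ones(n,1)

# Verify entry-wise: A[i-1, j-1] == max(i, j)
idx = np.arange(1, n + 1)
print(np.array_equal(A, np.maximum.outer(idx, idx)))  # True
```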