Numerical Methods for Civil Engineers. Linear Equations


Numerical Methods for Civil Engineers
Lecture Notes CE 311K
Daene C. McKinney
Introduction to Computer Methods
Department of Civil, Architectural and Environmental Engineering
The University of Texas at Austin

Introduction

Linear Equations

In many engineering applications it is necessary to solve systems of linear equations. Frequently, the number of equations will be equal to the number of unknowns. In such cases, we are usually able to solve for unique values of the variables. If there are more variables than equations, we expect, in general, to obtain an infinite number of solutions. Sometimes we have more equations than variables. However, not all the equations may be independent; that is, some of them can be derived from others. Under this circumstance, we try to find enough independent equations to be able to solve for all the variables.

Consider the linear system of equations

    Ax = b

where A is an (n x n) matrix of coefficients, b is a column vector of constants, called the right-hand side, and x is the vector of n unknown solution values to be determined. This system can be written out as

    | a11 a12 ... a1n | | x1 |   | b1 |
    | a21 a22 ... a2n | | x2 |   | b2 |
    |  :   :       :  | |  : | = |  : |
    | an1 an2 ... ann | | xn |   | bn |

Performing the matrix multiplication and writing each equation out separately, we have

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
    ...
    an1 x1 + an2 x2 + ... + ann xn = bn

This system can also be written in the following manner

    ai1 x1 + ai2 x2 + ... + ain xn = bi,   i = 1, ..., n

A formal way to obtain a solution using matrix algebra is to multiply each side of the equation by the inverse of A to yield

    A^-1 A x = A^-1 b

or, since A^-1 A = I, we have

    x = A^-1 b

Thus, we have obtained the solution to the system of equations. Unfortunately, this is not a very efficient way of solving the system of equations. We will discuss more efficient ways in the following sections.
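As a quick illustration (not part of the original notes), the formula x = A^-1 b can be compared with a library solver. The sketch below assumes Python with numpy and uses the two-equation example introduced next; in practice a factorization-based solve is preferred over forming the inverse explicitly.

    # Sketch (assumes numpy): solve Ax = b via the inverse and via a direct solver.
    # The 2-by-2 system is the example discussed below.
    import numpy as np

    A = np.array([[3.0, 2.0],
                  [4.0, 1.0]])
    b = np.array([7.0, 1.0])

    x_inverse = np.linalg.inv(A) @ b   # x = A^-1 b: easy to write, but more work
    x_direct = np.linalg.solve(A, b)   # factorization-based solve: preferred in practice

    print(x_inverse)   # [-1.  5.]
    print(x_direct)    # [-1.  5.]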

Example. Consider the following two equations in two unknowns:

    3 x1 + 2 x2 = 7
    4 x1 +   x2 = 1

Solve the first equation for x2

    x2 = 7/2 - (3/2) x1

This equation represents a straight line with an intercept of 7/2 and a slope of (-3/2). Now, solve the second equation for x2

    x2 = 1 - 4 x1

This is also a straight line, but with an intercept of 1 and a slope of (-4). These lines are plotted in the following Figure. The solution is the intersection of the two lines at x1 = -1, x2 = 5.

Figure. Graphical solution of two simultaneous linear equations.
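The figure is easy to reproduce. The short script below is not part of the original notes; it assumes Python with numpy and matplotlib, plots the two lines, and marks their intersection at (-1, 5).

    # Sketch (assumes numpy and matplotlib): plot the two lines
    #   x2 = 7/2 - (3/2) x1   and   x2 = 1 - 4 x1
    # and mark their intersection at x1 = -1, x2 = 5.
    import numpy as np
    import matplotlib.pyplot as plt

    x1 = np.linspace(-3, 1, 100)
    plt.plot(x1, 7/2 - (3/2) * x1, label="3 x1 + 2 x2 = 7")
    plt.plot(x1, 1 - 4 * x1, label="4 x1 + x2 = 1")
    plt.plot(-1, 5, "ko")            # the intersection is the solution of the system
    plt.xlabel("x1")
    plt.ylabel("x2")
    plt.legend()
    plt.show()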

Each linear equation represents a hyperplane in an n-dimensional Euclidean space (R^n), and the system of m equations, Ax = b, represents m hyperplanes. The solution of the system of equations is the intersection of all of the m hyperplanes, and can be

- the empty set (no solution)
- a point (unique solution)
- a line (non-unique solution)
- a plane (non-unique solution)

Direct Methods for Solving Linear Systems

Gauss Elimination

The method of Gaussian Elimination is based on the approach to the solution of a pair of simultaneous equations whereby a multiple of one equation is subtracted from the other to eliminate one of the two unknowns (a forward elimination step). The resulting equation is then solved for the remaining unknown, and its value is substituted back into the original equations to solve for the other (a back-substitution step).

Consider the two equations from the previous example:

    3 x1 + 2 x2 = 7
    4 x1 +   x2 = 1

Divide the first equation by 3, multiply it by 4 and subtract it from the second equation, yielding the new system of equations

    3 x1 + 2 x2 = 7
        - (5/3) x2 = -25/3

Now we can solve the second equation for x2 = 5. Substituting this back into the first equation, we have

    3 x1 + 2(5) = 7,   or   x1 = -1

This approach can be extended to large sets of equations through the development of a systematic approach to forward elimination and back-substitution. Consider the following set of n equations in n unknowns:

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
    ...
    an1 x1 + an2 x2 + ... + ann xn = bn

The solution vector, x, for this system of equations remains unchanged if either of the following fundamental row operations is performed:

(1) Multiplication or division of any equation by a constant.

(2) Replacement of any equation by the sum (or difference) of that equation and any other equation.

Gauss elimination is a sequential application of these basic row operations. To begin (assuming that a11 is not zero), the first equation is multiplied by a21/a11 and subtracted from the second equation, yielding the new system:

    a11 x1 + a12 x2 + ... + a1n xn = b1
             a'22 x2 + ... + a'2n xn = b'2
    a31 x1 + a32 x2 + ... + a3n xn = b3
    ...
    an1 x1 + an2 x2 + ... + ann xn = bn

The primes denote elements which have been changed from their original values, e.g., a'22 = a22 - (a21/a11) a12. The first equation can now be multiplied by a31/a11 and subtracted from the third equation, and so on, until finally the first equation is multiplied by an1/a11 and subtracted from the last equation. During these operations, the first row is termed the pivot row and a11 is termed the pivot element. The entire first column below a11 has now been cleared (reduced) to zero and the set of equations appears as:

    a11 x1 + a12 x2 + ... + a1n xn = b1
             a'22 x2 + ... + a'2n xn = b'2
             a'32 x2 + ... + a'3n xn = b'3
             ...
             a'n2 x2 + ... + a'nn xn = b'n

The second row now becomes the pivot row and a'22 the pivot element. Multiplication of the second equation by a'32/a'22 and subtraction from the third equation, and so on, until the second equation is multiplied by a'n2/a'22 and subtracted from the last equation, clears (reduces) the second column below the main diagonal to zero:

    a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
             a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                       a''33 x3 + ... + a''3n xn = b''3
                       ...
                       a''n3 x3 + ... + a''nn xn = b''n

Similar operations with the remaining rows as pivot rows finally yield the upper triangular system

    a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
             a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                       a''33 x3 + ... + a''3n xn = b''3
                                  ...
                                  ann^(n-1) xn = bn^(n-1)

The number of primes indicates the number of times that a row was modified in the forward elimination process (on the last row the n-1 primes are abbreviated by the superscript (n-1)). The final equation in the last system of equations now yields directly the value of xn as

    xn = bn^(n-1) / ann^(n-1)

Backing up, the previous equation is

    a(n-1,n-1)^(n-2) x(n-1) + a(n-1,n)^(n-2) xn = b(n-1)^(n-2)

Since xn is known from the previous step, this value may be substituted in and the equation solved to yield

    x(n-1) = ( b(n-1)^(n-2) - a(n-1,n)^(n-2) xn ) / a(n-1,n-1)^(n-2)

Repeated back substitution, moving upwards, yields one new unknown from each equation, and the unknown vector will have been completely determined when the top equation is solved for x1. In general we can write

    xi = ( b'i - Σ_{j=i+1}^{n} a'ij xj ) / a'ii,   i = n-1, n-2, ..., 1

where a'ij and b'i denote the coefficients and right-hand sides as they stand at the end of forward elimination.
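The forward elimination and back substitution steps described above translate directly into a short routine. The following Python sketch is not from the original notes; it assumes numpy is available and that none of the pivot elements becomes zero, so no interchanges are needed. It is checked against the two-equation example, where the answer should be x1 = -1, x2 = 5.

    # Minimal Gauss elimination sketch (no pivoting): forward elimination
    # followed by back substitution. Assumes the pivots A[k, k] never vanish.
    import numpy as np

    def gauss_eliminate(A, b):
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        n = len(b)

        # Forward elimination: clear each column below the main diagonal.
        for k in range(n - 1):                # k indexes the pivot row
            for i in range(k + 1, n):
                factor = A[i, k] / A[k, k]
                A[i, k:] -= factor * A[k, k:]
                b[i] -= factor * b[k]

        # Back substitution: solve for x_n first, then move upwards.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    # Check against the two-equation example: expect x1 = -1, x2 = 5.
    A = [[3.0, 2.0], [4.0, 1.0]]
    b = [7.0, 1.0]
    print(gauss_eliminate(A, b))   # [-1.  5.]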

A flow chart for Gauss elimination is shown in the following three Figures.

Figure. Flowchart for Gauss elimination.
Figure. Flowchart for forward elimination.
Figure. Flowchart for back substitution.

One computational difficulty can arise with the standard Gauss elimination technique. The pivot element in each row is the element on the main diagonal. By the time any given row becomes the pivot row, the diagonal element in that row will have been modified from its original value, with the elements in the lower rows being recomputed the most times. Under certain circumstances, the diagonal element can become very small in magnitude compared with the rest of the elements in the pivot row, as well as perhaps being quite inaccurate. The computation step

    a'ij = aij - (aik / akk) akj

is the source of round-off error in the algorithm, since akj must be multiplied by aik/akk, which will be large if the pivot element akk is small relative to aik. Improvements to the basic Gauss elimination algorithm include careful selection of the pivot equation and pivot element. The problem can be effectively treated by interchanging columns of the matrix to shift the largest element (in magnitude) in the pivot row into the diagonal position. This largest element then becomes the pivot element. This operation is repeated with each new pivot row as necessary. Every column interchange also means an interchange of the locations of the unknowns in the solution vector. The logic necessary to accomplish these column interchanges is rather complex, but it is implemented in most standard Gauss elimination computer codes.
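A minimal Python sketch of this column-interchange idea is given below (again assuming numpy, and not from the original notes); the function name and the bookkeeping list "order" are illustrative choices. Many production codes instead interchange rows (partial pivoting), which avoids having to reorder the unknowns.

    # Gauss elimination with column interchange, as described above: before
    # eliminating with pivot row k, the column holding the largest element
    # (in magnitude) of that row is swapped into the diagonal position, and the
    # ordering of the unknowns is updated accordingly. Illustrative sketch only.
    import numpy as np

    def gauss_eliminate_column_pivot(A, b):
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        n = len(b)
        order = list(range(n))        # current ordering of the unknowns

        for k in range(n - 1):
            # Choose the pivot column: largest |A[k, j]| for j >= k.
            j = k + int(np.argmax(np.abs(A[k, k:])))
            if j != k:
                A[:, [k, j]] = A[:, [j, k]]
                order[k], order[j] = order[j], order[k]
            for i in range(k + 1, n):
                factor = A[i, k] / A[k, k]
                A[i, k:] -= factor * A[k, k:]
                b[i] -= factor * b[k]

        # Back substitution, then unscramble the unknowns into original order.
        y = np.zeros(n)
        for i in range(n - 1, -1, -1):
            y[i] = (b[i] - A[i, i + 1:] @ y[i + 1:]) / A[i, i]
        x = np.zeros(n)
        x[order] = y
        return x

    A = [[3.0, 2.0], [4.0, 1.0]]
    b = [7.0, 1.0]
    print(gauss_eliminate_column_pivot(A, b))   # [-1.  5.]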

Example. Use Gauss elimination to solve a system of three equations in the three unknowns x1, x2, and x3.

Forward elimination: Multiply the first equation by 3/2 and subtract this from the second. Multiply the first equation by 1/2 and subtract this from the third. Then multiply the new second equation by 5/2 and subtract this from the third, reducing the system to upper triangular form.

Back substitution: The last equation of the triangular system is solved directly for x3. This result is back substituted into the second equation to give x2. Now, back substituting again into the first equation gives x1. This solution (x1 = 1, x2 = 2, x3 = 3) can be checked by substituting these values back into the original equations and comparing the left- and right-hand sides of the equations.

Iterative Methods for Solving Linear Systems

Sometimes when solving engineering problems, systems of equations will result which involve large numbers of equations and unknowns (100,000s to 1,000,000s). For large systems of equations, Gauss elimination is inefficient and prone to large round-off errors. In this case it is often more convenient to use a solution method that involves a sequential process of generating solutions that converge on the true solution as the number of steps in the sequence increases.

Iterative methods of solution, as distinct from direct methods such as Gauss elimination, begin by rearranging the system of equations so that one unknown is isolated on the left-hand side of each equation:

    x1 = ( b1 - a12 x2 - a13 x3 - ... - a1n xn ) / a11
    x2 = ( b2 - a21 x1 - a23 x3 - ... - a2n xn ) / a22
    ...
    xn = ( bn - an1 x1 - an2 x2 - ... ) / ann

In general, we have

    xi = ( bi - Σ_{j≠i} aij xj ) / aii,   i = 1, ..., n

Now, if an initial guess for all the unknowns were available, we could substitute these values into the right-hand side of the previous set of equations and compute an updated guess for the unknowns. There are several ways to accomplish this, depending on how you use the most recently computed guesses.

Jacobi Method

In the Jacobi method, all of the values of the unknowns are updated before any of the new information is used in the calculations. That is, starting with the initial guess x(0), compute the next approximation of the solution as

    x1(1) = ( b1 - a12 x2(0) - a13 x3(0) - ... - a1n xn(0) ) / a11
    x2(1) = ( b2 - a21 x1(0) - a23 x3(0) - ... - a2n xn(0) ) / a22
    ...
    xn(1) = ( bn - an1 x1(0) - an2 x2(0) - ... ) / ann

or, after k iterations of this process, we have

    x1(k+1) = ( b1 - a12 x2(k) - ... - a1n xn(k) ) / a11
    x2(k+1) = ( b2 - a21 x1(k) - ... - a2n xn(k) ) / a22
    ...
    xn(k+1) = ( bn - an1 x1(k) - an2 x2(k) - ... ) / ann

More generally,

    xi(k+1) = ( bi - Σ_{j≠i} aij xj(k) ) / aii,   i = 1, ..., n

where the superscript in parentheses counts the iterations.
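A minimal Python sketch of the Jacobi iteration is given below (not from the original notes). The relative-change stopping test and the small diagonally dominant test system are added here for illustration only; they are not the tolerance or the example system used in the notes.

    # Jacobi iteration sketch: every component of the new iterate x^(k+1) is
    # computed from the old iterate x^(k) only. The stopping test on the
    # relative change between iterates is an illustrative assumption.
    import numpy as np

    def jacobi(A, b, x0, tol=1e-6, max_iter=100):
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        x = np.array(x0, dtype=float)
        n = len(b)
        for k in range(max_iter):
            x_new = np.empty(n)
            for i in range(n):
                s = A[i, :] @ x - A[i, i] * x[i]   # sum of aij * xj for j != i
                x_new[i] = (b[i] - s) / A[i, i]
            if np.max(np.abs(x_new - x)) <= tol * np.max(np.abs(x_new)):
                return x_new, k + 1
            x = x_new
        return x, max_iter

    # Usage with a small diagonally dominant system (chosen for illustration):
    A = [[10.0, 2.0, 1.0], [1.0, 5.0, 1.0], [2.0, 3.0, 10.0]]
    b = [7.0, -8.0, 6.0]
    x, iters = jacobi(A, b, x0=[0.0, 0.0, 0.0])
    print(x, iters)   # converges to approximately [1, -2, 1]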

Example. Consider the following system of equations. A convenient initial guess is usually x(0) = 0, with every unknown set to zero. In this case, substituting the guess into the right-hand sides of the rearranged equations gives the approximation after 1 iteration of the method; substituting that result back in gives the approximation after 2 iterations.

Continuing this process ultimately leads, after 20 iterations, to the solution of the system.

Gauss-Seidel Method

When applying the Jacobi method, one may realize that information is being made available at each step of the algorithm. That is, the current approximation of one of the unknowns is available for use after each step. This information could be used immediately in the calculation of the next unknown. If we implement this, our method would look like

    x1(1) = ( b1 - a12 x2(0) - a13 x3(0) - ... - a1n xn(0) ) / a11
    x2(1) = ( b2 - a21 x1(1) - a23 x3(0) - ... - a2n xn(0) ) / a22
    ...
    xn(1) = ( bn - an1 x1(1) - an2 x2(1) - ... ) / ann

or, after k iterations of this process, we have

    x1(k+1) = ( b1 - a12 x2(k) - ... - a1n xn(k) ) / a11
    x2(k+1) = ( b2 - a21 x1(k+1) - a23 x3(k) - ... - a2n xn(k) ) / a22
    ...
    xn(k+1) = ( bn - an1 x1(k+1) - an2 x2(k+1) - ... ) / ann

More generally,

    xi(k+1) = ( bi - Σ_{j<i} aij xj(k+1) - Σ_{j>i} aij xj(k) ) / aii,   i = 1, ..., n
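A corresponding Python sketch of the Gauss-Seidel iteration is given below (again not from the original notes, with the same assumed stopping test and illustrative test system as in the Jacobi sketch). Because each sweep uses the newest values immediately, it typically converges in noticeably fewer iterations than the Jacobi sketch.

    # Gauss-Seidel iteration sketch: updated values are used immediately, so x
    # is overwritten in place as the sweep proceeds. Stopping test as before.
    import numpy as np

    def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        x = np.array(x0, dtype=float)
        n = len(b)
        for k in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s = A[i, :] @ x - A[i, i] * x[i]   # uses the newest available values
                x[i] = (b[i] - s) / A[i, i]
            if np.max(np.abs(x - x_old)) <= tol * np.max(np.abs(x)):
                return x, k + 1
        return x, max_iter

    # Same illustrative system as in the Jacobi sketch; fewer iterations needed.
    A = [[10.0, 2.0, 1.0], [1.0, 5.0, 1.0], [2.0, 3.0, 10.0]]
    b = [7.0, -8.0, 6.0]
    x, iters = gauss_seidel(A, b, x0=[0.0, 0.0, 0.0])
    print(x, iters)   # converges to approximately [1, -2, 1]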

Example. Consider again the system of equations used in the Jacobi example. A convenient initial guess is usually x(0) = 0. Using the Gauss-Seidel method, substituting this guess into the rearranged equations and sweeping through them in order gives the approximation after 1 iteration of the method; a second sweep gives the approximation after 2 iterations.

Continuing this process ultimately leads, after 9 iterations, to the same solution. Notice that this method converges to the solution much faster than the Jacobi method.

Exercises

1. Given the system of equations
   a. Solve graphically.
   b. On the basis of the graphical solution, what do you expect regarding the condition of the system?
   c. Solve by elimination of unknowns.

2. Solve the following system of equations graphically:

3. Use Gauss elimination to solve the following systems of linear equations. Show all steps in the computation. Be sure to substitute your answers into the original equations to check your answers.
   a.
   b.
   c.
   d.

4. The following system of equations is designed to determine the concentrations (the c's, in g/m3) in a series of coupled reactors as a function of the amount of mass input to each reactor (right-hand sides in g/day). Solve this system using the Gauss-Seidel method to a stopping tolerance of 5%. Use an initial guess of c1 = c2 = c3 = 0.

5. The series expansion for sin x is

    sin x = x - x^3/3! + x^5/5! - x^7/7! + ...

Starting with the simplest version, sin x = x, add terms one at a time in order to estimate sin x. After each new term is added, compute the true error and the approximate relative error. Add terms until the absolute value of the approximate relative error falls below a stopping criterion of 0.001%. Use a spreadsheet.

6. If x is approximated by 123.456 and the relative error is 0.1, then what is the possible range of values for x?

7. An engineer needs 4800, 5810, and 5690 m3 of sand, fine gravel, and coarse gravel, respectively, at a construction site. There are three sources where these materials can be obtained, and the composition of the material from these sources is

              % Sand   % Fine Gravel   % Coarse Gravel
   Source 1     52          30               18
   Source 2     20          50               30
   Source 3     25          20               55

How many cubic meters must be hauled from each source in order to meet the engineer's needs?

8. Write the following systems of equations in matrix-vector form (Ax = b). Find the transpose of the matrix A.

9. Solve the following system of equations by the Gauss-Seidel method. Use an accuracy of e = 0.001.

10. Solve the following system of equations by the Gauss-Seidel method. Use an accuracy of e = 0.001.

11. Show the first 2 iterations of the Gauss-Seidel method for the solution of this system of equations, starting from the given initial guess. Compute the error after the second iteration.

12. What is meant by ill-conditioning of a set of linear equations? The slopes of the equations are so similar that the computing method cannot distinguish between the solutions.

13. The following figure shows three reactors linked by pipes. As indicated, the rate of transfer of chemicals through each pipe is equal to a flow rate (Q, with units of cubic meters per second) multiplied by the concentration of the reactor from which the pipe originates (c, with units of milligrams per cubic meter). If the system is at a steady state, the transfer into each reactor will balance the transfer out. Develop mass balance equations for the reactors and use Gauss elimination to solve the three simultaneous linear algebraic equations for the unknown concentrations c1, c2, and c3. Show your work.

Figure. Three reactors linked by pipes. The rate of mass transfer through each pipe is equal to the product of flow Q and concentration c of the reactor from which the flow originates.

14. Solve the following set of equations using the Gauss-Seidel iterative method.

Use the starting values x0 = 1, y0 = 1, and z0 = 1. Show the computations for the first 2 iterations of the Gauss-Seidel method. Be sure to show all equations for the computations.

15. Given the system of equations, make 2 iterations of the solution of this system of equations by the Gauss-Seidel method. Show all steps in the computation. Start from the given initial guess. Compute the error after the second iteration.