10 Operation Count; Numerical Linear Algebra

10.1 Introduction

Many computations are limited simply by the sheer number of required additions, multiplications, or function evaluations. If floating-point operations are the dominant cost, then the computation time is proportional to the number of mathematical operations, and therefore we should practice counting. For example, $a_0 + a_1 x + a_2 x^2$ involves two additions and three multiplications, because the square also requires a multiplication, but the equivalent formula $a_0 + (a_1 + a_2 x)x$ involves only two multiplications and two additions. More generally, evaluating $a_N x^N + \ldots + a_1 x + a_0$ term by term involves $N$ additions and $N + (N-1) + \ldots + 1 = N(N+1)/2$ multiplications, but the nested form $(\ldots(a_N x + a_{N-1})x + \ldots + a_1)x + a_0$ involves only $N$ multiplications and $N$ additions. The first form takes about $N^2/2$ flops for large $N$ and is thus $O(N^2)$; the nested form takes about $2N$ flops, which is $O(N)$. (Although the second form of polynomial evaluation is superior to the first in terms of the number of floating-point operations, in terms of roundoff it may be the other way round.)
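The nested scheme is known as Horner's method. As a concrete illustration, here is a minimal Python sketch of both evaluation orders; the function names are ours, chosen for illustration:

```python
def poly_naive(a, x):
    """Evaluate a[0] + a[1]*x + ... + a[N]*x**N term by term.
    Forming each power separately costs N + (N-1) + ... + 1 = N(N+1)/2
    multiplications and N additions."""
    total = a[0]
    for k in range(1, len(a)):
        p = x
        for _ in range(k - 1):   # k-1 multiplications to form x**k
            p *= x
        total += a[k] * p        # one more multiplication: k in total for this term
    return total

def poly_horner(a, x):
    """Nested (Horner) scheme: (...(a[N]*x + a[N-1])*x + ...)*x + a[0].
    Exactly N multiplications and N additions."""
    total = 0.0
    for ak in reversed(a):
        total = total * x + ak
    return total

coeffs = [1.0, 2.0, 3.0]                                  # a0 + a1*x + a2*x**2
print(poly_naive(coeffs, 2.0), poly_horner(coeffs, 2.0))  # 17.0 17.0
```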
Multiplying two $N \times N$ matrices obviously requires $N$ multiplications and $N-1$ additions for each element. Since there are $N^2$ elements in the product matrix, this yields a total of $N^2(2N-1)$ floating-point operations, or about $2N^3$ for large $N$, that is, $O(N^3)$.

A precise definition of the order symbol $O$ is in place (big-O notation). A function is of order $x^p$ if there is a constant $c$ such that the absolute value of the function is no larger than $c x^p$ for sufficiently large $x$. For example, $2N^2 + 4N + \log(N) + 7 - 1/N$ is $O(N^2)$. With this definition, a function that is $O(N^6)$ is also $O(N^7)$, but it is usually implied that the power is the lowest possible. The analogous definition is also applicable for small numbers, as in chapter 7. More generally, a function $f$ is of order $g(x)$ if $|f(x)| \le c\,g(x)$ for $x > x_0$.

An actual comparison of the relative speed of floating-point operations is given in table 8-I. According to that table, we do not need to distinguish between addition, subtraction, and multiplication, but divisions take somewhat longer.

10.2 Slow and fast method for the determinant of a matrix

It is easy to exceed the computational ability of even the most powerful computer. Hence methods are needed that solve a problem quickly. As a demonstration we calculate the determinant of a matrix. Doing these calculations by hand gives us a feel for the problem. Although we are ultimately interested in the $N \times N$ case, the following $3 \times 3$ matrix serves as an example:

$$A = \begin{pmatrix} 1 & 1 & 0 \\ 2 & 1 & -1 \\ -1 & 2 & 5 \end{pmatrix}$$

One way to evaluate a determinant is Cramer's rule, according to which the determinant can be calculated in terms of the determinants of submatrices. Cramer's rule using the first row yields $\det(A) = 1 \cdot (5+2) - 1 \cdot (10-1) + 0 \cdot (4+1) = 7 - 9 = -2$. For a matrix of size $N$ this requires calculating $N$ subdeterminants, each of which in turn requires $N-1$ subdeterminants, and so on. Hence the number of necessary operations is $O(N!)$.
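To make the factorial growth concrete, here is a minimal Python sketch of this recursive expansion in subdeterminants; it is for illustration only, since beyond small $N$ any such implementation is hopeless:

```python
def det_recursive(A):
    """Determinant by expansion in subdeterminants along the first row.
    Each N x N determinant spawns N determinants of size N-1: O(N!) work."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]  # drop row 0, column j
        total += (-1) ** j * A[0][j] * det_recursive(minor)
    return total

A = [[1, 1, 0],
     [2, 1, -1],
     [-1, 2, 5]]
print(det_recursive(A))   # -2.0
```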
A faster way of evaluating the determinant of a large matrix is to bring the matrix to upper triangular or lower triangular form by linear transformations. Appropriate linear transformations preserve the value of the determinant. The determinant is then the product of the diagonal elements, as is clear from the previous definition. For our example, the transforms $(\text{row2} - 2 \cdot \text{row1}) \to \text{row2}$ and $(\text{row3} + \text{row1}) \to \text{row3}$ yield zeros in the first column below the first matrix element. Then the transform $(\text{row3} + 3 \cdot \text{row2}) \to \text{row3}$ yields zeros below the second element on the diagonal:

$$\begin{pmatrix} 1 & 1 & 0 \\ 0 & -1 & -1 \\ 0 & 0 & 2 \end{pmatrix}$$

Now the matrix is in triangular form and $\det(A) = 1 \cdot (-1) \cdot 2 = -2$. An $N \times N$ matrix requires $N$ such steps; each linear transformation involves adding a multiple of one row to another row, that is, $N$ or fewer additions and $N$ or fewer multiplications. Hence this is an $O(N^3)$ procedure. Therefore calculation of the determinant by bringing the matrix to upper triangular form is far more efficient than expansion in subdeterminants. For, say, $N = 10$, the change from $N!$ to $N^3$ means a speedup of very roughly a thousand. This enormous speedup is achieved through a better choice of numerical method.
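The same determinant computed the fast way; a minimal Python/NumPy sketch of reduction to triangular form (it assumes no zero pivot is encountered along the diagonal; production routines exchange rows to avoid small pivots):

```python
import numpy as np

def det_by_elimination(A):
    """Determinant by reduction to upper triangular form, O(N**3).
    Adding a multiple of one row to another leaves the determinant
    unchanged, so det(A) is the product of the resulting diagonal."""
    U = np.array(A, dtype=float)
    n = len(U)
    for i in range(n):
        for j in range(i + 1, n):
            m = U[j, i] / U[i, i]     # assumes a nonzero pivot; real codes pivot
            U[j, i:] -= m * U[i, i:]
    return np.prod(np.diag(U))

A = [[1, 1, 0],
     [2, 1, -1],
     [-1, 2, 5]]
print(det_by_elimination(A))   # -2.0
```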
10.3 Non-intuitive operation counts in linear algebra

We all know how to solve a linear system of equations by hand, by extracting one variable at a time and repeatedly substituting it in all remaining equations, a method called Gauss elimination. This is essentially the same as what we have done above in eliminating columns. The following symbolizes the resulting triangular form for a $4 \times 4$ matrix, where stars indicate nonzero elements and blank elements are zero:

    * * * *
      * * *
        * *
          *

Eliminating the first column takes about $2N^2$ floating-point operations, the second column $2(N-1)^2$, the third column $2(N-2)^2$, and so on. This yields a total of about $2N^3/3$ floating-point operations. (One way to see this is to approximate the sum by an integral, and the integral of $N^2$ is $N^3/3$.)

Once triangular form is reached, the value of one variable is known and can be substituted in all other equations, and so on. These substitutions require only $O(N^2)$ operations. A count of $2N^3/3$ is less than the approximately $2N^3$ operations for matrix multiplication. Solving a linear system is faster than multiplying two matrices!

During Gauss elimination the right-hand side of a system of linear equations is transformed along with the matrix. Many right-hand sides can be transformed simultaneously, but they need to be known in advance. Inversion of a square matrix can be achieved by solving a system with $N$ different right-hand sides. Since the right-hand side(s) can be carried along in the transformation process, this is still $O(N^3)$. Given $Ax = b$, the solution $x = A^{-1}b$ can be obtained by multiplying the inverse of $A$ with $b$, but it is not necessary to invert a matrix to solve a linear system. Solving a linear system is faster than inverting a matrix, and inverting a matrix is faster than multiplying two matrices.

So far we have only considered efficiency. It is also desirable to avoid dividing by too small a number, to optimize roundoff behavior, and to achieve parallel efficiency. Since the solution of linear systems is an important and ubiquitous application, all these issues have received detailed attention, and elaborate routines are available.
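The "solve, do not invert" advice is easy to check in practice. A minimal NumPy sketch (np.linalg.solve performs elimination with pivoting through LAPACK; the exact timings are machine dependent):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
N = 1000
A = rng.standard_normal((N, N))
b = rng.standard_normal(N)

t0 = time.perf_counter()
x1 = np.linalg.solve(A, b)    # elimination with pivoting (LAPACK), O(N**3)
t1 = time.perf_counter()
x2 = np.linalg.inv(A) @ b     # explicit inverse: also O(N**3), but more work
t2 = time.perf_counter()

print(f"solve: {t1-t0:.3f} s   invert then multiply: {t2-t1:.3f} s")
print("max difference between the two solutions:", np.abs(x1 - x2).max())
```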
[Figure 10-1: Execution time (seconds) versus problem size $N$ for a program solving a system of $N$ linear equations in $N$ variables. For comparison, the dashed line shows an increase proportional to $N^3$.]

Figure 10-1 shows the actual time of execution for a program that solves a linear system of $N$ equations in $N$ variables. First of all, note the tremendous computational power of computers: solving a linear system in 1000 variables, requiring about 600 million floating-point operations, takes less than one second. (Using my workstation bought around 2010 and Matlab, I can solve a linear system of about 3500 equations in one second. Matlab automatically took advantage of all 4 CPU cores.) The increase in computation time with the number of variables is not as ideal as expected from the operation count, because time is required not only for arithmetic operations but also for data movement. For this particular implementation the execution time is larger when $N$ is a power of two. Other wiggles in the graph arise because the execution time is not exactly the same every time the program is run.

Table 10-I shows the operation count for several important problems. Operation count also goes under the name of computational cost or computational complexity (of an algorithm).

problem                                                    operation count    memory count
Solving a system of N linear equations in N variables      O(N^3)             O(N^2)
Inversion of an N x N matrix                               O(N^3)             O(N^2)
Inversion of a tridiagonal N x N matrix                    O(N)               O(N)
Sorting N real numbers                                     O(N log N)         O(N)
Fourier transform of N equidistant points                  O(N log N)         O(N)

Table 10-I: Operation counts and required memory for several important problems solved by standard algorithms.

A matrix where most elements are zero is called sparse. An example is the tridiagonal matrix, which has the following shape:

    * *
    * * *
      * * *
        * * *
          * *

A sparse matrix can be solved with appropriate algorithms much faster and with less storage than a full matrix. For example, a tridiagonal system of equations can be solved with $O(N)$ effort, as compared to $O(N^3)$ for a full matrix.
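To illustrate the $O(N)$ count, here is a sketch of the standard tridiagonal solver, often called the Thomas algorithm (shown without pivoting; the argument names are ours):

```python
import numpy as np

def solve_tridiagonal(sub, diag, sup, rhs):
    """Solve a tridiagonal system in O(N) operations (Thomas algorithm,
    no pivoting): sub/sup are the off-diagonals (length N-1), diag the
    diagonal and rhs the right-hand side (length N)."""
    n = len(rhs)
    d = np.array(diag, dtype=float)
    r = np.array(rhs, dtype=float)
    for i in range(1, n):                    # forward elimination
        m = sub[i - 1] / d[i - 1]
        d[i] -= m * sup[i - 1]
        r[i] -= m * r[i - 1]
    x = np.empty(n)
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = (r[i] - sup[i] * x[i + 1]) / d[i]
    return x

# 4 x 4 example: 2 on the diagonal, -1 on both off-diagonals
print(solve_tridiagonal([-1, -1, -1], [2, 2, 2, 2], [-1, -1, -1], [1, 0, 0, 1]))
# [1. 1. 1. 1.]
```

A single forward sweep and a single backward sweep, each touching every row once, yield the linear operation count.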
The ratio of floating-point operations to bytes of main memory accessed is called the arithmetic intensity. Large problems with low arithmetic intensity are limited by memory bandwidth, while large problems with high arithmetic intensity are floating-point limited. For example, a matrix transpose has an arithmetic intensity of zero, matrix addition of double-precision numbers $1/(3 \times 8)$ (very low), and matrix multiplication $2N/24$ (high, provided each matrix element is brought in from main memory only once).

10.4 Data locality

Calculations may not be limited by flops per second but by data movement; hence we may also want to keep track of, and minimize, access to main memory. For multiplication of two $N \times N$ matrices, simple row-times-column multiplication involves $2N$ memory pulls for each of the $N^2$ output elements. If the variables are 8-byte double-precision numbers, the arithmetic intensity is $(2N-1)/(8(2N+1)) \approx 1/8$ flops/byte. Hence the bottleneck may be memory bandwidth.

Tiled matrix multiplication: Data traffic from main memory can be reduced by tiling the matrix. The idea is to divide the matrix into $m \times m$ tiles small enough that they fit into a fast (usually on-chip) memory. For each pair of input tiles, there are $2m^2$ movements out of and $m^2$ into the (slow) main memory. In addition, the partial sums, which have to be stored temporarily in main memory, have to be brought in again for all but the first block; that is about another $m^2$ of traffic per tile. For each output tile, we have to go through $N/m$ pairs of input tiles. In total, this amounts to $(N/m)^3 \cdot 4m^2 = 4N^3/m$ memory movements, or $4N/m$ for each matrix element, versus $2N$ for the naive method. This reduces data access to the main memory and turns matrix multiplication into a truly floating-point limited operation.
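A sketch of the tiling idea in Python/NumPy, for illustration only: the fast memory is merely implied by the tile-sized slices, and in practice the @ operator already calls a heavily optimized BLAS routine:

```python
import numpy as np

def matmul_tiled(A, B, m=64):
    """Multiply N x N matrices in m x m tiles. All three tiles of a
    partial product fit in fast memory at once, so main-memory traffic
    drops to about 4N/m accesses per matrix element."""
    N = A.shape[0]
    C = np.zeros((N, N))
    for i in range(0, N, m):
        for j in range(0, N, m):
            for k in range(0, N, m):
                # one tile of C accumulates the product of two input tiles
                C[i:i+m, j:j+m] += A[i:i+m, k:k+m] @ B[k:k+m, j:j+m]
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((256, 256))
B = rng.standard_normal((256, 256))
print(np.abs(matmul_tiled(A, B) - A @ B).max())   # ~1e-12, same result
```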
There are enough highly optimized matrix multiplication routines around that we never have to write one of our own, but the same concept, grouping data to reduce traffic to main memory, can be applied to various problems with low arithmetic intensity.

Recommended Reading: Golub & Van Loan, Matrix Computations is a standard work on numerical linear algebra.
