Numerical Matrix Analysis




Numerical Matrix Analysis, Lecture Notes #10: Conditioning and Stability
Peter Blomgren, blomgren.peter@gmail.com
Department of Mathematics and Statistics, Dynamical Systems Group, Computational Sciences Research Center, San Diego State University, San Diego, CA 92182-7720
http://terminus.sdsu.edu/
Spring 2016

Outline

1. Finite Precision: IEEE Binary Floating Point (from Math 541); Non-representable Values as a Source of Errors
2. Floating Point Arithmetic: Theorem and Notation; the Fundamental Axiom of Floating Point Arithmetic; Example
3. Stability: Introduction (What is the correct answer?); Accuracy; Absolute and Relative Error; Stability and Backward Stability

Finite Precision: IEEE Binary Floating Point (from Math 541)

A 64-bit real number, "double". The Binary Floating Point Standard 754-1985 (IEEE, The Institute for Electrical and Electronics Engineers) specifies the following layout for a 64-bit real number:

    s  c10 c9 ... c1 c0  m51 m50 ... m1 m0

    Symbol  Bits  Description
    s       1     The sign bit: 0 = positive, 1 = negative
    c       11    The characteristic (exponent)
    m       52    The mantissa

    r = (-1)^s · 2^(c-1023) · (1 + f),   c = Σ_{k=0}^{10} c_k 2^k,   f = Σ_{k=0}^{51} m_k 2^(k-52)

IEEE-754-1985 Special Signals

In order to be able to represent zero, ±∞, and NaN (not-a-number), the following special signals are defined in the IEEE-754-1985 standard:

    Type               S (1 bit)   C (11 bits)   M (52 bits)
    signaling NaN      u           2047 (max)    .0uuuuu...u (*)
    quiet NaN          u           2047 (max)    .1uuuuu...u
    negative infinity  1           2047 (max)    .000000...0
    positive infinity  0           2047 (max)    .000000...0
    negative zero      1           0             .000000...0
    positive zero      0           0             .000000...0

(*) with at least one 1 bit. From http://www.freesoft.org/cie/rfc/1832/32.htm

Examples: Finite Precision

    r = (-1)^s · 2^(c-1023) · (1 + f),   c = Σ_{k=0}^{10} c_k 2^k,   f = Σ_{k=0}^{51} m_k 2^(k-52)

Example #1

    0, 10000000000, 100000000000000000000000000000000000000000000000000

    r1 = (-1)^0 · 2^(2^10 - 1023) · (1 + 1/2) = 2^1 · 3/2 = 3.0

Example #2 (The Smallest Positive Real Number)

    0, 00000000000, 000000000000000000000000000000000000000000000000001

    r2 = (-1)^0 · 2^(0 - 1023) · (1 + 2^-52) ≈ 1.113 × 10^-308

Examples: Finite Precision

Example #3 (The Largest Positive Real Number)

    0, 11111111110, 111111111111111111111111111111111111111111111111111

    r3 = (-1)^0 · 2^1023 · (1 + 1/2 + 1/2^2 + ... + 1/2^51 + 1/2^52)
       = 2^1023 · (2 - 2^-52) ≈ 1.798 × 10^308
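The bit layout and formula above can be checked mechanically. The notes work in Matlab; the following is a minimal Python sketch (Python floats are the same IEEE-754 binary64 format, and the helper name decode_double is illustrative, not from the notes) that unpacks a double into its sign, characteristic, and mantissa fields and reassembles r:

```python
import struct

def decode_double(x):
    """Split an IEEE-754 binary64 value into (s, c, m): sign bit,
    11-bit characteristic, and 52-bit mantissa."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    s = bits >> 63
    c = (bits >> 52) & 0x7FF
    m = bits & ((1 << 52) - 1)
    return s, c, m

s, c, m = decode_double(3.0)
# Reassemble r = (-1)^s * 2^(c-1023) * (1 + f) with f = m / 2^52:
r = (-1) ** s * 2.0 ** (c - 1023) * (1 + m / 2 ** 52)
print(s, c, m)   # 0 1024 2251799813685248  (c = 2^10, f = 1/2, as in Example #1)
print(r)         # 3.0
```

Running this on 3.0 reproduces Example #1: s = 0, c = 2^10 = 1024, and f = 1/2.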

That's Quite a Range!

In summary, we can represent

    { ±0, ±1.113 × 10^-308, ..., ±1.798 × 10^308, ±∞, NaN }

and a whole bunch of numbers in

    (-1.798 × 10^308, -1.113 × 10^-308) ∪ (1.113 × 10^-308, 1.798 × 10^308)

Bottom line: Over- or under-flowing is usually not a problem in IEEE floating point arithmetic. The problem in scientific computing is what we cannot represent.

Fun with Matlab... Integers

    (2^53 + 2) - 2^53        = 2
    (2^53 + 2) - (2^53 + 1)  = 2
    (2^53 + 1) - 2^53        = 0
    2^53 - (2^53 - 1)        = 1

    realmax = 1.7977 × 10^308
    realmin = 2.2251 × 10^-308
    eps     = 2.2204 × 10^-16

The smallest not-exactly-representable integer is (2^53 + 1) = 9,007,199,254,740,993.
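The Matlab experiments above reproduce in any IEEE-754 binary64 environment; a Python sketch:

```python
import sys

# Above 2^53 the gap between consecutive doubles exceeds 1, so not every
# integer is representable; the Matlab results reproduce in Python:
print((2.0**53 + 2) - 2.0**53)           # 2.0
print((2.0**53 + 2) - (2.0**53 + 1))     # 2.0: the literal 2^53 + 1 already rounded to 2^53
print((2.0**53 + 1) - 2.0**53)           # 0.0
print(2.0**53 - (2.0**53 - 1))           # 1.0

print(sys.float_info.max)                # realmax: 1.7976931348623157e+308
print(sys.float_info.min)                # realmin: 2.2250738585072014e-308
print(sys.float_info.epsilon)            # eps:     2.220446049250313e-16
```

The second line is the interesting one: 2^53 + 1 cannot be stored, so it rounds (ties-to-even) down to 2^53 before the subtraction ever happens.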

Something is Missing: Gaps in the Representation (1 of 3)

There are gaps in the floating-point representation! Given the representation

    0, 00000000000, 000000000000000000000000000000000000000000000000001

for the value v1 = 2^-1023 · (1 + 2^-52), the next larger floating-point value is

    0, 00000000000, 000000000000000000000000000000000000000000000000010

i.e. the value v2 = 2^-1023 · (1 + 2^-51).

The difference between these two values is 2^-1023 · 2^-52 = 2^-1075 (≈ 10^-324). Any number in the interval (v1, v2) is not representable!

Something is Missing: Gaps in the Representation (2 of 3)

A gap of 2^-1075 doesn't seem too bad... However, the size of the gap depends on the value itself... Consider r = 3.0:

    0, 10000000000, 100000000000000000000000000000000000000000000000000

and the next value

    0, 10000000000, 100000000000000000000000000000000000000000000000001

Here, the difference is 2 · 2^-52 = 2^-51 (≈ 10^-16). In general, in the interval [2^n, 2^(n+1)] the gap is 2^(n-52).
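The gap sizes can be measured directly with math.nextafter (Python 3.9+), which returns the adjacent representable double; a small sketch:

```python
import math

# The gap to the next representable double grows with the magnitude of
# the value: in the interval [2^n, 2^(n+1)] the spacing is 2^(n-52).
for x in (1.0, 3.0, 2.0**100):
    gap = math.nextafter(x, math.inf) - x
    print(x, "gap =", gap)

# For r = 3.0 (in [2^1, 2^2]) the gap is 2^(1-52) = 2^-51, as on the slide:
assert math.nextafter(3.0, math.inf) - 3.0 == 2.0**-51
```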

Something is Missing: Gaps in the Representation (3 of 3)

At the other extreme, the difference between

    0, 11111111110, 111111111111111111111111111111111111111111111111110

and the next value

    0, 11111111110, 111111111111111111111111111111111111111111111111111

is 2^1023 · 2^-52 = 2^971 ≈ 1.996 × 10^292. That's a fairly significant gap!!! (A number large enough to comfortably count all the particles in the universe...)

See, e.g. http://home.earthlink.net/~mrob/pub/math/numbers-10.html

The Relative Gap

It makes more sense to factor the exponent out of the discussion and talk about the relative gap:

    Exponent   Gap       Relative Gap (Gap/Exponent)
    2^-1023    2^-1075   2^-52 ≈ 2.22 × 10^-16
    2^1        2^-51     2^-52
    2^1023     2^971     2^-52

Any difference between numbers smaller than the local gap is not representable, e.g. any number in the interval

    [3.0, 3.0 + 2^-51)

is represented by the value 3.0.

Floating Point Arithmetic: Theorem and Notation. The Floating Point Theorem, ε_mach

Theorem: Floating point numbers represent intervals!

Notation: We let fl(x) denote the floating point representation of x ∈ R.

The relative gap defines ε_mach: for all x ∈ R, there exists ε with |ε| ≤ ε_mach, such that

    fl(x) = x(1 + ε).

In 64-bit floating point arithmetic, ε_mach ≈ 2.22 × 10^-16. In Matlab, the command eps returns this value.

Let the symbols ⊕, ⊖, ⊗, and ⊘ denote the floating-point operations: addition, subtraction, multiplication, and division.
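The bound fl(x) = x(1 + ε) with |ε| ≤ ε_mach can be checked for a value whose binary expansion is infinite, such as x = 1/10, by comparing against exact rational arithmetic. A Python sketch, using ε_mach = eps as defined in these notes:

```python
import sys
from fractions import Fraction

eps_mach = sys.float_info.epsilon     # 2^-52 ~ 2.22e-16, as in the notes

# x = 1/10 has an infinite binary expansion, so fl(x) != x.  Fraction(0.1)
# recovers the exact rational value of the stored double.
x = Fraction(1, 10)
fl_x = Fraction(0.1)
rel_err = abs(fl_x - x) / x
print(float(rel_err))                 # ~5.5e-17, comfortably below eps_mach
assert rel_err <= eps_mach
```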

The Fundamental Axiom of Floating Point Arithmetic

All floating-point operations are performed up to some precision, i.e.

    x ⊕ y = fl(x + y),   x ⊖ y = fl(x - y),   x ⊗ y = fl(x · y),   x ⊘ y = fl(x / y)

This, paired with our definition of ε_mach, gives us:

Axiom (The Fundamental Axiom of Floating Point Arithmetic): For all x, y ∈ F (where F is the set of floating point numbers), there exists ε with |ε| ≤ ε_mach, such that

    x ⊕ y = (x + y)(1 + ε),   x ⊖ y = (x - y)(1 + ε),   x ⊗ y = (x · y)(1 + ε),   x ⊘ y = (x / y)(1 + ε)

That is, every operation of floating point arithmetic is exact up to a relative error of size at most ε_mach.

Example: Floating Point Error, Scaled by 10^10

Consider the following polynomial on the interval [1.92, 2.08]:

    p(x) = (x - 2)^9
         = x^9 - 18x^8 + 144x^7 - 672x^6 + 2016x^5 - 4032x^4 + 5376x^3 - 4608x^2 + 2304x - 512

[Figure: p(x) on [1.92, 2.08], y-axis from -1.5 to 1.5 (scaled by 10^10). Evaluated via the expanded form, p(x) is a noisy, oscillating band; evaluated via (x - 2)^9 it is a smooth curve through zero.]
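The figure's behavior is easy to reproduce: near x = 2 the expanded form loses essentially all significant digits to cancellation, while the factored form stays accurate. A Python sketch (function names are illustrative, not from the notes):

```python
# Near x = 2 the expanded form of p(x) = (x-2)^9 suffers catastrophic
# cancellation between terms of size ~500, leaving rounding noise of
# roughly eps * |terms| (~1e-13) instead of the true value ~1e-27.

def p_expanded(x):
    return (x**9 - 18*x**8 + 144*x**7 - 672*x**6 + 2016*x**5
            - 4032*x**4 + 5376*x**3 - 4608*x**2 + 2304*x - 512)

def p_factored(x):
    return (x - 2.0)**9

x = 2.001
print(p_factored(x))   # ~1e-27, essentially exact
print(p_expanded(x))   # dominated by rounding noise, many orders of magnitude off
```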

Stability: Introduction (What is the correct answer?), Accuracy, Absolute and Relative Error, Stability and Backward Stability

Stability: Introduction (1 of 3)

With the knowledge that (floating point) errors happen, we have to re-define the concept of the right answer.

Previously, in the context of conditioning, we defined a mathematical problem as a map f : X → Y, where X ⊆ C^n is the set of data (input), and Y ⊆ C^m is the set of solutions.

Stability: Introduction (2 of 3)

We now define an implementation of an algorithm on a floating-point device, where F satisfies the fundamental axiom of floating point arithmetic, as another map

    f~ : X → Y

i.e. f~(x) ∈ Y is a numerical solution of the problem.

Wiki-History: Pentium FDIV bug (1994). The Pentium FDIV bug was a bug in Intel's original Pentium FPU. Certain FP division operations performed with these processors would produce incorrect results. According to Intel, there were a few missing entries in the lookup table used by the divide operation algorithm. Although encountering the flaw was extremely rare in practice (Byte Magazine estimated that 1 in 9 billion FP divides with random parameters would produce inaccurate results), both the flaw and Intel's initial handling of the matter were heavily criticized. Intel ultimately recalled the defective processors.

Stability: Introduction (3 of 3)

The task at hand is to make useful statements about f~(x). Even though f~(x) is affected by many factors (roundoff errors, convergence tolerances, competing processes on the computer, etc.), we will be able to make (maybe surprisingly) clear statements about f~(x).

Note that depending on the memory model, the previous state of a memory location may affect the result, e.g. in the case of cancellation errors: if we subtract two 16-digit numbers with 13 common leading digits, we are left with 3 digits of valid information. We tend to view the remaining 13 digits as random. But really, there is nothing random about what happens inside the computer (we hope!); the "randomness" will depend on what happened previously...
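The cancellation scenario described above, in a short Python sketch:

```python
# Two 16-digit numbers agreeing in their first 13 digits: the subtraction
# itself is exact, but only ~3 digits of the result carry real information;
# the rest is leftover representation noise from a and b.
a = 3.141592653589793
b = 3.141592653589000
diff = a - b
print(diff)    # ~7.93e-13; digits beyond the third are not meaningful
```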

Accuracy

The absolute error of a computation is

    ‖f~(x) - f(x)‖

and the relative error is

    ‖f~(x) - f(x)‖ / ‖f(x)‖

This latter quantity will be our standard measure of error. If f~ is a good algorithm, we expect the relative error to be small, of the order ε_mach. We say that f~ is accurate if, for all x ∈ X,

    ‖f~(x) - f(x)‖ / ‖f(x)‖ = O(ε_mach)
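The relative error can be computed against an exact reference using rational arithmetic; a Python sketch for the sum 0.1 + 0.2, whose exact value is 3/10:

```python
from fractions import Fraction

# Relative error |f~(x) - f(x)| / |f(x)| for the computed sum 0.1 + 0.2;
# exact rationals serve as the reference f(x) = 3/10.
computed = 0.1 + 0.2                  # = 0.30000000000000004 in binary64
exact = Fraction(3, 10)
rel_err = abs(Fraction(computed) - exact) / exact
print(float(rel_err))                 # ~1.5e-16, i.e. O(eps_mach)
```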

Interpretation: O(ε_mach)

Since all floating point errors are functions of ε_mach (the relative error in each operation is bounded by ε_mach), the relative error of the algorithm must be a function of ε_mach:

    ‖f~(x) - f(x)‖ / ‖f(x)‖ = e(ε_mach)

The statement e(ε_mach) = O(ε_mach) means that there exists C ∈ R+ such that

    e(ε_mach) ≤ C · ε_mach,  as ε_mach → 0

In practice ε_mach is fixed, and the notation means that if we were to decrease ε_mach, then our error would decrease at least proportionally to ε_mach.

Stability

If the problem f : X → Y is ill-conditioned, then the accuracy goal

    ‖f~(x) - f(x)‖ / ‖f(x)‖ = O(ε_mach)

may be unreasonably ambitious. Instead we aim for stability. We say that f~ is a stable algorithm if, for all x ∈ X,

    ‖f~(x) - f(x~)‖ / ‖f(x~)‖ = O(ε_mach)

for some x~ with

    ‖x~ - x‖ / ‖x‖ = O(ε_mach)

A stable algorithm gives approximately the right answer, to approximately the right question.

Backward Stability

For many algorithms we can tighten this somewhat vague concept of stability. An algorithm f~ is backward stable if, for all x ∈ X,

    f~(x) = f(x~)

for some x~ with

    ‖x~ - x‖ / ‖x‖ = O(ε_mach)

A backward stable algorithm gives exactly the right answer, to approximately the right question.

Next time: Examples of stable and unstable algorithms; stability of Householder triangularization.