Lesson 2: Threshold Logic

Threshold Unit or Threshold Logic Unit (TLU)

Activation: a = w₁x₁ + w₂x₂ + ... + wₙxₙ

Example: with weights w = (0.5, 1.0, 1.2) and inputs x = (1, 1, 0), the activation is a = (0.5 × 1) + (1.0 × 1) + (1.2 × 0) = 0.5 + 1.0 = 1.5.

To emulate the generation of action potentials, we need a threshold value θ:

y = 1 if a ≥ θ
y = 0 if a < θ

In the previous example, if θ = 1, then y would equal 1 (because a = 1.5 ≥ 1).
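As a concrete illustration, here is a minimal Python sketch of a TLU (the function name and layout are ours, not from the lesson):

```python
def tlu(weights, threshold, inputs):
    """Threshold Logic Unit: fires (returns 1) when the weighted
    sum of the inputs reaches the threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# The worked example from the text: w = (0.5, 1.0, 1.2), x = (1, 1, 0), theta = 1
print(tlu((0.5, 1.0, 1.2), 1.0, (1, 1, 0)))  # activation = 1.5 >= 1, so prints 1
```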

Resilience to Noise and Hardware Failure

Consider a TLU with weights w = (0, 1) and threshold θ = 0.5. It fires (outputs 1) only when x₂ is 1:

x₁  x₂  Activation  Output
0   0   0           0
0   1   1           1
1   0   0           0
1   1   1           1

Now, suppose the hardware which implements the TLU is faulty, giving us weights encoded as (0.2, 0.8). The table above is then revised as follows:

x₁  x₂  Activation  Output
0   0   0           0
0   1   0.8         1
1   0   0.2         0
1   1   1.0         1

The activations have changed, but the outputs are the same. Thus: changes in the activation, as long as they do not cross the threshold, produce no change in the output.

Non-Linear Systems

In a linear system, the output is proportionally related to the input: small/large changes in the input always produce correspondingly small/large changes in the output. Non-linear systems do not obey this proportionality constraint, so the magnitude of a change in the output does not necessarily reflect the magnitude of the change in the input.

Example: in the TLU above, an activation change from 0.0 to 0.2 produces no change in the output, whereas an activation change from 0.49 to 0.51 changes the output from 0 to 1, because 0.51 crosses the threshold 0.5.
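A quick way to verify this resilience is to run the intended and the faulty TLU side by side over all four input patterns (a sketch, using the weights and threshold above):

```python
def tlu(weights, threshold, inputs):
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# Compare the intended TLU with the faulty one on every input pattern.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    clean = tlu((0.0, 1.0), 0.5, x)    # intended weights (0, 1)
    faulty = tlu((0.2, 0.8), 0.5, x)   # weights corrupted by faulty hardware
    print(x, clean, faulty)            # the two outputs agree on every pattern
```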

Similarly, if the input signals become degraded in some way, due to noise or partial loss, correct outputs still occur. For example, suppose that instead of input value 1 we receive 0.8, and instead of 0 we receive 0.2. The revised table appears below:

x₁   x₂   Activation  Output
0.2  0.2  0.2         0
0.2  0.8  0.8         1
0.8  0.2  0.2         0
0.8  0.8  0.8         1

This is the non-linear behavior of TLUs at work: TLUs are robust in the presence of noisy or corrupted input signals.

Graceful degradation: in a large network, as the degree of hardware and/or signal degradation increases, the number of TLUs giving incorrect results will increase gradually as well. Contrast this with a conventional computer, where a single fault typically causes outright failure.

Geometric Interpretation of TLU Action

A TLU separates its input patterns into two categories according to its binary response (0 or 1) to each pattern. Consider the TLU with weights w = (1, 1) and threshold θ = 1.5, which classifies its input patterns into two groups, those that give output 0 and those that give output 1:

x₁  x₂  Activation  Output
0   0   0           0
0   1   1           0
1   0   1           0
1   1   2           1

The pattern space of this TLU is the four corners of the unit square, with (1, 1) in one class and the other three corners in the other.
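A small sketch that enumerates this pattern space with the weights and threshold from the table above:

```python
def tlu(weights, threshold, inputs):
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# Enumerate the pattern space of the TLU with w = (1, 1), theta = 1.5.
for x1 in (0, 1):
    for x2 in (0, 1):
        y = tlu((1.0, 1.0), 1.5, (x1, x2))
        print((x1, x2), "-> class", y)
# Only (1, 1) falls in class 1; the two classes lie on opposite
# sides of the line x1 + x2 = 1.5.
```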

The Linear Separation of Classes

Inputs: x₁, x₂. Activation: a. Threshold: θ. The critical condition for classification occurs when the activation equals the threshold:

w₁x₁ + w₂x₂ = θ

Subtracting w₁x₁ from both sides:

w₂x₂ = −w₁x₁ + θ

Dividing both sides by w₂ gives:

x₂ = −(w₁/w₂)x₁ + (θ/w₂)

This is of the general form y = mx + b. Recall that this is the equation of a line, where m is the slope and b is the y-intercept.

Our previous example revisited: w₁ = 1, w₂ = 1, θ = 1.5, so m = −(w₁/w₂) = −1 and b = θ/w₂ = 1.5/1 = 1.5, giving the line y = −x + 1.5, which passes through the points:

x    y
1.5  0
0    1.5
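In code, recovering the slope and intercept of this decision boundary from the weights and threshold might look like this (the helper name decision_line is ours):

```python
def decision_line(w1, w2, theta):
    """Slope and intercept of the boundary w1*x1 + w2*x2 = theta,
    solved for x2 (assumes w2 != 0)."""
    return -w1 / w2, theta / w2

m, b = decision_line(1.0, 1.0, 1.5)
print(m, b)  # -1.0 1.5, i.e. the line y = -x + 1.5
```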

The two classes of the TLU output are separated by this line. This will always be the case: in Rⁿ we will have separating hyperplanes. TLUs are linear separators, and the patterns they classify are linearly separable.

Vectors

A vector has a magnitude and a direction. A vector v is usually written in boldface (in Europe, often underlined or with an arrow). We denote a vector's magnitude by ‖v‖, sometimes by r. A vector can be defined by a pair of numbers (r, θ), where θ is the angle the vector makes with some reference direction (e.g., the x-axis).

Alternate representation: the Cartesian coordinate system, e.g., v = (v₁, v₂), where v₁ and v₂ are the components of the vector. With this representation, a vector can be considered an ordered pair, or more generally an ordered list of numbers. Note that the order is important: in general, (v₁, v₂) ≠ (v₂, v₁).

Scalar multiplication of a vector: in 2D, kv = (kv₁, kv₂).

Generalizing to n dimensions, we have kv = (kv₁, kv₂, ..., kvₙ).

Vector Addition

Two vectors may be added in 2D by appending one to the end of the other. Note that a vector may be drawn anywhere in the space as long as its magnitude and direction are preserved. In terms of components, if w = u + v, then w₁ = u₁ + v₁ and w₂ = u₂ + v₂. In n dimensions: w = u + v = (u₁ + v₁, u₂ + v₂, ..., uₙ + vₙ). Note that addition is commutative: u + v = v + u.

Vector Subtraction

Subtraction works component-wise in the same way: u − v = (u₁ − v₁, ..., uₙ − vₙ).

The Length of a Vector

In 2D, ‖v‖ = sqrt(v₁² + v₂²). In n dimensions, ‖v‖ = sqrt(Σ vᵢ²), with i running from 1 to n.

Comparing Vectors: The Inner Product, Geometric Form

It is useful to have some measure of how well aligned two vectors are, i.e., to know whether they point in the same or opposite directions. The inner product is v · w = ‖v‖‖w‖ cos θ, where θ is the angle between the two. This product is also known as the scalar product. Note that an angle θ > 180° is equivalent to the angle 360° − θ.
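A short sketch of these vector operations in plain Python (the helper names are ours):

```python
import math

def add(u, v):
    return [ui + vi for ui, vi in zip(u, v)]

def scale(k, v):
    return [k * vi for vi in v]

def length(v):
    return math.sqrt(sum(vi * vi for vi in v))

u, v = [1.0, 1.0], [2.0, 0.0]
print(add(u, v))    # [3.0, 1.0]
print(scale(2, u))  # [2.0, 2.0]
print(length(u))    # 1.4142... = sqrt(2)
```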

The Inner Product, Algebraic Form

Given v = (1, 1) and w = (2, 0) (note the angle between them is 45°), observe that ‖v‖ = sqrt(1² + 1²) = sqrt(2) and ‖w‖ = sqrt(2² + 0²) = 2, so v · w = sqrt(2) × 2 × cos 45° = sqrt(2) × 2 × (1/sqrt(2)) = 2.

Alternatively, we have v · w = v₁w₁ + v₂w₂ = (1 × 2) + (1 × 0) = 2. This is also called the dot product. In n dimensions, v · w = Σ vᵢwᵢ, with i running from 1 to n.

If the sum is positive, the vectors are lined up to some extent. If the sum is zero, the vectors are at right angles. If the sum is negative, the vectors are pointing away from each other.

Vector Projection

How much of v lies in the direction of w? Drop a perpendicular from v onto w; the length of this projection is v_w = ‖v‖ cos θ. Alternatively, we may write v_w = (‖v‖‖w‖ cos θ)/‖w‖ = (v · w)/‖w‖.
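As a sketch, the two equivalent ways of computing the projection, geometric (via the angle) and algebraic (via the dot product), using the example vectors above:

```python
import math

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

def length(v):
    return math.sqrt(dot(v, v))

v, w = [1.0, 1.0], [2.0, 0.0]
theta = math.radians(45)            # angle between v and w
print(length(v) * math.cos(theta))  # geometric form: |v| cos(theta), approx. 1.0
print(dot(v, w) / length(w))        # algebraic form: (v . w)/|w| = 1.0
```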

TLUs and Linear Separability Revisited

The activation a of an n-input TLU is given by a = w · x. What happens when the activation equals the threshold? Let n = 2. We have

w₁x₁ + w₂x₂ = θ, i.e., w · x = θ.  (*)

For an arbitrary x, the projection of x onto w is x_w = (w · x)/‖w‖. If the constraint in (*) is imposed, we have x_w = θ/‖w‖.

So, assuming w and θ are constant, the projection x_w is constant, and in 2D, x must lie along a line perpendicular to the weight vector. Therefore, in 2D, the relation w · x = θ defines a straight line; generalizing to n dimensions, the counterpart is a hyperplane. When x lies on the hyperplane, w · x = θ and hence y = 1. Suppose x_w > θ/‖w‖; then x must lie in region A, on the side the weight vector points toward, since w · x > θ and y = 1. If x_w < θ/‖w‖, then the projection is shorter, and x must lie in region B, where y = 0.

The TLU is a linear classifier. If patterns cannot be separated by a hyperplane, then they cannot be classified with a single TLU.
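To close the loop, a sketch showing that comparing the projection x_w against θ/‖w‖ reproduces the TLU output for the earlier example (w = (1, 1), θ = 1.5):

```python
import math

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

def length(v):
    return math.sqrt(dot(v, v))

w, theta = [1.0, 1.0], 1.5
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    x_w = dot(x, w) / length(w)               # projection of x onto w
    y = 1 if x_w >= theta / length(w) else 0  # compare against theta/|w|
    print(x, "->", y)                         # only [1, 1] gives y = 1
```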