Computational aspects of two-player zero-sum games
Course notes for Computational Game Theory, Sections 1 and 2, Fall 2010


Peter Bro Miltersen
November 1, 2010
Version 1.2

1 Introduction

The topic of this course is computational game theory. Informally, a game is a mathematical model of a situation where agents with conflicting interests interact. Computational game theory studies algorithms for solving games, i.e., algorithms where the input is a finite description of a game and the output is a solution to the game. What we mean by "solution" may vary: game theory has several solution concepts. Some of these solution concepts are motivated descriptively or even predictively, such as Nash equilibrium, which is, informally, a stable way of playing a game. Other solution concepts are motivated prescriptively. For such solution concepts, solutions are best thought of as advice we may give to agents about how to play the game well. An example is a maximin strategy, which is a strategy achieving the best possible guaranteed outcome from the perspective of one of the agents. While there are many other solution concepts, the two above are extremely central, and variations of them will take up most of our time. In fact, in this incarnation of the course we shall be concerned almost exclusively with two-player zero-sum games, where the two notions will be seen to coincide. As theoretical computer scientists studying computational game theory, we shall be concerned with different ways of discretely representing games. As it happens, this is a concern shared with pure game theory, where such succinct representations as the extensive form were developed already in the 1940s. A more unique concern of computer science is our focus on worst case correctness and worst case time complexity of the algorithms we develop. The insistence on correctness may seem obvious but becomes less so when one considers that we shall be working in a domain where real numbers play a big role.
We shall not be happy with methods that sometimes work, or that only work if no numerical issues spoil the fun, without a thorough understanding of the classes of instances where they are guaranteed to work, and a worst case analysis that makes sure that all numerical issues can be dealt with when the algorithm is implemented on a discrete computer. Many (most?) algorithms of numerical analysis in fact do not satisfy this criterion. When we consider worst case time complexity, we are as computer scientists particularly interested in the time complexity as a function of the combinatorial parameters (the "size") of the game. In contrast, other disciplines analyzing games would often consider the game as a constant and consider the convergence rates of their methods as a function of the desired accuracy only. We will often be interested in precise big-O bounds on the complexity of our algorithms, but our basic definition of computational efficiency is that the algorithms have polynomial time complexity. Sometimes, we shall not be

able to arrive at a polynomial time algorithm for the task we consider, and to explain why, we shall derive computational hardness results for the task, such as NP-hardness. The prerequisites for reading these notes are familiarity with linear programming, algorithm analysis (in particular, big-O time complexity) and the notions of polynomial time reductions, NP-hardness and NP-completeness. In particular, the courses dads, dopt, and dkombsoeg of the computer science program at Aarhus University make the perfect background. There are no prerequisites on game theoretic topics, but we refer to various external notes along the way that should be read together with these notes.

2 Games in Strategic Form

Please read Ferguson, Game Theory, Part 2, Sections 1-4, as a supplement to these notes.

2.1 Basic definitions

Definition 1 A (non-cooperative) game G in strategic form is given by:

- A set I of players, I = {1, 2, ..., l}.
- For each player i, a strategy space S_i. For now we will assume that S_i is finite. We shall also refer to S_i as the set of pure strategies of Player i.
- For each player i, a utility/payoff function u_i : S_1 × S_2 × ⋯ × S_l → R.

The set S_1 × S_2 × ⋯ × S_l is also called the set of pure strategy profiles of the game. The result of applying u_i to a particular strategy profile is called the outcome. The notion of a game in strategic form can obviously be used to model situations where only one simultaneous move is performed by the players. Less obviously, by defining the strategy space appropriately, it can also be used to model games played over time. The computer scientist may find it easier to see this by interpreting the sets S_i as the sets of possible deterministic programs for playing such games. We want to be able to consider randomized ways of playing a game.
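The strategic-form definition above can be made concrete in code. The following Python sketch (the names and the matching-pennies example are our own illustration, not from the notes) represents a two-player strategic-form game as strategy sets plus one payoff function per player:

```python
# A minimal sketch of a game in strategic form: per-player strategy spaces
# and per-player payoff functions (matching pennies, our own example).
from itertools import product

strategies = [("H", "T"), ("H", "T")]      # S_1 and S_2

def u1(s1, s2):
    """Payoff of Player 1: +1 if the pennies match, -1 otherwise."""
    return 1 if s1 == s2 else -1

def u2(s1, s2):
    """Zero-sum: Player 2's payoff is the negation of Player 1's."""
    return -u1(s1, s2)

# The pure strategy profiles are the Cartesian product S_1 x S_2.
profiles = list(product(*strategies))
print(profiles)     # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]
print([u1(*p) + u2(*p) for p in profiles])   # [0, 0, 0, 0]: the game is zero-sum
```

The last line checks the zero-sum property of Definition 3 below: the two payoffs cancel on every profile.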

Definition 2 The mixed extension G̃ of a game G is obtained by:

- Extending S_i to the set S̃_i = Δ(S_i) := the set of probability distributions on S_i. The set S̃_i is also referred to as the set of mixed strategies of Player i.
- Extending u_i to ũ_i with domain S̃_1 × S̃_2 × ⋯ × S̃_l, where ũ_i(σ_1, σ_2, ..., σ_l) is defined to be the expected payoff when the mixed strategies σ_i are played against each other, i.e., the expected payoff when l pure strategies are sampled independently from S_1, S_2, ..., S_l according to the probability distributions σ_1, ..., σ_l.

The set S̃_1 × S̃_2 × ⋯ × S̃_l is also called the set of mixed strategy profiles of the game.

Definition 3 A two-player zero-sum game is a game where l = 2 (two players) and u_2 = -u_1 (zero-sum).

When the strategy spaces S_1, S_2 are finite, a two-player zero-sum game is also called a matrix game. Indeed, we can use matrix notation to conveniently represent the game. Concretely, let S_1 = {1, ..., n} and S_2 = {1, ..., m}. Then the game can be represented as a payoff matrix A = (a_ij) with n rows and m columns and with a_ij = u_1(i, j). Henceforth, we shall also refer to Player 1 as the row player and Player 2 as the column player. Note that the matrix entries are the payoffs of the row player. The payoffs of the column player can be obtained by negating these. In an n × m matrix game, a mixed strategy x of Player 1 is a member of Δ_n, the set of probability distributions on {1, 2, ..., n}, and a mixed strategy y of Player 2 is a member of Δ_m. It is convenient to consider x and y to be column vectors of dimension n and m, respectively, with the i-th entry being the probability that pure strategy i is played. Then, the expected payoff when x is played against y is easily seen to be given by x^T A y. The central solution concept for matrix games is the notion of a maximin strategy. Informally, a maximin strategy is a way of playing the game in a randomized way so that the best possible guarantee on the expected payoff is achieved.
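The expected payoff x^T A y can be spelled out directly as a double sum over the matrix entries. A small Python sketch (our own illustration, using the standard rock-scissors-paper matrix):

```python
# Expected payoff x^T A y of mixed strategies in a matrix game,
# written out as the double sum  sum_i sum_j x_i * a_ij * y_j.
def expected_payoff(x, A, y):
    """Return x^T A y for row strategy x, payoff matrix A, column strategy y."""
    return sum(x[i] * A[i][j] * y[j]
               for i in range(len(A)) for j in range(len(A[0])))

# Rock-scissors-paper payoff matrix for the row player (rows R, S, P).
A = [[0, 1, -1],
     [-1, 0, 1],
     [1, -1, 0]]
u = [1/3, 1/3, 1/3]                   # the uniform mixed strategy
print(expected_payoff(u, A, u))       # 0.0: uniform vs. uniform breaks even
```

As a sanity check, `expected_payoff([1, 0, 0], A, [0, 0, 1])` (pure rock against pure paper) evaluates to -1, as it should.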
Formally:

Definition 4 A maximin strategy for Player 1 is any member of

  arg max_{x ∈ Δ_n} min_{y ∈ Δ_m} x^T A y.

For Player 2, a minimax strategy is any member of

  arg min_{y ∈ Δ_m} max_{x ∈ Δ_n} x^T A y.

The guarantees obtained also have names.

Definition 5 The lower value v̲ of the game is

  v̲ = max_{x ∈ Δ_n} min_{y ∈ Δ_m} x^T A y.

The upper value v̄ of the game is

  v̄ = min_{y ∈ Δ_m} max_{x ∈ Δ_n} x^T A y.

The lower value v̲ is a lower bound on the amount that Player 1 will win (in expectation) when he plays by a maximin strategy. Similarly, the upper value v̄ is an upper bound on the amount that Player 2 will lose (in expectation) when she plays by a minimax strategy. It should be obvious that v̲ ≤ v̄. Maximin and minimax strategies are jointly known as optimal strategies, though this terminology is a bit dangerous (and in fact often leads to misunderstandings), as such an "optimal" strategy is not necessarily optimal to use in any particular situation. For instance, when playing rock-scissors-paper, if you happen to know that your opponent will choose rock, the best move is not to play the maximin mixed strategy (which is derived below) but to play paper. The reason for the terminology is this: the maximin strategy provides, by definition, the best possible (i.e., optimal) guarantee on the expected payoff against an unknown opponent. The following lemma is convenient. It expresses that once Player 1 has committed to play by the mixed strategy x, Player 2 does not lose anything by playing a pure strategy rather than a mixed one (in the lemma, e_j is the j-th unit vector).

Lemma 6 For any x ∈ Δ_n,

  min_{y ∈ Δ_m} x^T A y = min_{j ∈ {1,...,m}} (x^T A)_j = min_{j ∈ {1,...,m}} x^T A e_j.

Proof Since y is a probability distribution (a stochastic vector), x^T A y is a weighted average of the numbers x^T A e_j for j in {k : y_k > 0}. An average cannot be strictly smaller than all the items it is an average of!
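Lemma 6 is easy to illustrate numerically. In the sketch below (our own illustration; the matrix and strategy are arbitrary choices, not from the notes), we fix a mixed strategy x, compute its guarantee min_j (x^T A)_j over pure columns, and check that no randomly sampled mixed strategy y for Player 2 pushes x^T A y below it:

```python
# Illustration of Lemma 6: against a fixed mixed x, the payoff x^T A y over
# any mixed y is a weighted average of the pure-column payoffs x^T A e_j,
# hence never below their minimum.
import random

A = [[3, -1, 2],
     [0, 4, -2]]                         # an arbitrary 2x3 payoff matrix
x = [0.6, 0.4]                           # a fixed mixed strategy for Player 1

xA = [sum(x[i] * A[i][j] for i in range(2)) for j in range(3)]
pure_min = min(xA)                       # min_j (x^T A)_j, the guarantee of x

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(3)]
    y = [wi / sum(w) for wi in w]        # a random mixed strategy for Player 2
    payoff = sum(xA[j] * y[j] for j in range(3))
    assert payoff >= pure_min - 1e-12    # never below the pure-column minimum
print(pure_min)                          # ~0.4, the guarantee of this x
```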

Consider now as an example the game of rock, scissors and paper for one dollar. It can be represented using the following payoff matrix (rows R, S, P are the row player's moves; columns r, s, p are the column player's):

       r    s    p
  R    0    1   -1
  S   -1    0    1
  P    1   -1    0

Consider the mixed strategy x of Player 1 that assigns a probability of 1/3 to each of the three rows. Let us analyze what guarantee this strategy obtains.

If Player 2 plays rock, the expected payoff to Player 1 is (1/3)·0 + (1/3)·(-1) + (1/3)·1 = 0.
If Player 2 plays scissors, the expected payoff to Player 1 is (1/3)·1 + (1/3)·0 + (1/3)·(-1) = 0.
If Player 2 plays paper, the expected payoff to Player 1 is (1/3)·(-1) + (1/3)·1 + (1/3)·0 = 0.

The minimum of 0, 0 and 0 is 0, so the guarantee obtained by the strategy is 0. That is, the lower value of the game is at least 0. Symmetrically, we can argue by looking at the mixed strategy y of Player 2 that assigns 1/3 to each column that the upper value of the game is at most 0. Since the lower value is at most the upper value, they must both be equal to 0, and hence the uniform distributions on the rows and columns are in fact maximin and minimax strategies.

As a more difficult example, we consider a modified rock-scissors-paper game we call Paper Rules. In this game, if the row player wins with paper, he wins two dollars rather than one:

       r    s    p
  R    0    1   -1
  S   -1    0    1
  P    2   -1    0

We ask: how much can the row player offer the column player for playing this game? This is given by the lower value of the game, i.e.:

  max_{x_R, x_S, x_P} min{ -x_S + 2x_P,  x_R - x_P,  -x_R + x_S }
  s.t. x_R + x_S + x_P = 1,  x_R, x_S, x_P ≥ 0.
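Before solving this max-min expression exactly, it can be approximated numerically. The following crude grid search over the probability simplex (our own illustration, not part of the notes) evaluates the guarantee of each grid point and keeps the best:

```python
# Crude grid search over the simplex to approximate the lower value
#   max_x min{ -x_S + 2 x_P,  x_R - x_P,  -x_R + x_S }
# of the Paper Rules game.
N = 200                                  # grid resolution
best = float("-inf")
for i in range(N + 1):
    for j in range(N + 1 - i):
        xR, xS = i / N, j / N
        xP = 1 - xR - xS                 # remaining probability mass
        g = min(-xS + 2 * xP, xR - xP, -xR + xS)   # guarantee of (xR, xS, xP)
        best = max(best, g)
print(best)    # close to 1/12 = 0.0833..., the lower value of this game
```

The grid search only ever evaluates guarantees of feasible strategies, so its output is a lower bound on the lower value, accurate to roughly the grid spacing.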

In order to evaluate this expression, we make a reasonable guess that we must of course verify later. We guess that at the max we have the following equalities:

  -x_S + 2x_P = x_R - x_P
  x_R - x_P = -x_R + x_S

Adding the equation x_R + x_S + x_P = 1, we have 3 equations in 3 unknowns, which yields the following unique solution:

  x_R = 1/3,  x_S = 5/12,  x_P = 1/4,

which yields an expected payoff for Player 1 of 1/12, no matter what Player 2 plays. That is, whether or not our guess above is correct, we have that the lower value is at least 1/12. By similar reasoning, we arrive at a guess for the minimax strategy of Player 2:

  y_r = 1/4,  y_s = 5/12,  y_p = 1/3,

with a guarantee of 1/12. That is, the upper value is at most 1/12. Since the lower value is at most the upper value, they are in fact both equal to 1/12, and the strategies we arrived at are in fact the maximin/minimax strategies for this game. So, if Player 1 pays more than 1/12 to play the game, he has paid too much (feel free to try to make money out of this fact in a bar, taking the role of Player 2).

2.2 Von Neumann's theorem

In the derivation of the maximin/minimax strategies above, we depended on a lucky guess. We will now see how to derive the maximin/minimax strategies in general, using linear programming (the lucky guess above corresponds to guessing the basis of an optimal basic solution to this program). This follows from the proof of the fundamental theorem on matrix games, namely:

Theorem 7 (von Neumann's min-max theorem, 1928) For all matrix games, v̲ = v̄.

Indeed, this was the case for our two examples above. Since the lower and upper values are equal, we shall refer to both as simply the value of the game.
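Returning to Paper Rules, the guessed strategies can be verified exactly using rational arithmetic (this check is our own addition; the matrix and the strategies are those derived above):

```python
# Exact verification that the guessed Paper Rules strategies give the same
# expected payoff, 1/12, against every pure counter-strategy.
from fractions import Fraction as F

A = [[0, 1, -1],
     [-1, 0, 1],
     [2, -1, 0]]                         # Paper Rules payoff matrix
x = [F(1, 3), F(5, 12), F(1, 4)]         # guessed maximin strategy
y = [F(1, 4), F(5, 12), F(1, 3)]         # guessed minimax strategy

# x^T A e_j for each column j, and e_i^T A y for each row i.
col_payoffs = [sum(x[i] * A[i][j] for i in range(3)) for j in range(3)]
row_payoffs = [sum(A[i][j] * y[j] for j in range(3)) for i in range(3)]
print(col_payoffs)    # [Fraction(1, 12), Fraction(1, 12), Fraction(1, 12)]
print(row_payoffs)    # [Fraction(1, 12), Fraction(1, 12), Fraction(1, 12)]
```

Since every pure reply of either player yields exactly 1/12, the lower value is at least 1/12 and the upper value is at most 1/12, confirming the argument in the text.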

We will state and prove a more general result which will be useful later. In the general result, rather than taking the max and the min over probability distributions, we take them over arbitrary non-empty and bounded polytopes.

Theorem 8 (Generalized min-max theorem) Let A, E, F be real matrices and e, f real column vectors so that X = {x : Ex = e, x ≥ 0} and Y = {y : Fy = f, y ≥ 0} are non-empty and bounded polytopes. Then

  max_{x ∈ X} min_{y ∈ Y} x^T A y = min_{y ∈ Y} max_{x ∈ X} x^T A y.

Proof By using the duality theorem for linear programming, we get

  max_{x: Ex=e, x≥0}  min_{y: Fy=f, y≥0}  x^T A y
    = max_{x: Ex=e, x≥0}  max_{q: q^T F ≤ x^T A}  q^T f        (1)
    = max_{x,q: Ex=e, q^T F ≤ x^T A, x≥0}  q^T f               (2)

We use the duality theorem a second time to obtain

  min_{y: Fy=f, y≥0}  max_{x: Ex=e, x≥0}  x^T A y
    = min_{y: Fy=f, y≥0}  min_{r: r^T E ≥ (Ay)^T}  r^T e       (3)
    = min_{y,r: Fy=f, r^T E ≥ (Ay)^T, y≥0}  r^T e              (4)

and applying the duality theorem for the third time, now on (2), we obtain (4), which proves the theorem.

Note that the expression (2) is just a linear program! In particular, finding maximin (and minimax) strategies and values in matrix games reduces to solving linear programs. Here is what the program looks like for the Paper Rules game. There are four variables: x_R, x_S and x_P are the probabilities for playing each of the three pure strategies, and v is the value. Also, there is one constraint for each move the opponent might make.

  max_{x_R, x_S, x_P, v}  v
  s.t.
    r:  0·x_R + (-1)·x_S + 2·x_P ≥ v
    s:  1·x_R + 0·x_S + (-1)·x_P ≥ v
    p:  (-1)·x_R + 1·x_S + 0·x_P ≥ v
        x_R + x_S + x_P = 1
        x_R, x_S, x_P ≥ 0
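This linear program can be handed to any off-the-shelf LP solver. The sketch below (our own addition, assuming SciPy is available; SciPy is not mentioned in the notes) sets up exactly the Paper Rules program above, with the maximization of v turned into the minimization of -v:

```python
# Solving the Paper Rules LP with SciPy's linprog.
# Variables: (x_R, x_S, x_P, v).  Maximize v  <=>  minimize -v.
from scipy.optimize import linprog

A = [[0, 1, -1],
     [-1, 0, 1],
     [2, -1, 0]]
c = [0, 0, 0, -1]
# Each column j of A gives the inequality  v - (x^T A)_j <= 0.
A_ub = [[-A[i][j] for i in range(3)] + [1] for j in range(3)]
b_ub = [0, 0, 0]
A_eq = [[1, 1, 1, 0]]                        # probabilities sum to one
b_eq = [1]
bounds = [(0, None)] * 3 + [(None, None)]    # x >= 0, v unrestricted

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:3])    # the maximin strategy, approximately (1/3, 5/12, 1/4)
print(res.x[3])     # the value, approximately 1/12
```

Note how little work the reduction requires: the coefficients of A are copied straight into the constraint matrix, which is exactly the point made about the reduction below.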

Note that the reduction from solving matrix games to solving linear programs is very simple; it essentially consists of copying the coefficients of the matrix A and letting them be coefficients of the linear program. In particular, the reduction from solving matrix games to solving linear programs is a strongly polynomial time reduction. Let us remind ourselves what a strongly polynomial time algorithm is. A strongly polynomial time algorithm is an algorithm for computing a semi-algebraic function (that is, a function from real vectors to real vectors that can be defined in first order logic using the vocabulary +, -, ·, /, ≤) using a number of arithmetic operations and comparisons that is polynomial in the dimension of the domain of the function. We know that linear programming has polynomial time algorithms (e.g., the ellipsoid algorithm and many interior point algorithms), but it is an open problem whether it has a strongly polynomial time algorithm. In particular, the polynomial time ellipsoid algorithm is not a strongly polynomial time algorithm: it cannot be defined on an arbitrary real input as an algorithm using the operations +, -, ·, /, ≤ only. Also, while it can be used to exactly solve linear programs in Turing machine time polynomial in the bit length of a rational input, it will need more iterations and more time on inputs containing numbers with more digits, a deficiency not shared by a strongly polynomial time algorithm. It is very likely that we will someday find a strongly polynomial time algorithm for linear programming. We do have candidates for such algorithms. In particular, the simplex algorithm with some ingenious pivoting rule could very well be such an algorithm (on the other hand, the standard pivoting rules have all been shown to lead to worst case exponential time complexity).
Summing up, we now know:

Corollary 9
- Maximin/minimax strategies and values can be found in polynomial time (given as input the matrix of a matrix game, with entries rational numbers given as fractions).
- If there is a strongly polynomial time algorithm for linear programming, then there is even a strongly polynomial time algorithm for computing maximin strategies for given matrices.

Note that we are very careful about stating the representation we have in mind when we consider the notion of polynomial time solvability. It is very interesting to ask whether the implication in the last bullet of the corollary can be reversed. Could we hope for a strongly polynomial time algorithm for computing maximin strategies without finding one for linear programming? To kill such hopes, we have to provide a strongly polynomial reduction from solving linear programs to solving matrix games. That is, we

should postulate a black box finding maximin strategies for matrix games given to the black box as input, and use such a black box to solve a given linear program using a polynomial number of arithmetic operations and applications of the black box. There seems to be a folklore belief that it is known how to do this (in fact, the lecturer has been ridiculed on more than one occasion for claiming that it is not known!). The folklore belief seems to stem from a reduction due to Dantzig (1948) that does indeed in some sense reduce solving linear programs to solving matrix games, but does not do quite what we want. Let us have a look at Dantzig's reduction. Given an LP (in standard form)

  P:  max c^T x
      s.t. Ax ≤ b
           x ≥ 0

we want to know if it has an optimal solution (so that it is not infeasible or unbounded). The answer is that this is the case if and only if

  P':  Ax ≤ b
       A^T y ≥ c
       b^T y = c^T x
       x, y ≥ 0

is feasible, by the duality theorem. Dantzig's observation is that P' is feasible if and only if the matrix game

       (  0    -A^T   c  )
  G =  (  A     0    -b  )
       ( -c^T   b^T   0  )

has some maximin strategy (x*, y*, z*) that plays the last row with nonzero probability z* > 0. We shall not give a proof of this statement, but we remark that the x-part of the feasible solution to P' is in that case given by x = x*/z* and the y-part is given by y = y*/z*. Since G is skew-symmetric, the game appears identical to the two players. We call such a game symmetric. It is easy to see that the value of a symmetric game is 0, as in the (unmodified) rock, scissors and paper game. In particular, Dantzig certainly did not reduce the general linear programming problem to computing the value of a matrix game. Thus, it seems that the following problem is still open:

Open problem 1 Is there a strongly polynomial time reduction from finding optimal solutions to linear programs to finding maximin strategies of matrix games?

2.3 Maximin strategies and Nash equilibria

Definition 10 (for general games) Given σ_i ∈ S̃_i (a mixed strategy for player i) and σ_{-i} ∈ ∏_{j≠i} S̃_j (mixed strategies for the other players), we say that σ_i is a best reply (or best response) to σ_{-i} if

  σ_i ∈ arg max_{π_i} ũ_i(π_i, σ_{-i}).

Definition 11 A strategy profile σ ∈ ∏_{i=1}^l S̃_i is a Nash equilibrium if σ_i is a best reply to σ_{-i} for all i.

Nash equilibrium is a central solution concept in game theory. Note that it is conceptually very different from maximin: Nash equilibrium is a descriptive notion capturing stability, while maximin is a normative notion capturing optimal guarantees. Still, for the case of a two-player, zero-sum game, we can show that the two notions coincide.

Proposition 12 For a two-player, zero-sum game, we have:

  Nash equilibria = Maximin strategies × Minimax strategies

Proof ⊇: Let (x*, y*) be a strategy profile where x* is maximin and y* is minimax. The expected outcome of play must be (x*)^T A y* = v̲ = v̄ = v, as both players are guaranteed an outcome at least this good. Can any player deviate and get a better outcome? No! This would violate the guarantee of the other player.

⊆: Let (x*, y*) be a Nash equilibrium and let v(x*) = min_{y ∈ Δ_m} (x*)^T A y and v(y*) = max_{x ∈ Δ_n} x^T A y* (we may call v(x*) the value of the strategy x*). We should prove that x* is maximin and that y* is minimax. We shall just show that x* is maximin; the other proof is similar. So suppose, to the contrary, that x* is not maximin.

That is, v(x*) ≠ v̲, and hence v(x*) < v̲. If (x*)^T A y* > v(x*), then Player 2 can deviate and bring the expected payoff down to v(x*), contradicting the Nash equilibrium property. On the other hand, if (x*)^T A y* ≤ v(x*), then since v(x*) < v̲, Player 1 can deviate to his maximin strategy and achieve v̲ > (x*)^T A y*, again contradicting the Nash equilibrium property.

The following corollary is often very useful when deriving maximin strategies by hand:

Corollary 13 (Principle of Indifference) Given a matrix game with a maximin mixed strategy x for the row player and a minimax mixed strategy y for the column player: if the column player plays according to y, the expected payoff for the row player is the same no matter which pure strategy i he chooses, as long as x_i > 0.

Proof If not, the Nash equilibrium condition is violated.
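Proposition 12 can be checked mechanically for the Paper Rules example: a pair of strategies is a Nash equilibrium exactly when no pure deviation helps either player. The sketch below (our own check, reusing the Paper Rules matrix and the strategies derived earlier) does this with exact arithmetic:

```python
# Direct check that the maximin/minimax pair of Paper Rules is a Nash
# equilibrium: no player gains by deviating to any pure strategy.
from fractions import Fraction as F

A = [[0, 1, -1],
     [-1, 0, 1],
     [2, -1, 0]]
x = [F(1, 3), F(5, 12), F(1, 4)]         # maximin strategy of the row player
y = [F(1, 4), F(5, 12), F(1, 3)]         # minimax strategy of the column player

v = sum(x[i] * A[i][j] * y[j] for i in range(3) for j in range(3))
# Row player: every pure row against y pays at most v (here: exactly v,
# illustrating the Principle of Indifference on the support of x).
assert all(sum(A[i][j] * y[j] for j in range(3)) <= v for i in range(3))
# Column player: every pure column against x costs at least v.
assert all(sum(x[i] * A[i][j] for i in range(3)) >= v for j in range(3))
print(v)   # 1/12
```

By Lemma 6, checking pure deviations suffices: a profitable mixed deviation would imply a profitable pure one.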


We shall turn our attention to solving linear systems of equations. Ax = b 59 Linear Algebra We shall turn our attention to solving linear systems of equations Ax = b where A R m n, x R n, and b R m. We already saw examples of methods that required the solution of a linear system

More information

Applied Algorithm Design Lecture 5

Applied Algorithm Design Lecture 5 Applied Algorithm Design Lecture 5 Pietro Michiardi Eurecom Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 1 / 86 Approximation Algorithms Pietro Michiardi (Eurecom) Applied Algorithm Design

More information

2.3 Convex Constrained Optimization Problems

2.3 Convex Constrained Optimization Problems 42 CHAPTER 2. FUNDAMENTAL CONCEPTS IN CONVEX OPTIMIZATION Theorem 15 Let f : R n R and h : R R. Consider g(x) = h(f(x)) for all x R n. The function g is convex if either of the following two conditions

More information

Computational Learning Theory Spring Semester, 2003/4. Lecture 1: March 2

Computational Learning Theory Spring Semester, 2003/4. Lecture 1: March 2 Computational Learning Theory Spring Semester, 2003/4 Lecture 1: March 2 Lecturer: Yishay Mansour Scribe: Gur Yaari, Idan Szpektor 1.1 Introduction Several fields in computer science and economics are

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

What is Linear Programming?

What is Linear Programming? Chapter 1 What is Linear Programming? An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to

More information

5.1 Bipartite Matching

5.1 Bipartite Matching CS787: Advanced Algorithms Lecture 5: Applications of Network Flow In the last lecture, we looked at the problem of finding the maximum flow in a graph, and how it can be efficiently solved using the Ford-Fulkerson

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

LS.6 Solution Matrices

LS.6 Solution Matrices LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

More information

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

More information

The Equivalence of Linear Programs and Zero-Sum Games

The Equivalence of Linear Programs and Zero-Sum Games The Equivalence of Linear Programs and Zero-Sum Games Ilan Adler, IEOR Dep, UC Berkeley adler@ieor.berkeley.edu Abstract In 1951, Dantzig showed the equivalence of linear programming problems and two-person

More information

Linear Codes. Chapter 3. 3.1 Basics

Linear Codes. Chapter 3. 3.1 Basics Chapter 3 Linear Codes In order to define codes that we can encode and decode efficiently, we add more structure to the codespace. We shall be mainly interested in linear codes. A linear code of length

More information

1 Nonzero sum games and Nash equilibria

1 Nonzero sum games and Nash equilibria princeton univ. F 14 cos 521: Advanced Algorithm Design Lecture 19: Equilibria and algorithms Lecturer: Sanjeev Arora Scribe: Economic and game-theoretic reasoning specifically, how agents respond to economic

More information

Notes V General Equilibrium: Positive Theory. 1 Walrasian Equilibrium and Excess Demand

Notes V General Equilibrium: Positive Theory. 1 Walrasian Equilibrium and Excess Demand Notes V General Equilibrium: Positive Theory In this lecture we go on considering a general equilibrium model of a private ownership economy. In contrast to the Notes IV, we focus on positive issues such

More information

Optimization in ICT and Physical Systems

Optimization in ICT and Physical Systems 27. OKTOBER 2010 in ICT and Physical Systems @ Aarhus University, Course outline, formal stuff Prerequisite Lectures Homework Textbook, Homepage and CampusNet, http://kurser.iha.dk/ee-ict-master/tiopti/

More information

Game Theory and Nash Equilibrium

Game Theory and Nash Equilibrium Game Theory and Nash Equilibrium by Jenny Duffy A project submitted to the Department of Mathematical Sciences in conformity with the requirements for Math 4301 (Honours Seminar) Lakehead University Thunder

More information

Module1. x 1000. y 800.

Module1. x 1000. y 800. Module1 1 Welcome to the first module of the course. It is indeed an exciting event to share with you the subject that has lot to offer both from theoretical side and practical aspects. To begin with,

More information

8.2. Solution by Inverse Matrix Method. Introduction. Prerequisites. Learning Outcomes

8.2. Solution by Inverse Matrix Method. Introduction. Prerequisites. Learning Outcomes Solution by Inverse Matrix Method 8.2 Introduction The power of matrix algebra is seen in the representation of a system of simultaneous linear equations as a matrix equation. Matrix algebra allows us

More information

160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

More information

Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

More information

Linearly Independent Sets and Linearly Dependent Sets

Linearly Independent Sets and Linearly Dependent Sets These notes closely follow the presentation of the material given in David C. Lay s textbook Linear Algebra and its Applications (3rd edition). These notes are intended primarily for in-class presentation

More information

The Graphical Method: An Example

The Graphical Method: An Example The Graphical Method: An Example Consider the following linear program: Maximize 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2 0, where, for ease of reference,

More information

(67902) Topics in Theory and Complexity Nov 2, 2006. Lecture 7

(67902) Topics in Theory and Complexity Nov 2, 2006. Lecture 7 (67902) Topics in Theory and Complexity Nov 2, 2006 Lecturer: Irit Dinur Lecture 7 Scribe: Rani Lekach 1 Lecture overview This Lecture consists of two parts In the first part we will refresh the definition

More information

Systems of Linear Equations

Systems of Linear Equations Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

More information

Linear Programming. April 12, 2005

Linear Programming. April 12, 2005 Linear Programming April 1, 005 Parts of this were adapted from Chapter 9 of i Introduction to Algorithms (Second Edition) /i by Cormen, Leiserson, Rivest and Stein. 1 What is linear programming? The first

More information

NP-Completeness and Cook s Theorem

NP-Completeness and Cook s Theorem NP-Completeness and Cook s Theorem Lecture notes for COM3412 Logic and Computation 15th January 2002 1 NP decision problems The decision problem D L for a formal language L Σ is the computational task:

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

Games Manipulators Play

Games Manipulators Play Games Manipulators Play Umberto Grandi Department of Mathematics University of Padova 23 January 2014 [Joint work with Edith Elkind, Francesca Rossi and Arkadii Slinko] Gibbard-Satterthwaite Theorem All

More information

24. The Branch and Bound Method

24. The Branch and Bound Method 24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no

More information

3. Mathematical Induction

3. Mathematical Induction 3. MATHEMATICAL INDUCTION 83 3. Mathematical Induction 3.1. First Principle of Mathematical Induction. Let P (n) be a predicate with domain of discourse (over) the natural numbers N = {0, 1,,...}. If (1)

More information

Adaptive Online Gradient Descent

Adaptive Online Gradient Descent Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650

More information

Similarity and Diagonalization. Similar Matrices

Similarity and Diagonalization. Similar Matrices MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

More information

Tiers, Preference Similarity, and the Limits on Stable Partners

Tiers, Preference Similarity, and the Limits on Stable Partners Tiers, Preference Similarity, and the Limits on Stable Partners KANDORI, Michihiro, KOJIMA, Fuhito, and YASUDA, Yosuke February 7, 2010 Preliminary and incomplete. Do not circulate. Abstract We consider

More information

Duality in Linear Programming

Duality in Linear Programming Duality in Linear Programming 4 In the preceding chapter on sensitivity analysis, we saw that the shadow-price interpretation of the optimal simplex multipliers is a very useful concept. First, these shadow

More information

Linear Programming Notes VII Sensitivity Analysis

Linear Programming Notes VII Sensitivity Analysis Linear Programming Notes VII Sensitivity Analysis 1 Introduction When you use a mathematical model to describe reality you must make approximations. The world is more complicated than the kinds of optimization

More information

Polynomial Invariants

Polynomial Invariants Polynomial Invariants Dylan Wilson October 9, 2014 (1) Today we will be interested in the following Question 1.1. What are all the possible polynomials in two variables f(x, y) such that f(x, y) = f(y,

More information

by the matrix A results in a vector which is a reflection of the given

by the matrix A results in a vector which is a reflection of the given Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

More information

Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh

Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Peter Richtárik Week 3 Randomized Coordinate Descent With Arbitrary Sampling January 27, 2016 1 / 30 The Problem

More information

Game Theory: Supermodular Games 1

Game Theory: Supermodular Games 1 Game Theory: Supermodular Games 1 Christoph Schottmüller 1 License: CC Attribution ShareAlike 4.0 1 / 22 Outline 1 Introduction 2 Model 3 Revision questions and exercises 2 / 22 Motivation I several solution

More information

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm.

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm. Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of three

More information

Transportation Polytopes: a Twenty year Update

Transportation Polytopes: a Twenty year Update Transportation Polytopes: a Twenty year Update Jesús Antonio De Loera University of California, Davis Based on various papers joint with R. Hemmecke, E.Kim, F. Liu, U. Rothblum, F. Santos, S. Onn, R. Yoshida,

More information

Lecture 7: Finding Lyapunov Functions 1

Lecture 7: Finding Lyapunov Functions 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 7: Finding Lyapunov Functions 1

More information

MOP 2007 Black Group Integer Polynomials Yufei Zhao. Integer Polynomials. June 29, 2007 Yufei Zhao yufeiz@mit.edu

MOP 2007 Black Group Integer Polynomials Yufei Zhao. Integer Polynomials. June 29, 2007 Yufei Zhao yufeiz@mit.edu Integer Polynomials June 9, 007 Yufei Zhao yufeiz@mit.edu We will use Z[x] to denote the ring of polynomials with integer coefficients. We begin by summarizing some of the common approaches used in dealing

More information

Optimization Modeling for Mining Engineers

Optimization Modeling for Mining Engineers Optimization Modeling for Mining Engineers Alexandra M. Newman Division of Economics and Business Slide 1 Colorado School of Mines Seminar Outline Linear Programming Integer Linear Programming Slide 2

More information

These axioms must hold for all vectors ū, v, and w in V and all scalars c and d.

These axioms must hold for all vectors ū, v, and w in V and all scalars c and d. DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms

More information

1.2 Solving a System of Linear Equations

1.2 Solving a System of Linear Equations 1.. SOLVING A SYSTEM OF LINEAR EQUATIONS 1. Solving a System of Linear Equations 1..1 Simple Systems - Basic De nitions As noticed above, the general form of a linear system of m equations in n variables

More information

Minimally Infeasible Set Partitioning Problems with Balanced Constraints

Minimally Infeasible Set Partitioning Problems with Balanced Constraints Minimally Infeasible Set Partitioning Problems with alanced Constraints Michele Conforti, Marco Di Summa, Giacomo Zambelli January, 2005 Revised February, 2006 Abstract We study properties of systems of

More information

Methods for Finding Bases

Methods for Finding Bases Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,

More information

5 Homogeneous systems

5 Homogeneous systems 5 Homogeneous systems Definition: A homogeneous (ho-mo-jeen -i-us) system of linear algebraic equations is one in which all the numbers on the right hand side are equal to : a x +... + a n x n =.. a m

More information

Linear Programming I

Linear Programming I Linear Programming I November 30, 2003 1 Introduction In the VCR/guns/nuclear bombs/napkins/star wars/professors/butter/mice problem, the benevolent dictator, Bigus Piguinus, of south Antarctica penguins

More information

it is easy to see that α = a

it is easy to see that α = a 21. Polynomial rings Let us now turn out attention to determining the prime elements of a polynomial ring, where the coefficient ring is a field. We already know that such a polynomial ring is a UF. Therefore

More information

Online Appendix to Stochastic Imitative Game Dynamics with Committed Agents

Online Appendix to Stochastic Imitative Game Dynamics with Committed Agents Online Appendix to Stochastic Imitative Game Dynamics with Committed Agents William H. Sandholm January 6, 22 O.. Imitative protocols, mean dynamics, and equilibrium selection In this section, we consider

More information

Discrete Mathematics and Probability Theory Fall 2009 Satish Rao, David Tse Note 2

Discrete Mathematics and Probability Theory Fall 2009 Satish Rao, David Tse Note 2 CS 70 Discrete Mathematics and Probability Theory Fall 2009 Satish Rao, David Tse Note 2 Proofs Intuitively, the concept of proof should already be familiar We all like to assert things, and few of us

More information

Lecture 3. Linear Programming. 3B1B Optimization Michaelmas 2015 A. Zisserman. Extreme solutions. Simplex method. Interior point method

Lecture 3. Linear Programming. 3B1B Optimization Michaelmas 2015 A. Zisserman. Extreme solutions. Simplex method. Interior point method Lecture 3 3B1B Optimization Michaelmas 2015 A. Zisserman Linear Programming Extreme solutions Simplex method Interior point method Integer programming and relaxation The Optimization Tree Linear Programming

More information

11 Multivariate Polynomials

11 Multivariate Polynomials CS 487: Intro. to Symbolic Computation Winter 2009: M. Giesbrecht Script 11 Page 1 (These lecture notes were prepared and presented by Dan Roche.) 11 Multivariate Polynomials References: MC: Section 16.6

More information

Recall that two vectors in are perpendicular or orthogonal provided that their dot

Recall that two vectors in are perpendicular or orthogonal provided that their dot Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

More information

Chapter 6. Orthogonality

Chapter 6. Orthogonality 6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

More information

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

More information

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α

More information