Approximative Methods for Monotone Systems of min-max-polynomial Equations


Javier Esparza, Thomas Gawlitza, Stefan Kiefer, and Helmut Seidl
Institut für Informatik, Technische Universität München, Germany
{esparza,gawlitza,kiefer,seidl}@in.tum.de

Abstract. A monotone system of min-max-polynomial equations (min-max-MSPE) over the variables X_1, ..., X_n has for every i exactly one equation of the form X_i = f_i(X_1, ..., X_n), where each f_i(X_1, ..., X_n) is an expression built up from polynomials with non-negative coefficients and minimum- and maximum-operators. The question of computing least solutions of min-max-MSPEs arises naturally in the analysis of recursive stochastic games [5, 6, 14]. Min-max-MSPEs generalize MSPEs, for which convergence speed results of Newton's method are established in [11, 3]. We present the first methods for approximatively computing least solutions of min-max-MSPEs which converge at least linearly. Whereas the first one converges faster, a single step of the second method is cheaper. Furthermore, we compute ε-optimal positional strategies for the player who wants to maximize the outcome in a recursive stochastic game.

1 Introduction

In this paper we study monotone systems of min-max-polynomial equations (min-max-MSPEs). A min-max-MSPE over the variables X_1, ..., X_n contains for every 1 ≤ i ≤ n exactly one equation of the form X_i = f_i(X_1, ..., X_n), where every f_i(X_1, ..., X_n) is an expression built up from polynomials with non-negative coefficients and minimum- and maximum-operators. An example of such an equation is X_1 = 3X_1X_2 + 5X_1^2 ∧ 4X_2, where ∧ is the minimum operator. The variables range over non-negative reals. Min-max-MSPEs are called monotone because every f_i is a monotone function in all arguments. Min-max-MSPEs naturally appear in the study of two-player stochastic games and competitive Markov decision processes, in which, broadly speaking, the next move is decided by one of the two players or by tossing a coin, depending on the game's position (see e.g. [12, 7]).
The min and max operators model the competition between the players. The product operator, which leads to non-linear equations, allows to deal with recursive stochastic games [5, 6], a class of games with an infinite number of positions, which have extinction games as a special case: games in which the players influence with their actions the development of a population whose members reproduce and die, and in which the players' goals are to extinguish the population or to keep it alive (see Section 3). (This work was in part supported by the DFG project "Algorithms for Software Model Checking".) Min-max-MSPEs generalize several other classes of equation systems. If product is disallowed, we obtain systems of min-max linear equations, which appear in classical

two-person stochastic games with a finite number of game positions. The problem of solving these systems has been thoroughly studied [1, 8, 9]. If both min and max are disallowed, we obtain monotone systems of polynomial equations (MSPEs), which are central to the study of recursive Markov chains and probabilistic pushdown systems, and have been recently studied in [4, 11, 3]. If only one of min or max is disallowed, we obtain a class of systems corresponding to recursive Markov decision processes [5]. All these models have applications in the analysis of probabilistic programs with procedures [14]. In vector form we denote a min-max-MSPE by X = f(X), where X denotes the vector (X_1, ..., X_n) and f denotes the vector (f_1, ..., f_n). By Kleene's theorem, if a min-max-MSPE has a solution then it also has a least one, denoted by μf, which is also the relevant solution for the applications mentioned above. Kleene's theorem also ensures that the iterative process κ^(0) = 0, κ^(k+1) = f(κ^(k)), k ∈ N, the so-called Kleene sequence, converges to μf. However, this procedure can converge very slowly: in the worst case, the number of accurate bits of the approximation grows with the logarithm of the number of iterations (cf. [4]). Thus, the goal is to replace the function f by an operator G: R^n → R^n such that the respective iterative process also converges to μf but faster. In [4, 11, 3] this problem was studied for min-max-MSPEs without the min and max operator. There, G was chosen as one step of the well-known Newton's method (cf. for instance [13]). This means that, for a given approximant x^(k), the next approximant x^(k+1) = G(x^(k)) is determined by the unique solution of a linear equation system which is obtained from the first-order Taylor approximation of f at x^(k). It was shown that this choice guarantees linear convergence, i.e., the number of accurate bits grows linearly in the number of iterations. Notice that when characterizing the convergence behavior the term "linear" does not refer to the size of f.
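As a concrete illustration of this convergence gap, the following sketch compares Kleene iteration with Newton's method on the one-dimensional MSPE X = (1/2)X^2 + 1/2, whose least solution is 1 (this is the example system used again in Section 6); the Newton update solves the linearized equation x' = f(x) + f'(x)(x' − x) for x':

```python
# Compare Kleene iteration with Newton's method on the 1-dimensional
# MSPE X = 0.5*X^2 + 0.5, whose least solution is mu_f = 1.
def f(x):
    return 0.5 * x * x + 0.5

def f_prime(x):
    return x

def kleene(steps):
    x = 0.0
    for _ in range(steps):
        x = f(x)          # kappa^(k+1) = f(kappa^(k))
    return x

def newton(steps):
    x = 0.0
    for _ in range(steps):
        # Solve the linearization x' = f(x) + f'(x)*(x' - x) for x'.
        x = x + (f(x) - x) / (1.0 - f_prime(x))
    return x

print(kleene(10), newton(10))  # Newton gains one bit per step: x_k = 1 - 2^-k
```

Here Newton's method produces exactly one accurate bit per iteration (x_k = 1 − 2^−k), while the Kleene iterates crawl towards 1 with an error that shrinks only like 2/k.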
However, this technique no longer works for arbitrary min-max-MSPEs. If we approximate f at x^(k) through its first-order Taylor approximation at x^(k), there is no guarantee that the next approximant still lies below the least solution, and the sequence of approximants may even diverge. For this reason, the PReMo tool [14] uses round-robin iteration for min-max-MSPEs, an optimization of Kleene iteration. Unfortunately, this technique also exhibits logarithmic convergence behavior in the worst case. In this paper we overcome the problem of Newton's method. Instead of approximating f (at the current approximant x^(k)) by a linear function, both of our methods approximate f by a piecewise linear function. In contrast to the applications of Newton's method in [4, 11, 3], this approximation may not have a unique fixpoint, but it has a least fixpoint which we use as the next approximant x^(k+1) = G(x^(k)). Our first method uses an approximation of f at x^(k) whose least fixpoint can be determined using the algorithm for systems of rational equations from [9]. The approximation of f at x^(k) used by our second method allows to use linear programming to compute x^(k+1). Our methods are the first algorithms for approximatively computing μf which converge at least linearly, provided that f is quadratic, an easily achievable normal form.

The rest of the paper is organized as follows. In Section 2 we introduce basic concepts and state some important facts about min-max-MSPEs. A class of games which can be analyzed using our techniques is presented in Section 3. Our main contribution, the two approximation methods, is presented and analyzed in Sections 4 and 5. In Section 6 we study the relation between our two approaches and compare them to previous work. We conclude in Section 7. Missing proofs can be found in a technical report [2].

2 Notations, Basic Concepts and a Fundamental Theorem

As usual, R and N denote the sets of real and natural numbers. We assume 0 ∈ N. We write R≥0 for the set of non-negative real numbers. We use bold letters for vectors, e.g. x ∈ R^n. In particular, 0 denotes the vector (0, ..., 0). The transpose of a matrix or a vector is indicated by the superscript ⊤. We assume that the vector x ∈ R^n has the components x_1, ..., x_n. Similarly, the i-th component of a function f: R^n → R^m is denoted by f_i. As in [3], we say that x ∈ R^n has i ∈ N valid bits of y ∈ R^n if |x_j − y_j| ≤ 2^−i |y_j| for j = 1, ..., n. We identify a linear function from R^n to R^m with its representation as a matrix from R^{m×n}. The identity matrix is denoted by I. The Jacobian of a function f: R^n → R^m at x ∈ R^n is the matrix of all first-order partial derivatives of f at x, i.e., the m×n-matrix with the entry ∂f_i/∂X_j(x) in the i-th row and the j-th column. We denote it by f'(x). The partial order ≤ on R^n is defined by setting x ≤ y if x_i ≤ y_i for all i = 1, ..., n. We write x < y if x ≤ y and x ≠ y. The operators ∧ and ∨ are defined by x ∧ y := min{x, y} and x ∨ y := max{x, y} for x, y ∈ R. These operators are also extended component-wise to R^n and point-wise to R^n-valued functions. A function f: D ⊆ R^n → R^m is called monotone on M ⊆ D if f(x) ≤ f(y) for every x, y ∈ M with x ≤ y. Let X ⊆ R^n and f: X → X. A vector x ∈ X is called a fixpoint of f if x = f(x). It is the least fixpoint of f if y ≥ x for every fixpoint y ∈ X of f. If it exists we denote the least fixpoint of f by μf. We call f feasible if f has some fixpoint x ∈ X. Let us fix a set X = {X_1, ..., X_n} of variables. We call a vector f = (f_1, ..., f_m) of polynomials f_1, ..., f_m in the variables X_1, ..., X_n a system of polynomials. f is called linear (resp. quadratic) if the degree of each f_i is at most 1 (resp. 2), i.e., every monomial contains at most one variable (resp. two variables).
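The "valid bits" measure can be stated directly in code; a small helper (the function name is ours, not from the paper) that counts how many valid bits an approximation x has of a reference vector y:

```python
# Count how many valid bits the approximation x has of y:
# the largest i such that |x_j - y_j| <= 2^-i * |y_j| for all j.
def valid_bits(x, y, max_bits=64):
    bits = 0
    for i in range(1, max_bits + 1):
        if all(abs(xj - yj) <= 2.0**-i * abs(yj) for xj, yj in zip(x, y)):
            bits = i
        else:
            break
    return bits

print(valid_bits([0.875, 1.75], [1.0, 2.0]))  # both components within 2^-3, so 3
```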
As usual, we identify f with its interpretation as a function from R^n to R^m. As in [11, 3] we call f a monotone system of polynomials (MSP for short) if all coefficients are non-negative.

Min-max-MSPs. Given polynomials f_1, ..., f_k, we call f_1 ∧ ... ∧ f_k a min-polynomial and f_1 ∨ ... ∨ f_k a max-polynomial. A function that is either a min- or a max-polynomial is also called a min-max-polynomial. We call f = (f_1, ..., f_n) a system of min-polynomials if every component f_i is a min-polynomial. The definition of systems of max-polynomials and systems of min-max-polynomials is analogous. A system of min-max-polynomials is called linear (resp. quadratic) if all occurring polynomials are linear (resp. quadratic). By introducing auxiliary variables every system of min-max-polynomials can be transformed into a quadratic one in time linear in the size of the system (cf. [11]). A system of min-max-polynomials where all coefficients are from R≥0 is called a monotone system of min-max-polynomials (min-max-MSP for short). The terms min-MSP and max-MSP are defined analogously.

Example 1. f(x_1, x_2) = ((1/2)x_2^2 + 1/2 ∧ 3, x_1 ∨ 2) is a quadratic min-max-MSP.
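A minimal sketch of Kleene iteration on this system, assuming Example 1 reads f(x_1, x_2) = ((1/2)x_2^2 + 1/2 ∧ 3, x_1 ∨ 2); the iterates increase monotonically towards the least fixpoint (3, 3):

```python
# Kleene iteration for the min-max-MSP of Example 1 (as read here):
# f(x1, x2) = (min(0.5*x2**2 + 0.5, 3), max(x1, 2)).
def f(x):
    x1, x2 = x
    return (min(0.5 * x2 * x2 + 0.5, 3.0), max(x1, 2.0))

x = (0.0, 0.0)          # kappa^(0) = 0
for _ in range(10):     # kappa^(k+1) = f(kappa^(k))
    x = f(x)
print(x)  # reaches the least fixpoint (3.0, 3.0)
```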

A min-max-MSP f = (f_1, ..., f_n) can be considered as a mapping from R≥0^n to R≥0^n. The Kleene sequence (κ^(k))_{k∈N} is defined by κ^(k) := f^k(0), k ∈ N. We have:

Lemma 1. Let f: R≥0^n → R≥0^n be a min-max-MSP. Then: (1) f is monotone and continuous on R≥0^n; and (2) if f is feasible (i.e., f has some fixpoint), then f has a least fixpoint μf and μf = lim_{k→∞} κ^(k).

Strategies. Assume that f denotes a system of min-max-polynomials. A ∨-strategy σ for f is a function that maps every max-polynomial f_i = f_{i,1} ∨ ... ∨ f_{i,k_i} occurring in f to one of the f_{i,j}'s, and every min-polynomial f_i to f_i. We also write f_i^σ for σ(f_i). Accordingly, a ∧-strategy π for f is a function that maps every min-polynomial f_i = f_{i,1} ∧ ... ∧ f_{i,k_i} occurring in f to one of the f_{i,j}'s, and every max-polynomial f_i to f_i. We denote the set of ∨-strategies for f by Σ_f and the set of ∧-strategies for f by Π_f. For s ∈ Σ_f ∪ Π_f, we write f^s for (f_1^s, ..., f_n^s). We define Π*_f := {π ∈ Π_f | f^π is feasible}. We drop the subscript whenever f is clear from the context.

Example 2. Consider f from Example 1. Then π with π((1/2)x_2^2 + 1/2 ∧ 3) = 3 and π(x_1 ∨ 2) = x_1 ∨ 2 is a ∧-strategy. The max-MSP f^π is given by f^π(x_1, x_2) = (3, x_1 ∨ 2).

We collect some elementary facts concerning strategies.

Lemma 2. Let f be a feasible min-max-MSP. Then: (1) μf^σ ≤ μf for every σ ∈ Σ; (2) μf^π ≥ μf for every π ∈ Π*; (3) μf^π = μf for some π ∈ Π*.

In [5] the authors consider a subclass of recursive stochastic games for which they prove that a positional optimal strategy exists for the player who wants to maximize the outcome (Theorem 2). The outcome of such a game is the least fixpoint of some min-max-MSP f. In our setting, Theorem 2 of [5] implies that there exists a ∨-strategy σ such that μf^σ = μf, provided that f is derived from such a recursive stochastic game. Example 3 shows that this property does not hold for arbitrary min-max-MSPs.

Example 3. Consider f from Example 1. Let σ_1, σ_2 ∈ Σ be defined by σ_1(x_1 ∨ 2) = x_1 and σ_2(x_1 ∨ 2) = 2. Then μf^{σ_1} = (1, 1), μf^{σ_2} = (5/2, 2) and μf = (3, 3).

The proof of the following fundamental result is inspired by the proof of Theorem 2 in [5].
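The two ∨-strategies of Example 3 can be checked numerically; a sketch (again assuming the reading f(x_1, x_2) = ((1/2)x_2^2 + 1/2 ∧ 3, x_1 ∨ 2)) that approximates μf^{σ_1} and μf^{σ_2} by Kleene iteration:

```python
# Kleene-iterate the two strategy systems of Example 3:
# f^sigma1 fixes the max-component to x1, f^sigma2 fixes it to 2.
def f_sigma1(x):
    x1, x2 = x
    return (min(0.5 * x2 * x2 + 0.5, 3.0), x1)

def f_sigma2(x):
    x1, x2 = x
    return (min(0.5 * x2 * x2 + 0.5, 3.0), 2.0)

def kleene(f, steps=2000):
    x = (0.0, 0.0)
    for _ in range(steps):
        x = f(x)
    return x

print(kleene(f_sigma1))  # slowly approaches mu f^sigma1 = (1, 1)
print(kleene(f_sigma2))  # reaches mu f^sigma2 = (5/2, 2) exactly
```

Note the asymmetry: under σ_2 the iteration stabilizes after two steps, while under σ_1 the derivative at the fixpoint equals 1 and convergence is only harmonic.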
Although the result looks very natural, it is non-trivial to prove.

Theorem 1. Let f be a feasible max-MSP. Then μf^σ = μf for some σ ∈ Σ.

3 A Class of Applications: Extinction Games

In order to illustrate the interest of min-max-MSPs we consider extinction games, which are special stochastic games. Consider a world of n different species s_1, ..., s_n. Each species s_i is controlled by one of two adversarial players. For each s_i there is a non-empty set A_i of actions. An action a ∈ A_i replaces a single individual of species s_i by other individuals specified by the action a. The actions can be probabilistic. E.g., an action could transform an adult rabbit to zero individuals with probability 0.2, to an adult rabbit with probability 0.3, and to an adult and a baby rabbit with probability 0.5.

Another action could transform an adult rabbit into a fat rabbit. The max-player (min-player) wants to maximize (minimize) the probability that some initial population is extinguished. During the game each player continuously chooses an individual of a species s_i controlled by her/him and applies an action from A_i to it. Note that actions on different species are never in conflict and the execution order is irrelevant. What is the probability that the population is extinguished if the players follow optimal strategies? To answer those questions we set up a min-max-MSP with one min-max-polynomial for each species, thereby following [10, 5]. The variable X_i represents the probability that a population with only a single individual of species s_i is extinguished. In the rabbit example we have X_adult = 0.2 + 0.3X_adult + 0.5X_adult·X_baby ∨ X_fat, assuming that the adult rabbits are controlled by the max-player. The probability that an initial population with p_i individuals of species s_i is extinguished is given by the product over i = 1, ..., n of ((μf)_i)^{p_i}. The stochastic termination games of [5, 6, 14] can be considered as extinction games. In the following we present another instance.

The primaries game. Hillary Clinton has to decide her strategy in the primaries. Her team estimates that undecided voters have not yet decided to vote for her for three possible reasons: they consider her (a) cold and calculating, (b) too much part of Washington's establishment, or (c) they listen to Obama's campaign. So the team decides to model those problems as species in an extinction game. The larger the population of a species, the more influenced is an undecided voter by the problem. The goal of Clinton's team is to maximize the extinction probabilities. Clinton's possible actions for problem (a) are showing emotions or concentrating on her program.
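For a single uncontrolled species, the extinction probability is just the least fixpoint of one polynomial equation. A sketch for the rabbit branching step alone (ignoring the fat-rabbit action, so X = 0.2 + 0.3X + 0.5X^2):

```python
# Extinction probability of the adult-rabbit branching process
# X = 0.2 + 0.3*X + 0.5*X^2, computed by Kleene iteration.
# The least solution of 0.5*X^2 - 0.7*X + 0.2 = 0 is X = 0.4.
def step(x):
    return 0.2 + 0.3 * x + 0.5 * x * x

x = 0.0
for _ in range(200):
    x = step(x)
print(x)  # close to 0.4: starting from one adult, extinction is not certain
```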
If she shows emotions, her team estimates that the individual of problem (a) is removed with probability 0.3, but with probability 0.7 the action backfires and produces yet another individual of (a). This and the effect of concentrating on her program can be read off from Equation (1) below. For problem (b), Clinton can choose between concentrating on her voting record or her statement "I'll be ready from day 1". Her team estimates the effect as given in Equation (2). Problem (c) is controlled by Obama, who has the choice between his "change" message, or attacking Clinton for her position on Iraq, see Equation (3).

X_a = 0.3 + 0.7X_a^2 ∨ X_c   (1)
X_b = X_c ∨ 0.4X_b + 0.3X_c   (2)
X_c = 0.5X_b + 0.3X_b^2 ∧ X_a + 0.4X_a·X_b + 0.1X_b   (3)

What should Clinton and Obama do? What are the extinction probabilities, assuming perfect strategies? In the next sections we show how to efficiently solve these problems.

4 The τ-Method

Assume that f denotes a feasible min-max-MSP. In this section we present our first method for computing μf approximatively. We call it the τ-method. This method computes, for each approximant x^(i), the next approximant x^(i+1) as the least fixpoint of a piecewise linear approximation L(f, x^(i)) ∨ x^(i) (see below) of f at x^(i). This approximation is a system of linear min-max-polynomials where all coefficients of monomials

of degree 1 are non-negative. Here, we call such a system a monotone linear min-max-system (min-max-MLS for short). Note that a min-max-MLS f is not necessarily a min-max-MSP, since negative coefficients of monomials of degree 0 are allowed; e.g. the min-max-MLS f(x_1) = x_1 − 1 is not a min-max-MSP. In [9] a min-max-MLS f is considered as a system of equations (called system of rational equations in [9]) which we denote by X = f(X) in vector form. We identify a min-max-MLS f with its interpretation as a function from R̄^n to R̄^n (R̄ denotes the complete lattice R ∪ {−∞, ∞}). Since f is monotone on R̄^n, it has a least fixpoint μf ∈ R̄^n which can be computed using the strategy improvement algorithm from [9].

We now define the min-max-MLS L(f, y), a piecewise linear approximation of f at y. As a first step, let us consider a monotone polynomial f: R≥0^n → R≥0. Given some approximant y ∈ R≥0^n, a linear approximation L(f, y): R^n → R of f at y is given by the first-order Taylor approximation at y, i.e., L(f, y)(x) := f(y) + f'(y)(x − y), x ∈ R^n. This is precisely the linear approximation which is used for Newton's method. Now consider a max-polynomial f = f_1 ∨ ... ∨ f_k: R^n → R. We define the approximation L(f, y): R^n → R of f at y by L(f, y) := L(f_1, y) ∨ ... ∨ L(f_k, y). We emphasize that in this case, L(f, y) is in general not a linear function but a linear max-polynomial. Accordingly, for a min-polynomial f = f_1 ∧ ... ∧ f_k: R^n → R, we define L(f, y) := L(f_1, y) ∧ ... ∧ L(f_k, y). In this case L(f, y) is a linear min-polynomial. Finally, for a min-max-MSP f: R^n → R^n, we define the approximation L(f, y): R^n → R^n of f at y by L(f, y) := (L(f_1, y), ..., L(f_n, y)), which is a min-max-MLS.

Example 4. Consider the min-max-MSP f from Example 1. The approximation L(f, (1/2, 1/2)) is given by L(f, (1/2, 1/2))(x_1, x_2) = ((1/2)x_2 + 3/8 ∧ 3, x_1 ∨ 2).

Using the approximation L(f, x^(i)) we define the operator N_f: R≥0^n → R≥0^n which gives us, for an approximant x^(i), the next approximant x^(i+1), by N_f(x) := μ(L(f, x) ∨ x), x ∈ R≥0^n.
Observe that L(f, x) ∨ x is still a min-max-MLS (at least after introducing auxiliary variables in order to eliminate components which contain both ∧- and ∨-operators).

Example 5. In Example 4 we have: N_f(1/2, 1/2) = μ(L(f, (1/2, 1/2)) ∨ (1/2, 1/2)) = (11/8, 2).

We collect basic properties of N_f in the following lemma:

Lemma 3. Let f be a feasible min-max-MSP and x, y ∈ R≥0^n. Then:
1. x, f(x) ≤ N_f(x);
2. x = N_f(x) whenever x = f(x);
3. (Monotonicity of N_f) N_f(x) ≤ N_f(y) whenever x ≤ y;
4. N_f(x) ≤ f(N_f(x)) whenever x ≤ f(x);
5. N_f(x) ≥ N_{f^σ}(x) for every ∨-strategy σ ∈ Σ;
6. N_f(x) ≤ N_{f^π}(x) for every ∧-strategy π ∈ Π;
7. N_f(x) = N_{f^π}(x) for some ∧-strategy π ∈ Π.
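For this small example the least fixpoint of the piecewise-linear system can be reached by plain Kleene iteration on L(f, y) ∨ y; a sketch reproducing Example 5, assuming Example 1 reads f(x_1, x_2) = ((1/2)x_2^2 + 1/2 ∧ 3, x_1 ∨ 2):

```python
# One tau-step for Example 1 at y = (1/2, 1/2):
# iterate the piecewise-linear system L(f, y) v y to its least fixpoint.
y = (0.5, 0.5)

def L_or_y(x):
    x1, x2 = x
    # L(f, y) = (0.5*x2 + 3/8 ^ 3, x1 v 2), then take the join with y.
    return (max(min(0.5 * x2 + 0.375, 3.0), y[0]),
            max(x1, 2.0, y[1]))

x = (0.0, 0.0)
for _ in range(50):
    x = L_or_y(x)
print(x)  # N_f(1/2, 1/2) = (11/8, 2) = (1.375, 2.0)
```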

In particular, Lemma 3 implies that the least fixpoint of N_f is equal to the least fixpoint of f. Moreover, iteration based on N_f is at least as fast as Kleene iteration. We therefore use this operator for computing approximants to the least fixpoint. Formally, we define:

Definition 1. We call the sequence (τ^(k)) of approximants defined by τ^(k) := N_f^k(0) for k ∈ N the τ-sequence for f. We drop the subscript f if it is clear from the context.

Proposition 1. Let f be a feasible min-max-MSP. The τ-sequence (τ^(k)) for f (see Definition 1) is monotonically increasing, bounded from above by μf, and converges to μf. Moreover, κ^(k) ≤ τ^(k) for all k ∈ N.

We now show that the new approximation method converges at least linearly to the least fixpoint. Theorem 6.2 of [3] implies the following lemma about the convergence of Newton's method for MSPs, i.e., systems without maxima and minima.

Lemma 4. Let f be a feasible quadratic MSP. The sequence (τ^(k))_{k∈N} converges linearly to μf. More precisely, there is a k_f ∈ N such that τ^(k_f + i·(n+1)·2^n) has at least i valid bits of μf for every i ∈ N.

We emphasize that linear convergence is the worst case. In many practical examples, in particular if the matrix I − f'(μf) is invertible, Newton's method converges exponentially. We mean by this that the number of accurate bits of the approximation grows exponentially in the number of iterations. As a first step towards our main result for this section, we use Lemma 4 to show that our approximation method converges linearly whenever f is a max-MSP. In this case we obtain the same convergence speed as for MSPs.

Lemma 5. Let f be a feasible max-MSP. Let M := {σ ∈ Σ | μf^σ = μf}. The set M is non-empty and τ_f^(i) ≥ τ_{f^σ}^(i) for all σ ∈ M and i ∈ N.

Proof. Theorem 1 implies that there exists a ∨-strategy σ ∈ Σ such that μf^σ = μf. Thus M is non-empty. Let σ ∈ M. By induction on k, Lemma 3 implies τ_f^(k) = N_f^k(0) ≥ N_{f^σ}^k(0) = τ_{f^σ}^(k) for every k ∈ N.

Combining Lemma 4 and Lemma 5 we get linear convergence for max-MSPs:

Theorem 2. Let f be a feasible quadratic max-MSP.
The τ-sequence (τ^(k)) for f (see Definition 1) converges linearly to μf. More precisely, there is a k_f ∈ N such that τ^(k_f + i·(n+1)·2^n) has at least i valid bits of μf for every i ∈ N.

A direct consequence of Lemma 5 is that the τ-sequence (τ^(i)) converges exponentially if (τ_{f^σ}^(i)) converges exponentially for some σ ∈ Σ with μf^σ = μf. This is in particular the case if the matrix I − (f^σ)'(μf) is invertible. In order to extend this result to min-max-MSPs we state the following lemma, which enables us to relate the sequence (τ_f^(i)) to the sequences (τ_{f^π}^(i)) where μf^π = μf.

Lemma 6. Let f be a feasible min-max-MSP and let m denote the number of strategies π ∈ Π with μf = μf^π. There is a constant k_f ∈ N such that for all i ∈ N there exists some strategy π ∈ Π with μf = μf^π and τ_{f^π}^(i) ≤ τ_f^(k_f + m·i).

We now present the main result of this section, which states that our approximation method converges at least linearly also in the general case, i.e., for min-max-MSPs.

Theorem 3. Let f be a feasible quadratic min-max-MSP and let m denote the number of strategies π ∈ Π with μf = μf^π. The τ-sequence (τ^(k)) for f (see Definition 1) converges linearly to μf. More precisely, there is a k_f ∈ N such that τ^(k_f + i·m·(n+1)·2^n) has at least i valid bits of μf for every i ∈ N.

The upper bound on the convergence rate provided by Theorem 3 is by the factor m worse than the upper bound obtained for MSPs. Since m is the number of strategies π ∈ Π with μf^π = μf, m is trivially bounded by |Π| but is usually much smaller. The τ-sequence (τ^(i)) converges exponentially whenever (τ_{f^π}^(i)) converges exponentially for every π with μf^π = μf (see [2]). The latter condition is typically satisfied (see the discussion after Theorem 2).

In order to determine the approximant τ^(i+1) = N_f(τ^(i)) from τ^(i), we must compute the least fixpoint of the min-max-MLS L(f, τ^(i)) ∨ τ^(i). This can be done by using the strategy improvement algorithm from [9]. The algorithm iterates over ∨-strategies. For each strategy it solves a linear program or alternatively iterates over ∧-strategies. The number of ∨-strategies used by this algorithm is trivially bounded by the number of ∨-strategies for L(f, τ^(i)) ∨ τ^(i), which is exponential in the number of ∨-expressions occurring in L(f, τ^(i)) ∨ τ^(i). However, we do not know an example for which the algorithm considers more than linearly many strategies.

5 The ν-Method

The τ-method, presented in the previous section, uses strategy iteration over ∨-strategies to compute N_f(y). This could be expensive, as there may be exponentially many ∨-strategies. Therefore, we derive an alternative generalization of Newton's method that in each step picks the currently most promising ∨-strategy directly, without strategy iteration. Consider again a fixed feasible min-max-MSP f whose least fixpoint we want to approximate.
Assume that y is some approximation of μf. Instead of applying N_f to y, as the τ-method does, we now choose a strategy σ ∈ Σ such that f(y) = f^σ(y), and compute N_{f^σ}(y), where N_{f^σ} was defined in Section 4 as N_{f^σ}(y) := μ(L(f^σ, y) ∨ y). In the following we write N_σ instead of N_{f^σ} if f is understood. Assume for a moment that f is a max-MSP and that there is a unique σ ∈ Σ such that f(y) = f^σ(y). The approximant N_σ(y) is the result of applying one iteration of Newton's method, because L(f^σ, y) is not only a linearization of f^σ, but the first-order Taylor approximation of f at y. More precisely, L(f^σ, y)(x) = f(y) + f'(y)·(x − y), and N_σ(y) is obtained by solving x = L(f^σ, y)(x). In this sense, the ν-method is a more direct generalization of Newton's method than the τ-method. Formally, we define the ν-method by a sequence of approximants, the ν-sequence.

Definition 2 (ν-sequence). A sequence (ν^(k))_{k∈N} is called a ν-sequence of a min-max-MSP f if ν^(0) = 0 and for each k there is a strategy σ^(k) ∈ Σ with f(ν^(k)) = f^{σ^(k)}(ν^(k)) and ν^(k+1) = N_{σ^(k)}(ν^(k)). We may drop the subscript f if f is understood.
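A compact sketch of the ν-step, assuming Example 1 reads f(x_1, x_2) = ((1/2)x_2^2 + 1/2 ∧ 3, x_1 ∨ 2): at each approximant pick the ∨-strategy option attaining f(ν^(k)), linearize the resulting min-MSP there, and Kleene-iterate the linear system L(f^σ, ν^(k)) ∨ ν^(k) to its least fixpoint (which suffices for this small example instead of an LP):

```python
# nu-method for f(x1, x2) = (min(0.5*x2**2 + 0.5, 3), max(x1, 2)).
def nu_step(v):
    v1, v2 = v
    # Strategy choice: which option attains the max-component at v?
    use_x1 = v1 >= 2.0
    # Linearize f^sigma at v. Component 1 is a min of linearizations:
    # L(0.5*x2^2 + 0.5, v) = 0.5*v2^2 + 0.5 + v2*(x2 - v2); L(3, v) = 3.
    def L_or_v(x):
        x1, x2 = x
        lin1 = min(0.5 * v2 * v2 + 0.5 + v2 * (x2 - v2), 3.0)
        lin2 = x1 if use_x1 else 2.0
        return (max(lin1, v1), max(lin2, v2))
    x = (0.0, 0.0)
    for _ in range(100):       # least fixpoint of the linear system
        x = L_or_v(x)
    return x

v = (0.0, 0.0)
for _ in range(3):
    v = nu_step(v)
print(v)  # reaches mu f = (3.0, 3.0) after three nu-steps
```

The iterates are (1/2, 2), then (5/2, 2), then (3, 3): each step solves one linear system only, for the single strategy currently attaining the maximum.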

Notice the nondeterminism here if there is more than one ∨-strategy that attains f(ν^(k)). The following proposition is analogous to Proposition 1 and states some basic properties of ν-sequences.

Proposition 2. Let f be a feasible min-max-MSP. The sequence (ν^(k)) is monotonically increasing, bounded from above by μf, and converges to μf. More precisely, we have κ^(k) ≤ ν^(k) ≤ f(ν^(k)) ≤ ν^(k+1) ≤ μf for all k ∈ N.

The goal of this section is again to strengthen Proposition 2 towards quantitative convergence results for ν-sequences. To achieve this goal we again relate the convergence of ν-sequences to the convergence of Newton's method for MSPs. If f is an MSP, Lemma 4 allows to argue about the Newton operator N_f when applied to approximants x ≤ μf. To transfer this result to min-max-MSPs we need an invariant like ν^(k) ≤ μf^{σ^(k)} for ν-sequences. As a first step to such an invariant we further restrict the selection of the σ^(k). Roughly speaking, the strategy in a component i is only changed when it is immediate that component i has not yet reached its fixpoint.

Definition 3 (lazy strategy update). Let x ≤ f^σ(x) for a σ ∈ Σ. We say that σ' ∈ Σ is obtained from x and σ by a lazy strategy update if f(x) = f^{σ'}(x) and σ'(f_i) = σ(f_i) holds for all components i with f_i(x) = x_i. We call a ν-sequence (ν^(k))_{k∈N} lazy if for all k, the strategy σ^(k) is obtained from ν^(k) and σ^(k−1) by a lazy strategy update.

The key property of lazy ν-sequences is the following non-trivial invariant.

Lemma 7. Let (ν^(k))_{k∈N} be a lazy ν-sequence. Then ν^(k) ≤ μf^{σ^(k)π} holds for all k ∈ N and all π ∈ Π*.

The following example shows that lazy strategy updates are essential to Lemma 7, even for max-MSPs.

Example 6. Consider the max-MSP f(x, y) = (1/2 ∨ x, xy). Let σ^(0)(1/2 ∨ x) = 1/2 and σ^(1)(1/2 ∨ x) = x. Then there is a ν-sequence (ν^(k)) with ν^(0) = 0, ν^(1) = N_{σ^(0)}(0) = (1/2, 0), ν^(2) = N_{σ^(1)}(ν^(1)). However, the conclusion of Lemma 7 does not hold, because (1/2, 0) = ν^(1) ≰ μf^{σ^(1)}.
Notice that σ^(1) is not obtained by a lazy strategy update, as f_1(ν^(1)) = ν^(1)_1. Lemma 7 falls short of our subgoal to establish ν^(k) ≤ μf^{σ^(k)}, because Π \ Π* might be non-empty. In fact, we provide an example in [2] showing that ν^(k) ≤ μf^{σ^(k)π} does not always hold for all π ∈ Π, even when f^{σ^(k)π} is feasible. Luckily, Lemma 7 will suffice for our convergence speed result. The left procedure of Algorithm 1 summarizes the lazy ν-method, which works by computing lazy ν-sequences. The following lemma relates the ν-method for min-max-MSPs to Newton's method for MSPs.

Lemma 8. Let f be a feasible min-max-MSP and (ν^(k)) a lazy ν-sequence. Let m be the number of strategy pairs (σ, π) ∈ Σ × Π with μf = μf^{σπ}. Then m ≥ 1 and there is a constant k_f ∈ N such that, for all k ∈ N, there exist strategies σ ∈ Σ, π ∈ Π with μf = μf^{σπ} and ν^(k_f + m·k) ≥ τ_{f^{σπ}}^(k).

Algorithm 1: lazy ν-method

procedure lazy-ν(f, k)
  assumes: f is a min-max-MSP
  returns: ν^(k), σ^(k) obtained by k iterations of the lazy ν-method
  ν ← 0
  σ ← any σ ∈ Σ such that f(0) = f^σ(0)
  for i from 1 to k do
    ν ← N_σ(ν)
    σ ← lazy strategy update from ν and σ
  od
  return ν, σ

procedure N_σ(y)
  assumes: f^σ is a min-MSP, y ∈ R≥0^n
  returns: μ(L(f^σ, y) ∨ y)
  g ← linear min-MSP with g(d) = L(f^σ, y)(y + d) − y
  u ← κ_g^(n)
  g̃ ← (g̃_1, ..., g̃_n) where g̃_i = 0 if u_i = 0, and g̃_i = g_i if u_i > 0
  d ← maximize x_1 + ... + x_n subject to 0 ≤ x ≤ g̃(x), by one LP
  return y + d

In typical cases, i.e., if I − (f^{σπ})'(μf) is invertible for all σ ∈ Σ and π ∈ Π with μf^{σπ} = μf, Newton's method converges exponentially. The following theorem captures the worst case, in which the lazy ν-method still converges linearly.

Theorem 4. Let f be a quadratic feasible min-max-MSP. The lazy ν-sequence (ν^(k))_{k∈N} converges linearly to μf. More precisely, let m be the number of strategy pairs (σ, π) ∈ Σ × Π with μf = μf^{σπ}. Then there is a k_f ∈ N such that ν^(k_f + i·m·(n+1)·2^n) has at least i valid bits of μf for every i ∈ N.

Next we show that N_σ(y) can be computed exactly by solving a single LP. The right procedure of Algorithm 1 accomplishes this by taking advantage of the following proposition, which states that N_σ(y) can be determined by computing the least fixpoint of some linear min-MSP g.

Proposition 3. Let y ≤ f^σ(y) ≤ μf. Then N_σ(y) = y + μg for the linear min-MSP g with g(d) = L(f^σ, y)(y + d) − y.

After having computed the linear min-MSP g, Algorithm 1 determines the 0-components of μg. This can be done by performing n Kleene steps, since (μg)_i = 0 whenever (κ_g^(n))_i = 0. Let g̃ be the linear min-MSP obtained from g by substituting the constant 0 for all components g_i with (μg)_i = 0. The least fixpoint of g̃ can be computed by solving a single LP, as implied by the following lemma. The correctness of Algorithm 1 follows.

Lemma 9. Let g be a linear min-MSP such that g_i = 0 whenever (μg)_i = 0, for all components i. Then μg is the greatest vector x with x ≤ g(x).
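Lemma 9 turns the fixpoint computation into a linear program: maximize the sum of the components subject to x ≤ g(x), with one constraint per min-option. A sketch with SciPy, on a hypothetical linear min-MSP g(x_1, x_2) = ((1/2)x_2 + 1 ∧ 2, (1/3)x_1 + 1) that has no 0-components (so g̃ = g):

```python
# Least fixpoint of the linear min-MSP
#   g(x1, x2) = (min(0.5*x2 + 1, 2), x1/3 + 1)
# via Lemma 9: mu g is the greatest x with x <= g(x).
# Maximize x1 + x2 subject to one constraint per min-option.
from scipy.optimize import linprog

c = [-1.0, -1.0]                  # linprog minimizes, so negate the objective
A_ub = [[1.0, -0.5],              # x1 - 0.5*x2 <= 1
        [1.0,  0.0],              # x1          <= 2
        [-1.0 / 3.0, 1.0]]        # x2 - x1/3   <= 1
b_ub = [1.0, 2.0, 1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)  # mu g = (1.8, 1.6)
```

The optimum is attained where the two active constraints x_1 = (1/2)x_2 + 1 and x_2 = (1/3)x_1 + 1 intersect, i.e., at μg = (1.8, 1.6); Kleene iteration on g converges to the same point from below.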
The following theorem is a direct consequence of Lemma 7 for the case where Π = Π*. It shows the second major advantage of the lazy ν-method, namely, that the strategies σ^(k) are meaningful in terms of games.

Theorem 5. Let Π = Π*. Let (ν^(k))_{k∈N} be a lazy ν-sequence. Then ν^(k) ≤ μf^{σ^(k)} holds for all k ∈ N.

As (ν^(k)) converges to μf, the max-strategy σ^(k) can be considered ε-optimal. In terms of games, Theorem 5 states that the strategy σ^(k) guarantees the max-player an outcome

of at least ν^(k). It is open whether an analogous theorem holds for the τ-method.

Application to the primaries example. We solved the equation system of Section 3 approximatively by performing 5 iterations of the lazy ν-method. Using Theorem 5 we found that Clinton can extinguish a problem (a) individual with a probability of at least 0.49 by concentrating on her program and her "ready from day 1" message. (More than 70 Kleene iterations would be needed to infer that X_a is at least 0.49.) As ν^(5) seems to solve the above equation system quite well, in the sense that f(ν^(5)) − ν^(5) is small, we are pretty sure about Obama's optimal strategy: he should talk about Iraq. As ν^(2)_1 > 0.38 and σ^(2) maps f_1 to the option containing X_a^2, Clinton's team can use Theorem 5 to infer that X_a ≥ 0.38 by showing emotions and using her "ready from day 1" message.

6 Discussion

In order to compare our two methods in terms of convergence speed, assume that f denotes a feasible min-max-MSP. Since N_f(x) ≥ N_{f^σ}(x) (Lemma 3.5), it follows that τ^(i) ≥ ν^(i) holds for all i ∈ N. This means that the τ-method is at least as fast as the ν-method if one counts the number of approximation steps. Next, we construct an example which shows that the number of approximation steps needed by the lazy ν-method can be much larger than the respective number needed by the τ-method. The example is parameterized with an arbitrary k ∈ N and involves the constant 2^{−2(k+1)}; since this constant is represented using O(k) bits, the system is of size linear in k. It can be shown (see [2]) that the lazy ν-method needs at least k steps; more precisely, ν^(k) ≤ (1.5, 1.95). The τ-method needs exactly 2 steps.

We now compare our approaches with the tool PReMo [14]. PReMo employs four different techniques to approximate μf for min-max-MSPs: It uses Newton's method only for MSPs without min or max. In this case both of our methods coincide with Newton's method.
For min-max-MSPs, PReMo uses Kleene iteration, round-robin iteration (called Gauss-Seidel in [14]), and an "optimistic" variant of Kleene which is not guaranteed to converge. In the following we compare our algorithms only with Kleene iteration, as our algorithms are guaranteed to converge and a round-robin step is not faster than n Kleene steps. Our methods improve on Kleene iteration in the sense that κ^(i) ≤ τ^(i), ν^(i) holds for all i ∈ N, and our methods converge linearly, whereas Kleene iteration does not converge linearly in general. For example, consider the MSP g(x) = (1/2)x^2 + 1/2 with μg = 1. Kleene iteration needs exponentially many iterations for j bits [4], whereas Newton's method gives exactly 1 bit per iteration. For the slightly modified MSP g̃(x) = g(x) ∧ 1, which has the same fixpoint, PReMo no longer uses Newton's method, as g̃ contains a minimum. Our algorithms still produce exactly 1 bit per iteration.

In the case of linear min-max systems our methods compute the precise solution and not only an approximation. This applies, for example, to the max-linear system of [14] describing the expected time of termination of a nondeterministic variant of Quicksort. Notice that Kleene iteration does not compute the precise solution (except for trivial instances), even for linear MSPs without min or max.

We implemented our algorithms prototypically in Maple and ran them on the quadratic nonlinear min-max-MSP describing the termination probabilities of a recursive simple stochastic game. This game stems from the example suite of PReMo (rssg2.c), and we used PReMo to produce the equations. Both of our algorithms reached the least fixpoint after 2 iterations. So we could compute the precise µf and optimal strategies for both players, whereas PReMo computes only approximations of µf. 7 Conclusion We have presented the first methods for approximatively computing the least fixpoint of min-max-MSPs which are guaranteed to converge at least linearly. Both of them are generalizations of Newton's method. Whereas the τ-method converges faster in terms of the number of approximation steps, one approximation step of the ν-method is cheaper. Furthermore, we have shown that the ν-method computes ε-optimal strategies for games. Whether such a result can also be established for the τ-method is still open. A direction for future research is to evaluate our methods in practice. In particular, the influence of imprecise computation through floating-point arithmetic should be studied. It would also be desirable to find a bound on the threshold k. Acknowledgement. We thank the anonymous referees for valuable comments. References 1. A. Condon. The complexity of stochastic games. Inf. and Comp., 96(2):203-224, 1992. 2. J. Esparza, T. Gawlitza, S. Kiefer, and H. Seidl. Approximative methods for monotone systems of min-max-polynomial equations. Technical report, Technische Universität München, Institut für Informatik, February 2008. 3. J. Esparza, S. Kiefer, and M. Luttenberger. Convergence thresholds of Newton's method for monotone polynomial equations. In Proceedings of STACS, 2008. 4. K. Etessami and M. Yannakakis. Recursive Markov chains, stochastic grammars, and monotone systems of nonlinear equations. In STACS, 2005. 5. K. Etessami and M. Yannakakis. Recursive Markov decision processes and recursive stochastic games. In Proc. ICALP 2005, volume 3580 of LNCS. Springer, 2005. 6. K. Etessami and M. Yannakakis. Efficient qualitative analysis of classes of recursive Markov decision processes and simple stochastic games. In STACS, 2006. 7. J. Filar and K. Vrieze. Competitive Markov Decision Processes. Springer, 1997. 8. T. Gawlitza and H. Seidl. Precise fixpoint computation through strategy iteration. In European Symposium on Programming (ESOP), volume 4421 of LNCS. Springer, 2007. 9. T. Gawlitza and H. Seidl. Precise relational invariants through strategy iteration. In J. Duparc and T. A. Henzinger, editors, CSL, volume 4646 of LNCS. Springer, 2007. 10. T. Harris. The Theory of Branching Processes. Springer, 1963. 11. S. Kiefer, M. Luttenberger, and J. Esparza. On the convergence of Newton's method for monotone systems of polynomial equations. In STOC. ACM, 2007. 12. A. Neyman and S. Sorin. Stochastic Games and Applications. Kluwer Academic Press, 2003. 13. J. Ortega and W. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, 1970. 14. D. Wojtczak and K. Etessami. PReMo: An analyzer for probabilistic recursive models. In Proc. of TACAS, volume 4424 of LNCS. Springer, 2007.


More information

Undergraduate Notes in Mathematics. Arkansas Tech University Department of Mathematics

Undergraduate Notes in Mathematics. Arkansas Tech University Department of Mathematics Undergraduate Notes in Mathematics Arkansas Tech University Department of Mathematics An Introductory Single Variable Real Analysis: A Learning Approach through Problem Solving Marcel B. Finan c All Rights

More information

1 Lecture: Integration of rational functions by decomposition

1 Lecture: Integration of rational functions by decomposition Lecture: Integration of rational functions by decomposition into partial fractions Recognize and integrate basic rational functions, except when the denominator is a power of an irreducible quadratic.

More information

Constrained optimization.

Constrained optimization. ams/econ 11b supplementary notes ucsc Constrained optimization. c 2010, Yonatan Katznelson 1. Constraints In many of the optimization problems that arise in economics, there are restrictions on the values

More information

Properties of sequences Since a sequence is a special kind of function it has analogous properties to functions:

Properties of sequences Since a sequence is a special kind of function it has analogous properties to functions: Sequences and Series A sequence is a special kind of function whose domain is N - the set of natural numbers. The range of a sequence is the collection of terms that make up the sequence. Just as the word

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES Contents 1. Random variables and measurable functions 2. Cumulative distribution functions 3. Discrete

More information

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions.

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions. Chapter 1 Vocabulary identity - A statement that equates two equivalent expressions. verbal model- A word equation that represents a real-life problem. algebraic expression - An expression with variables.

More information

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called

More information

1 Review of Newton Polynomials

1 Review of Newton Polynomials cs: introduction to numerical analysis 0/0/0 Lecture 8: Polynomial Interpolation: Using Newton Polynomials and Error Analysis Instructor: Professor Amos Ron Scribes: Giordano Fusco, Mark Cowlishaw, Nathanael

More information

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2015

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2015 ECON 459 Game Theory Lecture Notes Auctions Luca Anderlini Spring 2015 These notes have been used before. If you can still spot any errors or have any suggestions for improvement, please let me know. 1

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information