EE364a Homework 3 solutions
EE364a, Winter. Prof. S. Boyd

EE364a Homework 3 solutions

3.42 Approximation width. Let $f_0, \ldots, f_n : \mathbf{R} \to \mathbf{R}$ be given continuous functions. We consider the problem of approximating $f_0$ as a linear combination of $f_1, \ldots, f_n$. For $x \in \mathbf{R}^n$, we say that $f = x_1 f_1 + \cdots + x_n f_n$ approximates $f_0$ with tolerance $\epsilon > 0$ over the interval $[0,T]$ if $|f(t) - f_0(t)| \le \epsilon$ for $0 \le t \le T$. Now we choose a fixed tolerance $\epsilon > 0$ and define the approximation width as the largest $T$ such that $f$ approximates $f_0$ over the interval $[0,T]$:

$$W(x) = \sup\{T \mid |x_1 f_1(t) + \cdots + x_n f_n(t) - f_0(t)| \le \epsilon \ \text{for} \ 0 \le t \le T\}.$$

Show that $W$ is quasiconcave.

Solution. To show that $W$ is quasiconcave we show that the superlevel sets $\{x \mid W(x) \ge \alpha\}$ are convex for all $\alpha$. We have $W(x) \ge \alpha$ if and only if

$$-\epsilon \le x_1 f_1(t) + \cdots + x_n f_n(t) - f_0(t) \le \epsilon \quad \text{for all } t \in [0, \alpha).$$

Therefore the set $\{x \mid W(x) \ge \alpha\}$ is an intersection of infinitely many halfspaces (two for each $t$), hence a convex set.
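As a quick numerical illustration of quasiconcavity, here is a small Matlab sketch on a hypothetical instance (not part of the problem data): $f_0(t) = e^t$, basis functions $f_1(t) = 1$, $f_2(t) = t$, tolerance $\epsilon = 0.2$. It evaluates $W$ on a grid and checks $W(\theta x + (1-\theta) y) \ge \min\{W(x), W(y)\}$ along a segment.

epsilon = 0.2;
t = linspace(0, 3, 3001);
% W(x): largest T with |x(1) + x(2)*t - exp(t)| <= epsilon on [0, T] (grid estimate)
W = @(x) max([0, t(cumprod(double(abs(x(1) + x(2)*t - exp(t)) <= epsilon)) > 0)]);
x = [1; 1]; y = [0.8; 1.4];
theta = linspace(0, 1, 11);
Wseg = arrayfun(@(th) W(th*x + (1-th)*y), theta);
all(Wseg >= min(W(x), W(y)) - 1e-9)   % should be true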
3.54 Log-concavity of Gaussian cumulative distribution function. The cumulative distribution function of a Gaussian random variable,

$$f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2}\,dt,$$

is log-concave. This follows from the general result that the convolution of two log-concave functions is log-concave. In this problem we guide you through a simple self-contained proof that $f$ is log-concave. Recall that $f$ is log-concave if and only if $f''(x) f(x) \le f'(x)^2$ for all $x$.

(a) Verify that $f''(x) f(x) \le f'(x)^2$ for $x \ge 0$. That leaves us the hard part, which is to show the inequality for $x < 0$.

(b) Verify that for any $t$ and $x$ we have $t^2/2 \ge -x^2/2 + xt$.

(c) Using part (b) show that $e^{-t^2/2} \le e^{x^2/2 - xt}$. Conclude that

$$\int_{-\infty}^x e^{-t^2/2}\,dt \le e^{x^2/2} \int_{-\infty}^x e^{-xt}\,dt.$$

(d) Use part (c) to verify that $f''(x) f(x) \le f'(x)^2$ for $x < 0$.

Solution. The derivatives of $f$ are

$$f'(x) = e^{-x^2/2}/\sqrt{2\pi}, \qquad f''(x) = -x e^{-x^2/2}/\sqrt{2\pi}.$$

(a) $f''(x) \le 0$ for $x \ge 0$, so $f''(x) f(x) \le 0 \le f'(x)^2$.

(b) Since $t^2/2$ is convex we have

$$t^2/2 \ge x^2/2 + x(t - x) = xt - x^2/2.$$

This is the general inequality $g(t) \ge g(x) + g'(x)(t - x)$, which holds for any differentiable convex function, applied to $g(t) = t^2/2$. Another (easier?) way to establish $t^2/2 \ge -x^2/2 + xt$ is to note that $t^2/2 + x^2/2 - xt = (1/2)(x - t)^2 \ge 0$. Now just move $x^2/2 - xt$ to the other side.

(c) Take exponentials and integrate.

(d) The basic inequality $f''(x) f(x) \le f'(x)^2$ reduces to

$$-x e^{-x^2/2} \int_{-\infty}^x e^{-t^2/2}\,dt \le e^{-x^2},$$

i.e., for $x < 0$,

$$\int_{-\infty}^x e^{-t^2/2}\,dt \le -\frac{e^{-x^2/2}}{x}.$$

This follows from part (c) because

$$\int_{-\infty}^x e^{-t^2/2}\,dt \le e^{x^2/2} \int_{-\infty}^x e^{-xt}\,dt = e^{x^2/2}\left(-\frac{e^{-x^2}}{x}\right) = -\frac{e^{-x^2/2}}{x}.$$

Show that the function $f(X) = X^{-1}$ is matrix convex on $\mathbf{S}^n_{++}$.

Solution. We must show that for arbitrary $v \in \mathbf{R}^n$, the function

$$g(X) = v^T X^{-1} v$$

is convex in $X$ on $\mathbf{S}^n_{++}$. This follows from example 3.4 (the matrix fractional function).
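A quick numerical spot-check of midpoint convexity of $g$, on a hypothetical random instance:

n = 4;
X = randn(n); X = X*X' + eye(n);   % random positive definite matrices
Y = randn(n); Y = Y*Y' + eye(n);
v = randn(n, 1);
g = @(Z) v'*(Z\v);                 % g(Z) = v'*inv(Z)*v
[g((X+Y)/2), (g(X)+g(Y))/2]        % first entry should not exceed the second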
4.1 Consider the optimization problem

minimize   $f_0(x_1, x_2)$
subject to $2x_1 + x_2 \ge 1$
           $x_1 + 3x_2 \ge 1$
           $x_1 \ge 0, \quad x_2 \ge 0$.

Make a sketch of the feasible set. For each of the following objective functions, give the optimal set and the optimal value.

(a) $f_0(x_1, x_2) = x_1 + x_2$.
(b) $f_0(x_1, x_2) = -x_1 - x_2$.
(c) $f_0(x_1, x_2) = x_1$.
(d) $f_0(x_1, x_2) = \max\{x_1, x_2\}$.
(e) $f_0(x_1, x_2) = x_1^2 + 9x_2^2$.

Solution. The feasible set is shown in the figure.

[Figure: the feasible set in the $(x_1, x_2)$-plane, with corner points $(0,1)$, $(2/5, 1/5)$, and $(1,0)$.]

(a) $x^\star = (2/5, 1/5)$.

(b) Unbounded below.

(c) $X_{\rm opt} = \{(0, x_2) \mid x_2 \ge 1\}$.

(d) $x^\star = (1/3, 1/3)$.

(e) $x^\star = (1/2, 1/6)$. This is optimal because it satisfies $2x_1 + x_2 = 7/6 > 1$, $x_1 + 3x_2 = 1$, and $\nabla f_0(x^\star) = (1, 3)$ is perpendicular to the line $x_1 + 3x_2 = 1$.
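Part (e) is easy to confirm numerically; a minimal CVX sketch (assuming CVX is installed):

cvx_begin
    variables x1 x2
    minimize( square(x1) + 9*square(x2) )
    subject to
        2*x1 + x2 >= 1;
        x1 + 3*x2 >= 1;
        x1 >= 0;
        x2 >= 0;
cvx_end
[x1, x2]   % should be close to (1/2, 1/6)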
[P. Parrilo] Symmetries and convex optimization. Suppose $G = \{Q_1, \ldots, Q_k\} \subseteq \mathbf{R}^{n \times n}$ is a group, i.e., closed under products and inverse. We say that the function $f : \mathbf{R}^n \to \mathbf{R}$ is $G$-invariant, or symmetric with respect to $G$, if $f(Q_i x) = f(x)$ holds for all $x$ and $i = 1, \ldots, k$. We define $\overline{x} = (1/k) \sum_{i=1}^k Q_i x$, which is the average of $x$ over its $G$-orbit. We define the fixed subspace of $G$ as

$$F = \{x \mid Q_i x = x, \ i = 1, \ldots, k\}.$$

(a) Show that for any $x \in \mathbf{R}^n$, we have $\overline{x} \in F$.

(b) Show that if $f : \mathbf{R}^n \to \mathbf{R}$ is convex and $G$-invariant, then $f(\overline{x}) \le f(x)$.

(c) We say the optimization problem

minimize   $f_0(x)$
subject to $f_i(x) \le 0, \quad i = 1, \ldots, m$

is $G$-invariant if the objective $f_0$ is $G$-invariant, and the feasible set is $G$-invariant, which means

$$f_1(x) \le 0, \ \ldots, \ f_m(x) \le 0 \implies f_1(Q_i x) \le 0, \ \ldots, \ f_m(Q_i x) \le 0,$$

for $i = 1, \ldots, k$. Show that if the problem is convex and $G$-invariant, and there exists an optimal point, then there exists an optimal point in $F$. In other words, we can adjoin the equality constraints $x \in F$ to the problem, without loss of generality.

(d) As an example, suppose $f$ is convex and symmetric, i.e., $f(Px) = f(x)$ for every permutation $P$. Show that if $f$ has a minimizer, then it has a minimizer of the form $\alpha \mathbf{1}$. (This means to minimize $f$ over $x \in \mathbf{R}^n$, we can just as well minimize $f(t\mathbf{1})$ over $t \in \mathbf{R}$.)

Solution.

(a) We first observe that when you multiply each $Q_i$ by some fixed $Q_j$, you get a permutation of the $Q_i$'s:

$$Q_j Q_i = Q_{\sigma(i)}, \quad i = 1, \ldots, k,$$

where $\sigma$ is a permutation. This is a basic result in group theory, but it's easy enough for us to show it. First we note that by closedness, each $Q_j Q_i$ is equal to some $Q_s$. Now suppose that $Q_j Q_i = Q_j Q_l = Q_s$. Multiplying by $Q_j^{-1}$ on the left, we see that $Q_i = Q_l$. Thus the mapping from the index $i$ to the index $s$ is one-to-one, i.e., a permutation. Now we have

$$Q_j \overline{x} = (1/k) \sum_{i=1}^k Q_j Q_i x = (1/k) \sum_{i=1}^k Q_{\sigma(i)} x = (1/k) \sum_{i=1}^k Q_i x = \overline{x}.$$

This holds for all $j$, so we have $\overline{x} \in F$.

(b) Using convexity and invariance of $f$,

$$f(\overline{x}) \le (1/k) \sum_{i=1}^k f(Q_i x) = (1/k) \sum_{i=1}^k f(x) = f(x).$$
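For intuition, here is a small Matlab check of parts (a) and (b), with $G$ taken to be all $3 \times 3$ permutation matrices and $f(x) = \|x\|_1$ as one convex, symmetric example (both choices are just illustrative):

P = perms(1:3);                          % all 6 permutations of {1,2,3}
k = size(P, 1);
I = eye(3);
x = randn(3, 1);
xbar = zeros(3, 1);
for i = 1:k
    xbar = xbar + I(P(i,:), :) * x / k;  % average of x over its orbit
end
norm(xbar - mean(x)*ones(3,1))           % xbar lies in F: all entries equal
[norm(xbar, 1), norm(x, 1)]              % f(xbar) <= f(x)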
(c) Suppose $x^\star$ is an optimal solution. Then $\overline{x}^\star$ is feasible, with

$$f_0(\overline{x}^\star) = f_0\Big((1/k) \sum_{i=1}^k Q_i x^\star\Big) \le (1/k) \sum_{i=1}^k f_0(Q_i x^\star) = f_0(x^\star).$$

Therefore $\overline{x}^\star$ is also optimal, and by part (a) it lies in $F$.

(d) Suppose $x^\star$ is a minimizer of $f$. Let $\overline{x} = (1/n!) \sum_P P x^\star$, where the sum is over all permutation matrices. Since $\overline{x}$ is invariant under any permutation, we conclude that $\overline{x} = \alpha \mathbf{1}$ for some $\alpha \in \mathbf{R}$. By Jensen's inequality we have

$$f(\overline{x}) \le (1/n!) \sum_P f(P x^\star) = f(x^\star),$$

which shows that $\overline{x}$ is also a minimizer.

4.8 Some simple LPs. Give an explicit solution of each of the following LPs.

(a) Minimizing a linear function over an affine set.

minimize   $c^T x$
subject to $Ax = b$.

Solution. We distinguish three possibilities.

- The problem is infeasible ($b \notin \mathcal{R}(A)$). The optimal value is $+\infty$.

- The problem is feasible, and $c$ is orthogonal to the nullspace of $A$. We can decompose $c$ as

$$c = A^T \lambda + \hat{c}, \qquad A\hat{c} = 0.$$

($\hat{c}$ is the component in the nullspace of $A$; $A^T \lambda$ is orthogonal to the nullspace.) If $\hat{c} = 0$, then on the feasible set the objective function reduces to a constant:

$$c^T x = \lambda^T A x + \hat{c}^T x = \lambda^T b.$$

The optimal value is $\lambda^T b$. All feasible solutions are optimal.

- The problem is feasible, and $c$ is not in the range of $A^T$ ($\hat{c} \neq 0$). The problem is unbounded ($p^\star = -\infty$). To verify this, note that $x = x_0 - t\hat{c}$ is feasible for all $t$; as $t$ goes to infinity, the objective value decreases unboundedly.

In summary,

$$p^\star = \begin{cases} +\infty & b \notin \mathcal{R}(A) \\ \lambda^T b & c = A^T \lambda \ \text{for some } \lambda \\ -\infty & \text{otherwise.} \end{cases}$$
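The bounded case is easy to check numerically; a small CVX sketch with randomly generated illustrative data, constructed so that $b \in \mathcal{R}(A)$ and $c \in \mathcal{R}(A^T)$:

m = 5; n = 8;
A = randn(m, n);
b = A * randn(n, 1);        % feasible by construction
lambda = randn(m, 1);
c = A' * lambda;            % c in the range of A', so p* = lambda'*b
cvx_begin
    variable x(n)
    minimize( c' * x )
    subject to
        A * x == b;
cvx_end
[cvx_optval, lambda' * b]   % should agree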
(b) Minimizing a linear function over a halfspace.

minimize   $c^T x$
subject to $a^T x \le b$,

where $a \neq 0$.

Solution. This problem is always feasible. The vector $c$ can be decomposed into a component parallel to $a$ and a component orthogonal to $a$:

$$c = \lambda a + \hat{c},$$

with $a^T \hat{c} = 0$.

- If $\lambda > 0$, the problem is unbounded below. Choose $x = -ta$, and let $t$ go to infinity:

$$c^T x = -t c^T a = -t \lambda\, a^T a \to -\infty, \qquad a^T x - b = -t\, a^T a - b \le 0 \ \text{for large } t,$$

so $x$ is feasible for large $t$. Intuitively, by going very far in the direction $-a$, we find feasible points with arbitrarily negative objective values.

- If $\hat{c} \neq 0$, the problem is unbounded below. Choose $x = (b/a^T a)\, a - t\hat{c}$, which is feasible for all $t$ (it satisfies $a^T x = b$), and let $t$ go to infinity.

- If $c = \lambda a$ for some $\lambda \le 0$, the optimal value is attained at any $x$ with $a^T x = b$, and equals $\lambda b$.

In summary, the optimal value is

$$p^\star = \begin{cases} \lambda b & c = \lambda a \ \text{for some } \lambda \le 0 \\ -\infty & \text{otherwise.} \end{cases}$$

(c) Minimizing a linear function over a rectangle.

minimize   $c^T x$
subject to $l \preceq x \preceq u$,

where $l$ and $u$ satisfy $l \preceq u$.

Solution. The objective and the constraints are separable: the objective is a sum of terms $c_i x_i$, each depending on one variable only, and each constraint depends on only one variable. We can therefore solve the problem by minimizing over each component of $x$ independently. The optimal $x_i^\star$ minimizes $c_i x_i$ subject to the constraint $l_i \le x_i \le u_i$. If $c_i > 0$, then $x_i^\star = l_i$; if $c_i < 0$, then $x_i^\star = u_i$; if $c_i = 0$, then any $x_i$ in the interval $[l_i, u_i]$ is optimal. Therefore, the optimal value of the problem is

$$p^\star = l^T c^+ - u^T c^-,$$

where $c_i^+ = \max\{c_i, 0\}$ and $c_i^- = \max\{-c_i, 0\}$.
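This formula can be checked against CVX on a random instance (illustrative data only):

n = 10;
c = randn(n, 1);
l = -rand(n, 1); u = rand(n, 1);            % l <= u
pstar = l' * max(c, 0) - u' * max(-c, 0);   % the formula above
cvx_begin
    variable x(n)
    minimize( c' * x )
    subject to
        l <= x; x <= u;
cvx_end
[pstar, cvx_optval]   % should agree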
(d) Minimizing a linear function over the probability simplex.

minimize   $c^T x$
subject to $\mathbf{1}^T x = 1, \quad x \succeq 0$.

What happens if the equality constraint is replaced by an inequality $\mathbf{1}^T x \le 1$?

We can interpret this LP as a simple portfolio optimization problem. The vector $x$ represents the allocation of our total budget over different assets, with $x_i$ the fraction invested in asset $i$. The return of each investment is fixed and given by $-c_i$, so our total return (which we want to maximize) is $-c^T x$. If we replace the budget constraint $\mathbf{1}^T x = 1$ with an inequality $\mathbf{1}^T x \le 1$, we have the option of not investing a portion of the total budget.

Solution. Suppose the components of $c$ are sorted in increasing order with

$$c_1 = c_2 = \cdots = c_k < c_{k+1} \le \cdots \le c_n.$$

We have

$$c^T x \ge c_1 (\mathbf{1}^T x) = c_{\min}$$

for all feasible $x$, with equality if and only if

$$x_1 + \cdots + x_k = 1, \quad x_1 \ge 0, \ldots, x_k \ge 0, \quad x_{k+1} = \cdots = x_n = 0.$$

We conclude that the optimal value is $p^\star = c_1 = c_{\min}$. In the investment interpretation this choice is quite obvious. If the returns are fixed and known, we invest our total budget in the investment with the highest return.

If we replace the equality with an inequality, the optimal value is equal to $p^\star = \min\{0, c_{\min}\}$. (If $c_{\min} \le 0$, we make the same choice for $x$ as above. Otherwise, we choose $x = 0$.)

(e) Minimizing a linear function over a unit box with a total budget constraint.

minimize   $c^T x$
subject to $\mathbf{1}^T x = \alpha, \quad 0 \preceq x \preceq \mathbf{1}$,

where $\alpha$ is an integer between $0$ and $n$. What happens if $\alpha$ is not an integer (but satisfies $0 \le \alpha \le n$)? What if we change the equality to an inequality $\mathbf{1}^T x \le \alpha$?

Solution. We first consider the case of integer $\alpha$. Suppose

$$c_1 \le \cdots \le c_{i-1} < c_i = \cdots = c_\alpha = \cdots = c_k < c_{k+1} \le \cdots \le c_n.$$

The optimal value is

$$c_1 + c_2 + \cdots + c_\alpha,$$

i.e., the sum of the smallest $\alpha$ elements of $c$. $x$ is optimal if and only if

$$x_1 = \cdots = x_{i-1} = 1, \quad x_i + \cdots + x_k = \alpha - i + 1, \quad x_{k+1} = \cdots = x_n = 0.$$

If $\alpha$ is not an integer, the optimal value is

$$p^\star = c_1 + c_2 + \cdots + c_{\lfloor \alpha \rfloor} + c_{\lfloor \alpha \rfloor + 1}(\alpha - \lfloor \alpha \rfloor).$$

In the case of an inequality constraint $\mathbf{1}^T x \le \alpha$, with $\alpha$ an integer between $0$ and $n$, the optimal value is the sum of the $\alpha$ smallest nonpositive coefficients of $c$.
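Again this is easy to confirm numerically; a CVX sketch for the integer-$\alpha$ equality case, on random illustrative data:

n = 8; alpha = 3;
c = randn(n, 1);
cs = sort(c);
pstar = sum(cs(1:alpha));   % sum of the alpha smallest entries
cvx_begin
    variable x(n)
    minimize( c' * x )
    subject to
        sum(x) == alpha;
        x >= 0; x <= 1;
cvx_end
[pstar, cvx_optval]   % should agree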
4.17 Optimal activity levels. We consider the selection of $n$ nonnegative activity levels, denoted $x_1, \ldots, x_n$. These activities consume $m$ resources, which are limited. Activity $j$ consumes $A_{ij} x_j$ of resource $i$, where $A_{ij}$ are given. The total resource consumption is additive, so the total of resource $i$ consumed is $c_i = \sum_{j=1}^n A_{ij} x_j$. (Ordinarily we have $A_{ij} \ge 0$, i.e., activity $j$ consumes resource $i$. But we allow the possibility that $A_{ij} < 0$, which means that activity $j$ actually generates resource $i$ as a by-product.) Each resource consumption is limited: we must have $c_i \le c_i^{\max}$, where $c_i^{\max}$ are given. Each activity generates revenue, which is a piecewise-linear concave function of the activity level:

$$r_j(x_j) = \begin{cases} p_j x_j & 0 \le x_j \le q_j \\ p_j q_j + p_j^{\rm disc}(x_j - q_j) & x_j \ge q_j. \end{cases}$$

Here $p_j > 0$ is the basic price, $q_j > 0$ is the quantity discount level, and $p_j^{\rm disc}$ is the quantity discount price, for (the product of) activity $j$. (We have $0 < p_j^{\rm disc} < p_j$.) The total revenue is the sum of the revenues associated with each activity, i.e., $\sum_{j=1}^n r_j(x_j)$. The goal is to choose activity levels that maximize the total revenue while respecting the resource limits. Show how to formulate this problem as an LP.

Solution. The basic problem can be expressed as

maximize   $\sum_{j=1}^n r_j(x_j)$
subject to $x \succeq 0$
           $Ax \preceq c^{\max}$.

This is a convex optimization problem since the objective is concave and the constraints are a set of linear inequalities. To transform it to an equivalent LP, we first express the revenue functions as

$$r_j(x_j) = \min\{p_j x_j, \ p_j q_j + p_j^{\rm disc}(x_j - q_j)\},$$

which holds since $r_j$ is concave. It follows that $r_j(x_j) \ge u_j$ if and only if

$$p_j x_j \ge u_j, \qquad p_j q_j + p_j^{\rm disc}(x_j - q_j) \ge u_j.$$
We can form an LP as

maximize   $\mathbf{1}^T u$
subject to $x \succeq 0$
           $Ax \preceq c^{\max}$
           $p_j x_j \ge u_j, \quad p_j q_j + p_j^{\rm disc}(x_j - q_j) \ge u_j, \quad j = 1, \ldots, n$,

with variables $x$ and $u$. To show that this LP is equivalent to the original problem, let us fix $x$. The last set of constraints in the LP ensures that $u_j \le r_j(x_j)$, so we conclude that for every feasible $x$, $u$ in the LP, the LP objective is less than or equal to the total revenue. On the other hand, we can always take $u_j = r_j(x_j)$, in which case the two objectives are equal.
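This LP can be written directly in CVX; a minimal sketch, assuming the problem data A (m-by-n), cmax (length m), and p, pdisc, q (length-n column vectors) are already defined. (The additional exercise below instead passes the concave revenue functions to CVX and lets it do the reformulation.)

cvx_begin
    variables x(n) u(n)
    maximize( sum(u) )
    subject to
        x >= 0;
        A * x <= cmax;
        p .* x >= u;
        p .* q + pdisc .* (x - q) >= u;
cvx_end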
Solutions to additional exercises

1. Optimal activity levels. Solve the optimal activity level problem described in exercise 4.17 in Convex Optimization, for the instance with problem data

$$A = \begin{bmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 3 & 1 \\ 0 & 3 & 1 & 1 \\ 2 & 1 & 2 & 5 \\ 1 & 0 & 3 & 2 \end{bmatrix}, \quad c^{\max} = \begin{bmatrix} 100 \\ 100 \\ 100 \\ 100 \\ 100 \end{bmatrix}, \quad p = \begin{bmatrix} 3 \\ 2 \\ 7 \\ 6 \end{bmatrix}, \quad p^{\rm disc} = \begin{bmatrix} 2 \\ 1 \\ 4 \\ 2 \end{bmatrix}, \quad q = \begin{bmatrix} 4 \\ 10 \\ 5 \\ 10 \end{bmatrix}.$$

You can do this by forming the LP you found in your solution of exercise 4.17, or more directly, using cvx. Give the optimal activity levels, the revenue generated by each one, and the total revenue generated by the optimal solution. Also, give the average price per unit for each activity level, i.e., the ratio of the revenue associated with an activity, to the activity level. (These numbers should be between the basic and discounted prices for each activity.) Give a very brief story explaining, or at least commenting on, the solution you find.

Solution. The following Matlab/CVX code solves the problem. (Here we write the problem in a form close to its original statement, and let CVX do the work of reformulating it as an LP!)

A = [1 2 0 1; 0 0 3 1; 0 3 1 1; 2 1 2 5; 1 0 3 2];
cmax = [100; 100; 100; 100; 100];
p = [3; 2; 7; 6];
pdisc = [2; 1; 4; 2];
q = [4; 10; 5; 10];
cvx_begin
    variable x(4)
    maximize( sum(min(p.*x, p.*q + pdisc.*(x-q))) )
    subject to
        x >= 0;
        A*x <= cmax;
cvx_end
x
r = min(p.*x, p.*q + pdisc.*(x-q))
totr = sum(r)
avgprice = r./x
The result of the code is

x = (4.00, 22.50, 31.00, 1.50)
r = (12.00, 32.50, 139.00, 9.00)
totr = 192.50
avgprice = (3.00, 1.44, 4.48, 6.00)

We notice that the 3rd activity level is the highest, and it is also the one with the highest basic price. Since it also has a high discounted price, its activity level is above the discount quantity level, and it produces the highest contribution to the total revenue. The 4th activity has a discounted price which is substantially lower than the basic price, and its activity level is therefore lower than the discount quantity level. Moreover, it requires a lot of resources, and therefore its activity level is low.
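As a sanity check of the claim that the average prices lie between the discounted and basic prices, one extra line suffices (with a small tolerance for solver inaccuracy):

all(avgprice >= pdisc - 1e-6 & avgprice <= p + 1e-6)   % should return 1 (true)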
2. Reformulating constraints in cvx. Each of the following cvx code fragments describes a convex constraint on the scalar variables x, y, and z, but violates the cvx rule set, and so is invalid. Briefly explain why each fragment is invalid. Then, rewrite each one in an equivalent form that conforms to the cvx rule set. In your reformulations, you can use linear equality and inequality constraints, and inequalities constructed using cvx functions. You can also introduce additional variables, or use LMIs. Be sure to explain (briefly) why your reformulation is equivalent to the original constraint, if it is not obvious. Check your reformulations by creating a small problem that includes these constraints, and solving it using cvx. Your test problem doesn't have to be feasible; it's enough to verify that cvx processes your constraints without error.

Remark. This looks like a problem about how to use cvx software, or tricks for using cvx. But it really checks whether you understand the various composition rules, convex analysis, and constraint reformulation rules.

(a) norm( [ x + 2*y, x - y ] ) == 0
(b) square( square( x + y ) ) <= x - y
(c) 1/x + 1/y <= 1; x >= 0; y >= 0
(d) norm([ max( x, 1 ), max( y, 2 ) ]) <= 3*x + y
(e) x*y >= 1; x >= 0; y >= 0
(f) ( x + y )^2 / sqrt( y ) <= x - y + 5
(g) x^3 + y^3 <= 1; x >= 0; y >= 0
(h) x + z <= 1 + sqrt( x*y - z^2 ); x >= 0; y >= 0

Solution.

(a) The lefthand side is correctly identified as convex, but equality constraints are only valid with affine left and right hand sides. Since the norm of a vector is zero if and only if the vector is zero, we can express the constraint as x+2*y == 0; x-y == 0, or simply x == 0; y == 0.

(b) The problem is that square() can only accept affine arguments, because it is convex, but not increasing. To correct this use square_pos() instead:

square_pos( square( x + y ) ) <= x - y

We can also reformulate this constraint by introducing an additional variable:

variable t
square( x + y ) <= t
square( t ) <= x - y

Note that, in general, decomposing the objective by introducing new variables doesn't need to work. It works in this case because the outer square function is convex and monotonic over $\mathbf{R}_+$. Alternatively, we can rewrite the constraint as

( x + y )^4 <= x - y
(c) 1/x isn't convex, unless you restrict the domain to $\mathbf{R}_{++}$. We can write this one as inv_pos(x) + inv_pos(y) <= 1. The inv_pos function has domain $\mathbf{R}_{++}$, so the constraints x > 0, y > 0 are (implicitly) included.

(d) The problem is that norm() can only accept affine arguments, since it is convex but not increasing. One way to correct this is to introduce new variables u and v:

norm( [ u, v ] ) <= 3*x + y
max( x, 1 ) <= u
max( y, 2 ) <= v

Decomposing the objective by introducing new variables works here because norm is convex and monotonic over $\mathbf{R}^2_+$, and in particular over $(1, \infty) \times (2, \infty)$.

(e) xy isn't concave, so this isn't going to work as stated. But we can express the constraint as x >= inv_pos(y). (You can switch around x and y here.) Another solution is to write the constraint as geomean([x,y]) >= 1. We can also give an LMI representation:

[ x 1; 1 y ] == semidefinite(2)

(f) This fails when we attempt to divide a convex function by a concave one. We can write this as

quad_over_lin( x + y, sqrt(y) ) <= x - y + 5

This works because quad_over_lin is monotone decreasing in the second argument, so it can accept a concave function there, and sqrt is concave.

(g) The function $x^3 + y^3$ is convex for $x \ge 0$, $y \ge 0$. But $x^3$ isn't convex for $x < 0$, so cvx is going to reject this statement. One way to rewrite this constraint is

quad_pos_over_lin( square(x), x ) + quad_pos_over_lin( square(y), y ) <= 1

This works because quad_pos_over_lin is convex and increasing in its first argument, hence accepts a convex function in its first argument. (The function quad_over_lin, however, is not increasing in its first argument, and so won't work.) Alternatively, and more simply, we can rewrite the constraint as

pow_pos(x,3) + pow_pos(y,3) <= 1
(h) The problem here is that xy isn't concave, which causes cvx to reject the statement. To correct this, notice that

$$xy - z^2 = y(x - z^2/y),$$

so we can reformulate the constraint as

x + z <= 1 + geomean([ x - quad_over_lin(z,y), y ])

This works, since geomean is concave and nondecreasing in each argument. It therefore accepts a concave function in its first argument.

We can check our reformulations by writing the following feasibility problem in cvx (which is obviously infeasible):

cvx_begin
    variables x y u v z
    x == 0; y == 0;
    ( x + y )^4 <= x - y;
    inv_pos(x) + inv_pos(y) <= 1;
    norm( [ u ; v ] ) <= 3*x + y;
    max( x, 1 ) <= u;
    max( y, 2 ) <= v;
    x >= inv_pos(y); x >= 0; y >= 0;
    quad_over_lin(x + y, sqrt(y)) <= x - y + 5;
    pow_pos(x,3) + pow_pos(y,3) <= 1;
    x + z <= 1 + geomean([x - quad_over_lin(z,y), y]);
cvx_end

3. The illumination problem. This exercise concerns the illumination problem described in lecture 1 (pages 9-11). We'll take $I_{\rm des} = 1$ and $p_{\max} = 1$, so the problem is

minimize   $f_0(p) = \max_{k=1,\ldots,n} |\log(a_k^T p)|$
subject to $0 \le p_j \le 1, \quad j = 1, \ldots, m$,    (1)

with variable $p \in \mathbf{R}^m$. You will compute several approximate solutions, and compare the results to the exact solution, for a specific problem instance. As mentioned in the lecture, the problem is equivalent to

minimize   $\max_{k=1,\ldots,n} h(a_k^T p)$
subject to $0 \le p_j \le 1, \quad j = 1, \ldots, m$,    (2)

where $h(u) = \max\{u, 1/u\}$ for $u > 0$. The function $h$, shown in the figure below, is nonlinear, nondifferentiable, and convex. To see the equivalence between (1) and (2), we note that

$$f_0(p) = \max_{k=1,\ldots,n} |\log(a_k^T p)| = \max_{k=1,\ldots,n} \max\{\log(a_k^T p), \ \log(1/a_k^T p)\} = \log \max_{k=1,\ldots,n} \max\{a_k^T p, \ 1/a_k^T p\} = \log \max_{k=1,\ldots,n} h(a_k^T p),$$

and since the logarithm is a monotonically increasing function, minimizing $f_0$ is equivalent to minimizing $\max_{k=1,\ldots,n} h(a_k^T p)$.

[Figure: plot of $h(u) = \max\{u, 1/u\}$ versus $u$.]

The problem instance. The specific problem data are for the geometry shown below, using the formula $a_{kj} = r_{kj}^{-2} \max\{\cos\theta_{kj}, 0\}$ from the lecture. There are 10 lamps ($m = 10$) and 20 patches ($n = 20$). We take $I_{\rm des} = 1$ and $p_{\max} = 1$. The problem data are given in the file illum_data.m on the course website. Running this script will construct the matrix $A$ (which has rows $a_k^T$), and plot the lamp/patch geometry as shown below.

[Figure: the lamp/patch geometry.]
Equal lamp powers. Take $p_j = \gamma$ for $j = 1, \ldots, m$. Plot $f_0(p)$ versus $\gamma$ over the interval $[0,1]$. Graphically determine the optimal value of $\gamma$, and the associated objective value. You can evaluate the objective function $f_0(p)$ in Matlab as max(abs(log(A*p))).

Least-squares with saturation. Solve the least-squares problem

minimize $\sum_{k=1}^n (a_k^T p - 1)^2 = \|Ap - \mathbf{1}\|_2^2$.

If the solution has negative values for some $p_i$, set them to zero; if some values are greater than 1, set them to 1. Give the resulting value of $f_0(p)$. Least-squares solutions can be computed using the Matlab backslash operator: A\b returns the solution of the least-squares problem minimize $\|Ax - b\|_2^2$.

Regularized least-squares. Solve the regularized least-squares problem

minimize $\sum_{k=1}^n (a_k^T p - 1)^2 + \rho \sum_{j=1}^m (p_j - 0.5)^2 = \|Ap - \mathbf{1}\|_2^2 + \rho \|p - (1/2)\mathbf{1}\|_2^2$,

where $\rho > 0$ is a parameter. Increase $\rho$ until all coefficients of $p$ are in the interval $[0,1]$. Give the resulting value of $f_0(p)$. You can use the backslash operator in Matlab to solve the regularized least-squares problem.

Chebyshev approximation. Solve the problem

minimize   $\max_{k=1,\ldots,n} |a_k^T p - 1| = \|Ap - \mathbf{1}\|_\infty$
subject to $0 \le p_j \le 1, \quad j = 1, \ldots, m$.

We can think of this problem as obtained by approximating the nonlinear function $h(u)$ by the piecewise-linear function $1 + |u - 1|$. As shown in the figure below, this is a good approximation around $u = 1$.
[Figure: $h(u)$ and the piecewise-linear approximation $1 + |u - 1|$, versus $u$.]

You can solve the Chebyshev approximation problem using cvx. The (convex) function $\|Ap - \mathbf{1}\|_\infty$ can be expressed in cvx as norm(A*p - ones(n,1), inf). Give the resulting value of $f_0(p)$.

Exact solution. Finally, use cvx to solve

minimize   $\max_{k=1,\ldots,n} \max\{a_k^T p, \ 1/a_k^T p\}$
subject to $0 \le p_j \le 1, \quad j = 1, \ldots, m$

exactly. You may find the inv_pos() function useful. Give the resulting (optimal) value of $f_0(p)$.

Solution: The following Matlab script finds the approximate solutions using the heuristic methods proposed, as well as the exact solution.

% illum_sol: finds approximate and exact solutions of
% the illumination problem
clear all;

% load input data
illum_data;

% heuristic method 1: equal lamp powers
nopts = 1000;
p = logspace(-3, 0, nopts);
f = zeros(size(p));
for k = 1:nopts
    f(k) = max(abs(log(A*p(k)*ones(m,1))));
end;
[val_equal, imin] = min(f);
p_equal = p(imin)*ones(m,1);

% heuristic method 2: least-squares with saturation
p_ls_sat = A\ones(n,1);
p_ls_sat = max(p_ls_sat, 0);   % rounding negative p_i to 0
p_ls_sat = min(p_ls_sat, 1);   % rounding p_i > 1 to 1
val_ls_sat = max(abs(log(A*p_ls_sat)));

% heuristic method 3: regularized least-squares
rhos = linspace(1e-3, 1, nopts);
crit = [];
for j = 1:nopts
    p = [A; sqrt(rhos(j))*eye(m)] \ [ones(n,1); sqrt(rhos(j))*0.5*ones(m,1)];
    crit = [crit norm(p - 0.5, inf)];
end
idx = find(crit <= 0.5);
rho = rhos(idx(1));   % smallest rho s.t. p is in [0,1]
p_ls_reg = [A; sqrt(rho)*eye(m)] \ [ones(n,1); sqrt(rho)*0.5*ones(m,1)];
val_ls_reg = max(abs(log(A*p_ls_reg)));

% heuristic method 4: Chebyshev approximation
cvx_begin
    variable p_cheb(m)
    minimize( norm(A*p_cheb - 1, inf) )
    subject to
        p_cheb >= 0;
        p_cheb <= 1;
cvx_end
val_cheb = max(abs(log(A*p_cheb)));

% exact solution
cvx_begin
    variable p_exact(m)
    minimize( max([A*p_exact; inv_pos(A*p_exact)]) )
    subject to
        p_exact >= 0;
        p_exact <= 1;
cvx_end
val_exact = max(abs(log(A*p_exact)));

% results
[p_equal p_ls_sat p_ls_reg p_cheb p_exact]
[val_equal val_ls_sat val_ls_reg val_cheb val_exact]

The results are summarized in the following table, which reports the objective value $f_0(p)$ and the ten lamp powers $p_1, \ldots, p_{10}$ for each method.

[Table: $f_0(p)$ and $p_1, \ldots, p_{10}$ for method 1 (equal powers), method 2 (least-squares with saturation), method 3 (regularized least-squares), method 4 (Chebyshev approximation), and the exact solution.]