# A New Method for Quickly Solving Quadratic Assignment Problems


…is the distance from location i to j, and x_ij = 1 if facility i is assigned to location j, otherwise x_ij = 0. The total cost can be formulated as tr(DXFX^T) [1] and the problem is as follows:

$$
\begin{aligned}
\min\;& \operatorname{tr}(DXFX^T)\\
\text{s.t. }& \sum_{j=1}^{n} x_{ij}=1,\quad i=1,2,\dots,n\\
& \sum_{i=1}^{n} x_{ij}=1,\quad j=1,2,\dots,n\\
& x_{ij}=0\ \text{or}\ 1,\quad i,j=1,2,\dots,n,
\end{aligned}\tag{2.1}
$$

where tr(·) denotes the trace of a square matrix. We will solve it as a quadratic programming problem in which the constraint x_ij = 0 or 1, i, j = 1, 2, …, n, is replaced by x_ij ≥ l_ij, i, j = 1, 2, …, n, where l_ij = 0 or 1, and at the very beginning all the l_ij are set to zero. That is, we shall first solve the problem

$$
\min\ \tfrac12\,x^T H x\quad \text{s.t. }\ Ax=b,\tag{2.2}
$$

where

$$
H=\begin{pmatrix}
d_{11}(F+F^T) & d_{12}F+d_{21}F^T & \cdots & d_{1n}F+d_{n1}F^T\\
d_{21}F+d_{12}F^T & d_{22}(F+F^T) & \cdots & d_{2n}F+d_{n2}F^T\\
\vdots & \vdots & & \vdots\\
d_{n1}F+d_{1n}F^T & d_{n2}F+d_{2n}F^T & \cdots & d_{nn}(F+F^T)
\end{pmatrix}
$$

is an n² × n² Hessian matrix, x = (x_11, …, x_1n, x_21, …, x_2n, …, x_n1, …, x_nn)^T is an n²-dimensional column vector, A is the incidence matrix of the bipartite graph joining the n facility nodes to the n location nodes, and b is a column vector whose components are all 1, as determined by the first two equality constraints of (2.1). For brevity, it is supposed that a redundant row of (A b) is deleted; therefore A is a (2n − 1) × n² matrix.

We shall solve (2.2) by the pivoting algorithm presented in [2], which is for solving quadratic programming. This algorithm is divided into three stages: (i) construct an initial table, (ii) preprocessing and (iii) main iterations. The initial table for solving (2.2) is Table 2.1.

Table 2.1 Initial table

|   | h | a |   |
|---|---|---|---|
| e | H | A^T |   |
| a | A | O | b |

Let A = (A_1, A_2), where A_1 is a nonsingular matrix of dimension (2n − 1) × (2n − 1).
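To make the trace formulation concrete, here is a minimal sketch (illustrative function names, not the author's program) that builds the permutation matrix X for an assignment and checks that tr(DXFX^T) equals the explicit double sum over the assignment.

```python
import numpy as np

def perm_matrix(p):
    """Permutation matrix X with X[i, p[i]] = 1: facility i is assigned to location p[i]."""
    n = len(p)
    X = np.zeros((n, n))
    X[np.arange(n), p] = 1.0
    return X

def qap_cost(D, F, p):
    """Objective tr(D X F X^T) for the assignment given by permutation p."""
    X = perm_matrix(p)
    return float(np.trace(D @ X @ F @ X.T))

def qap_cost_direct(D, F, p):
    """The same objective written out as sum_{a,b} D[a,b] * F[p[b],p[a]]."""
    n = len(p)
    return float(sum(D[a, b] * F[p[b], p[a]] for a in range(n) for b in range(n)))
```

Both functions agree for every permutation, which is exactly the identity tr(DXFX^T) = Σ_{a,b} d_ab f_{p(b)p(a)}; for the identity assignment the objective reduces to tr(DF).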

Correspondingly, Table 2.1 is partitioned as Table 2.2.

Table 2.2 Initial table (the partitioned form)

|       | h_I  | h_II | a     |   |
|-------|------|------|-------|---|
| e_I   | H_11 | H_12 | A_1^T |   |
| e_II  | H_21 | H_22 | A_2^T |   |
| e_III | A_1  | A_2  | O     | b |

The preprocessing stage is to compute the inverses A_1^{-1} and A_1^{-T}, which yields Table 2.3, where the columns of the basic equality constraints and the corresponding rows are deleted.

Table 2.3 Result of the preprocessing

|      | h_II          |            |
|------|---------------|------------|
| e_II | H̄             | σ_II       |
| e_I  | A_1^{-1}A_2   | A_1^{-1}b  |

where H̄ = H_22 − H_21 A_1^{-1} A_2 − A_2^T A_1^{-T} (H_12 − H_11 A_1^{-1} A_2) is called the reduced Hessian matrix, A_1^{-1} A_2 is called the reduced incidence matrix, and σ_II = H_21 A_1^{-1} b − A_2^T A_1^{-T} H_11 A_1^{-1} b.

A_1 being nonsingular means that A_1 corresponds to a spanning tree [3] in the bipartite graph joining the n facility nodes to the n location nodes. Each branch of the tree is associated with a value of x_ij; these values solve the equality constraints Ax = b and are given by A_1^{-1} b here. In the author's computing programs, 2n initial feasible solutions are designed for experiments. Take n = 4 for example: they are called extended assignments, each one specifying which variables take the value 1. The first of them, for instance, sets x_11 = x_22 = x_33 = x_44 = 1 together with the tree branches x_21 = x_32 = x_43 = 0; the remaining seven are analogous.

In the main iterations stage, all the pivots are elements of A_1^{-1} A_2 and A_2^T A_1^{-T}. As soon as a pivoting operation is carried out on a nonzero element (1 or −1) of A_1^{-1} A_2, another pivoting operation is carried out on the symmetric element in A_2^T A_1^{-T}. This is called a double pivoting. Another form of the pivoting operation for quadratic programming is called principal pivoting, but it is never used here for quadratic assignment problems.

## 3 The basic assignment and local search

Now let us consider a detailed form of Table 2.3 as shown by Table 3.1.
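Assuming the reconstruction above (min ½x^T H x over Ax = b, with A_1 nonsingular), the preprocessing stage amounts to eliminating the basic variables x_I by a Schur-complement step. The sketch below is illustrative: the names and block formulas follow the reduced Hessian and deviation as stated here, not the exact routine of [2].

```python
import numpy as np

def reduce_equality_qp(H, A, b, basis):
    """Eliminate the basic variables x_I (indices `basis`, with A_1 nonsingular)
    from min 1/2 x^T H x s.t. A x = b, leaving the quadratic
    1/2 x_II^T Hbar x_II + sigma_II^T x_II + const in the remaining x_II."""
    n = A.shape[1]
    nonbasis = [j for j in range(n) if j not in basis]
    A1, A2 = A[:, basis], A[:, nonbasis]
    H11 = H[np.ix_(basis, basis)]
    H12 = H[np.ix_(basis, nonbasis)]
    H21 = H[np.ix_(nonbasis, basis)]
    H22 = H[np.ix_(nonbasis, nonbasis)]
    R = np.linalg.solve(A1, A2)      # reduced incidence matrix A_1^{-1} A_2
    x1 = np.linalg.solve(A1, b)      # particular solution A_1^{-1} b
    Hbar = H22 - H21 @ R - R.T @ (H12 - H11 @ R)   # reduced Hessian
    sigma = H21 @ x1 - R.T @ (H11 @ x1)            # reduced gradient at x_II = 0
    return Hbar, sigma, R, x1
```

Substituting x_I = A_1^{-1}b − (A_1^{-1}A_2) x_II into ½x^T H x reproduces the reduced quadratic up to the constant ½ x_1^T H_11 x_1, which is a direct way to check the block formulas.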

Table 3.1 A table in the main iterations stage

|         | e_1 … e_N             | h_{N+1} … h_{N+M}         |          |
|---------|-----------------------|---------------------------|----------|
| h_1     | w_11 … w_1N           | w_{1,N+1} … w_{1,N+M}     | σ_1      |
| ⋮       |                       |                           | ⋮        |
| h_N     | w_N1 … w_NN           | w_{N,N+1} … w_{N,N+M}     | σ_N      |
| e_{N+1} | w_{N+1,1} … w_{N+1,N} |                           | σ_{N+1}  |
| ⋮       |                       |                           | ⋮        |
| e_{N+M} | w_{N+M,1} … w_{N+M,N} |                           | σ_{N+M}  |

where N = n² − 2n + 1, M = 2n − 1; σ_{N+1}, …, σ_{N+M} are 0 or 1, i.e., the current solution is feasible; e_i is the coefficient vector of x_i and h_i is the coefficient vector of the complementary inequality of x_i. Here we use x_i (i = 1, 2, …, n²) rather than x_ij to represent a variable. In Table 3.1, each basic unit vector e_j corresponds to x_j.

Now let us change the right-hand side terms of k basic inequalities, say those of x_1, x_2, …, x_k, from 0 to 1. By the updating formula of [2], the last column becomes

$$
\Bigl(\sigma_1+\sum_{j=1}^{k}w_{1j},\ \dots,\ \sigma_N+\sum_{j=1}^{k}w_{Nj},\ \sigma_{N+1}+\sum_{j=1}^{k}w_{N+1,j},\ \dots,\ \sigma_{N+M}+\sum_{j=1}^{k}w_{N+M,j}\Bigr)^T.\tag{3.1}
$$

This operation is denoted by {e_1, e_2, …, e_k} → {e_1⁺, e_2⁺, …, e_k⁺} or {e_1, e_2, …, e_k} → {e_1, e_2, …, e_k}⁺. If the last M components of (3.1) are all 0 or 1, we have a new feasible solution and say it to be a k-order basic assignment. Obviously, k cannot be greater than n. And it is easy to prove that there are n! − 1 k-order basic assignments altogether for k = 1 to n. As a special case, the current feasible solution is called the 0-order basic assignment.

By the corresponding formula of [2], the change of the value of the objective function is

$$
\Delta f=\frac12\,(1,1,\dots,1)
\begin{pmatrix}
w_{11}&\cdots&w_{1k}\\
\vdots& &\vdots\\
w_{k1}&\cdots&w_{kk}
\end{pmatrix}
\begin{pmatrix}1\\\vdots\\1\end{pmatrix}
+(\sigma_1,\sigma_2,\dots,\sigma_k)\begin{pmatrix}1\\\vdots\\1\end{pmatrix},\tag{3.2}
$$

which is called the cost of the operation {e_1, e_2, …, e_k} → {e_1⁺, e_2⁺, …, e_k⁺}. In particular, if we increase the right-hand side term of just one basic inequality, say that of x_j, from 0 to 1, the cost is

$$
\Delta f=\tfrac12 w_{jj}+\sigma_j.\tag{3.3}
$$

The above operation is called a +1 operation and denoted by e_j → e_j⁺. Since the +1 operation involves a small amount of computation, it is the most common way to find a better solution. For this reason, we call it the +1 search as well. Another way to find a better solution which requires little computation is the +2 search {e_i, e_j} → {e_i⁺, e_j⁺}, whose cost is

$$
\Delta f=\tfrac12(w_{ii}+w_{jj}+2w_{ij})+\sigma_i+\sigma_j.\tag{3.4}
$$
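Given the table block W = (w_ij) and the deviation column σ, the search costs above are simple lookups. A small sketch follows; the ½ factor matches the ½x^T H x reconstruction used in this transcription and is an assumption, and the function names are illustrative.

```python
import numpy as np

def op_cost(W, sigma, S):
    """Cost of raising the right-hand sides of the basic inequalities in index
    set S from 0 to 1 (general form): 1/2 * 1^T W_S 1 + sum of sigma over S."""
    S = list(S)
    return 0.5 * W[np.ix_(S, S)].sum() + sigma[S].sum()

def plus1_cost(W, sigma, j):
    """+1 search special case: Δf = w_jj / 2 + σ_j."""
    return 0.5 * W[j, j] + sigma[j]

def plus2_cost(W, sigma, i, j):
    """+2 search special case: Δf = (w_ii + w_jj + 2 w_ij) / 2 + σ_i + σ_j."""
    return 0.5 * (W[i, i] + W[j, j] + 2 * W[i, j]) + sigma[i] + sigma[j]
```

The two special cases are exactly `op_cost` with a one-element and a two-element index set, which is a quick consistency check on the formulas.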
Now suppose that we have a table where k basic unit vectors e_j are associated with x_j = 1 and determine a feasible solution with value f of the objective function. We say the k

vectors to be catalyzed basic vectors, denoted by e_j⁺, and say the other basic unit vectors to be non-catalyzed basic vectors. When there are catalyzed basic vectors in a table, we can also change one of them to a non-catalyzed one, say e_j⁺ to e_j, and the cost is

$$
\Delta f=\tfrac12 w_{jj}-\sigma_j.\tag{3.5}
$$

If the change is feasible and the cost is negative, we get a solution whose value of the objective function is less than f. This operation is denoted by e_j⁺ → e_j and is called the −1 search. Similarly there are the −2 search and the −1&+1 search. The cost of {e_i⁺, e_j⁺} → {e_i, e_j} is

$$
\Delta f=\tfrac12(w_{ii}+w_{jj}+2w_{ij})-\sigma_i-\sigma_j\tag{3.6}
$$

and the cost of {e_i⁺, e_j} → {e_i, e_j⁺} is

$$
\Delta f=\tfrac12(w_{ii}+w_{jj}-2w_{ij})-\sigma_i+\sigma_j.\tag{3.7}
$$

We can also do −1&+2, −2&+1 or −2&+2 to search for a better solution. But these operations, especially the −2&+2, require too much computation and are seldom used in the author's computing programs. The feasible solutions obtained by the operations +1, +2, −1, −2, −1&+1, −1&+2, −2&+1 or −2&+2 are in the vicinity of the current feasible solution, i.e., the current basic assignment. Therefore the above operations may be called local search methods.

## 4 The pivoting operation

Suppose that we have a table with a feasible solution, where some basic unit vectors may be catalyzed and where a part of the entries are shown by Table 4.1.

Table 4.1 The current table

|     | h_s   | e_r   |     |
|-----|-------|-------|-----|
| e_s | w_ss  | w_rs* | σ_s |
| h_r | w_rs* |       | σ_r |

Table 4.2 Result of the double pivoting

|     | e_s           | h_r     |                               |
|-----|---------------|---------|-------------------------------|
| e_r | w_ss/w_rs²    | 1/w_rs  | (w_rs σ_s − w_ss σ_r)/w_rs    |
| h_s | 1/w_rs        |         | σ_r/w_rs                      |

In Table 4.1, w_rs = 1 or −1, σ_r = 0 or 1, and e_s denotes e_s if x_s = 0 or e_s⁺ if x_s = 1. Carrying out a double pivoting that exchanges e_r with e_s and h_s with h_r yields Table 4.2. By (6.5) of [2], the change of the value of the objective function is

$$
\Delta f=w_{ss}\sigma_r^{2}-2w_{rs}\sigma_r\sigma_s.\tag{4.1}
$$

Some special cases, called forward (backward) descent pivoting and par pivoting, are as follows.

(i) Forward descent pivoting. It means that (a) e_s = e_s, and there would be a feasible solution if we performed the +1 operation e_s → e_s⁺; (b) w_rs = −1, σ_r = 1 and w_ss + 2σ_s < 0.
After the pivoting there will be a feasible solution, and the change of the value of the objective function is w_ss + 2σ_s.

(ii) Backward descent pivoting.

It means that (a) e_s = e_s⁺, and there would be a feasible solution if we performed the −1 operation e_s⁺ → e_s; (b) w_rs = 1, σ_r = 1 and w_ss − 2σ_s < 0. After the pivoting there will be a feasible solution, and the change of the value of the objective function is w_ss − 2σ_s.

(iii) Par pivoting. It means that e_s = e_s, w_rs = 1 or −1, and σ_r = 0. The par pivoting is usually carried out when w_ss + 2σ_s < 0, even if it does not change the value of the objective function. To see the effect of the par pivoting, let us consider Table 4.3, where a nonbasic unit vector e_5 is associated with a deviation.

Table 4.3 A table for par pivoting

|     | h_1  | h_2  | h_3  | h_4  | e_5 |     |
|-----|------|------|------|------|-----|-----|
| e_1 | w_11 | w_12 | w_13 | w_14 | ±1* | σ_1 |
| e_2 | w_12 | w_22 | w_23 | w_24 |     | σ_2 |
| e_3 | w_13 | w_23 | w_33 | w_34 |     | σ_3 |
| e_4 | w_14 | w_24 | w_34 | w_44 | ±1  | σ_4 |
| h_5 | ±1*  |      |      |      |     |     |

Carrying out a pivoting on the element marked with an asterisk and then on its symmetric element, there will be Table 4.4, whose entries are the correspondingly transformed w's and σ's; for example, the diagonal entry of e_2 becomes w_11 + 2w_12 + w_22, the entry in row e_2 and column h_3 becomes w_13 + w_23, and the deviation of e_2 becomes σ_1 + σ_2.

Table 4.4 Result of the par pivoting

|     | h_5 | h_2               | h_3         | h_4         | e_1 |           |
|-----|-----|-------------------|-------------|-------------|-----|-----------|
| e_5 | …   | …                 | …           | …           | …   | σ_1       |
| e_2 | …   | w_11 + 2w_12 + w_22 | w_13 + w_23 | w_14 + w_24 | …   | σ_1 + σ_2 |
| e_3 | …   | w_13 + w_23       | …           | …           | …   | …         |
| e_4 | …   | w_14 + w_24       | …           | …           | …   | …         |
| h_1 | …   | …                 | …           | …           | …   | …         |

Suppose that performing a +2 operation {e_1, e_2} → {e_1⁺, e_2⁺} in Table 4.3, in which some basic unit vectors other than e_1 and e_2 may be catalyzed, results in a feasible solution. Then the change of the value of the objective function is Δf = ½(w_11 + w_22 + 2w_12) + σ_1 + σ_2. On the other hand, the same solution and the same change can be obtained by performing a +1 operation e_2 → e_2⁺ in Table 4.4. If ½(w_11 + 2w_12 + w_22) + σ_1 + σ_2 is negative enough, Δf may be negative and a better solution would be found in Table 4.4, which requires a smaller amount of computation than the +2 operation does in Table 4.3. The property of the double pivoting, and especially the par pivoting, of translating some high-order basic assignments into lower-order ones is called the contraction effect, and it is the key to generating a better solution. Of course, the double pivoting may change some low-order basic assignments into high-order ones at the same time, but that does not concern us.
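A double pivoting is two applications of a standard exchange (Jordan) pivot, one on w_rs and one on the symmetric element. The routine below is a generic exchange pivot, shown only to illustrate the mechanics; it is not claimed to be the exact routine of [2]. Pivoting twice on the same entry restores the table, which is a convenient sanity check.

```python
import numpy as np

def exchange_pivot(T, r, s):
    """Standard exchange (Jordan) pivot of table T on the nonzero entry T[r, s]:
    the basic label of row r and the nonbasic label of column s swap roles."""
    T = T.astype(float).copy()
    p = T[r, s]
    assert p != 0.0, "pivot element must be nonzero"
    m, n = T.shape
    for i in range(m):
        for j in range(n):
            if i != r and j != s:
                # update inner entries using the original pivot row and column
                T[i, j] -= T[i, s] * T[r, j] / p
    T[r, :] /= p        # new pivot row
    T[:, s] /= -p       # new pivot column (sign flips on exchange)
    T[r, s] = 1.0 / p   # new pivot entry
    return T
```

For example, pivoting `[[2, 1], [3, 4]]` at (0, 0) gives `[[0.5, 0.5], [-1.5, 2.5]]`, and pivoting any table twice at the same position returns the original table.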
The par pivoting is frequently performed in order to generate a low-order basic assignment with a small value of the objective function. It takes more than 90% of the amount of computation in

solving a QAP. To see the effect of the par pivoting in detail, let us consider the reduced incidence matrix and the associated deviations for n = 4, shown by Table 4.5, where e_ij is the coefficient vector of x_ij, i, j = 1, 2, 3, 4.

Table 4.5 A reduced incidence matrix for n = 4

|      | e_12 | e_13 | e_14 | e_23 | e_24 | e_31 | e_34 | e_41 | e_42 |
|------|------|------|------|------|------|------|------|------|------|
| e_11 |      |      |      |      |      |      |      |      |      |
| e_22 |      |      |      |      |      |      |      |      |      |
| e_33 |      |      |      |      |      |      |      |      |      |
| e_44 |      |      |      |      |      |      |      |      |      |
| e_21 |      |      |      |      |      |      |      |      |      |
| e_32 |      |      |      |      |      |      |      |      |      |
| e_43 |      |      |      |      |      |      |      |      | *    |

From Table 4.5 we see that the current solution is 1234, and the other 23 basic assignments are as follows: (i) six 1-order basic assignments; (ii) six 2-order basic assignments; (iii) eight 3-order basic assignments; (iv) three 4-order basic assignments.

Let e_21 enter and e_31 leave the basis; there will be Table 4.6.

Table 4.6 Result of the pivoting

|      | e_12 | e_13 | e_14 | e_23 | e_24 | e_21 | e_34 | e_41 | e_42 |
|------|------|------|------|------|------|------|------|------|------|
| e_11 |      |      |      |      |      |      |      |      |      |
| e_22 |      |      |      |      |      |      |      |      |      |
| e_33 |      |      |      |      |      |      |      |      |      |
| e_44 |      |      |      |      |      |      |      |      |      |
| e_31 |      |      |      |      |      |      |      |      |      |
| e_32 |      |      |      |      |      |      |      |      |      |
| e_43 |      |      |      |      |      |      |      |      |      |

From Table 4.6 we can get 23 basic assignments as follows: (i) five 1-order basic assignments; (ii) seven 2-order basic assignments;

(iii) nine 3-order basic assignments; (iv) two 4-order basic assignments.

Comparing the two tables, some assignments that are 2-order basic assignments in Table 4.5 become 1-order ones in Table 4.6; some 3-order ones become 2-order ones; and some 4-order ones become 3-order ones, due to e_31 leaving the basis.

Example 4.1 Consider a QAP with n = 4 and given 4 × 4 distance matrix D and flow matrix F. The table associated with the first extended assignment is Table 4.7.

Table 4.7 The initial table in the main iterations stage

Its rows are e_12, e_13, e_14, e_23, e_24, e_31, e_34, e_41, e_42 and h_11, h_22, h_33, h_44, h_21, h_32, h_43; its columns are the complementary h_12, h_13, h_14, h_23, h_24, h_31, h_34, h_41, h_42 and e_11, e_22, e_33, e_44, e_21, e_32, e_43; the last column is the deviation σ_i of the nonbasic vector. Here e_ij is the coefficient vector of x_ij and h_ij is the coefficient vector of the associated inequality, i, j = 1, 2, 3, 4. The value of the objective function of the current solution is 58.

From Table 4.7 we see that the cost of one of the +1 operations e → e⁺ is −12. Therefore the resulting 1-order basic assignment has a value of 58 − 12 = 46. Performing a forward descent

pivoting on the corresponding elements yields Table 4.8, where the current solution has the value f = 46.

Table 4.8 Result of the forward descent pivoting

Performing a +1 search in Table 4.8, there are four candidate operations e → e⁺, with costs that are all nonnegative (among them 6, 6 and 8); therefore we cannot perform a forward descent pivoting anymore. There is one further +1 operation whose cost, −4, is negative, but it results in an infeasible solution. Let us perform a par pivoting on the corresponding pair of elements to yield Table 4.9, where the current solution does not change.

Table 4.9 Result of the par pivoting

From Table 4.9 we see that there is a +1 operation whose cost is −2 < 0 and which results in a feasible solution with f = 46 − 2 = 44.

The above example is very small. Generally speaking, after a table is set up we shall do not only the +1 search but also the +2 search to find a better solution. In the case that there are catalyzed basic vectors in the table, we may do the −1, −2, −1&+1, −1&+2 and −2&+1 searches besides the +1 and +2 searches. Meanwhile, a descent pivoting or a par pivoting is carried out whenever the cost of the leaving unit vector is negative.

## 5 Results of experiments

Several computing programs were written in Delphi 6.0. They are conducted in this way: first run the program from the first extended assignment until the optimal solution is obtained or a certain number of operation processes have been carried out. If the optimal solution is not obtained, run the program from the second extended assignment, and so on. An entire operation process which does not find a better solution is as follows. For a given feasible solution,

(i) do the +1 search and the +2 search, combined with pivoting operations, while no basic unit vectors are catalyzed;

(ii) construct a 0-order basic assignment from it, and increase the number of catalyzed basic vectors by the −1&+2 operation, performing local searches and pivoting operations meanwhile;

(iii) update the table by transferring the last k-order basic assignment into a 0-order basic assignment, or construct another feasible solution from previous feasible solutions, then return to (i).

The −1&+2 operation of (ii) is imposed no matter whether its cost is negative or not, so as to increase the number of catalyzed basic vectors by 1 each time until a certain number, say n\2, is reached. Other operations are done only if their costs are negative. If a better solution is found midway through an operation process, the operation process ends and another one follows.
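The restart scheme described above can be sketched as a generic driver (an illustrative sketch, not the author's Delphi program DelQAPpvt); here `operation_process` stands for one entire operation process and returns an improved solution or None.

```python
def run_with_restarts(initial_solutions, cost, operation_process, max_idle):
    """Restart driver: from each initial (extended) assignment, repeat operation
    processes; a restart stops after `max_idle` consecutive processes fail to
    find a better solution. Returns the best solution and its cost."""
    best, best_cost = None, float("inf")
    for start in initial_solutions:
        current, current_cost = start, cost(start)
        idle = 0
        while idle < max_idle:
            improved = operation_process(current, current_cost)
            if improved is None:
                idle += 1          # this process found nothing better
            else:
                current, current_cost = improved
                idle = 0           # reset the counter after every better solution
        if current_cost < best_cost:
            best, best_cost = current, current_cost
    return best, best_cost
```

On a toy objective such as minimizing x² over the integers, with a process that tries the two unit steps, the driver walks each start down to the minimum and keeps the best over all restarts.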
The difficulty is deciding how many operation processes to allow when none of them finds a better solution. If the number of operation processes is small, a better solution may be missed; if it is large, much time may be spent without finding a better solution. Table 5.1 gives the results of the program DelQAPpvt run on an ordinary personal computer. Since the minimal values of the objective functions are known from the QAPLIB home page, the experiment is designed so that, as soon as the minimal value occurs, or a preset number of operation processes have been carried out without finding a better solution, the experiment ends for each initial extended assignment. The "Max No. of processes" is the maximal number of operation processes between two successive better solutions. Usually the value of the objective function of the initial extended assignment is big: many better solutions (with smaller and smaller objective values) occur within just one or two operation processes. As the value becomes very small, the number of operation processes increases rapidly. From Table 5.1 we see that most instances are solved to optimality in several seconds to several minutes.

Table 5.1 Computing results of DelQAPpvt

| Instance | Time (seconds) | Initial solution | Max No. of processes |
|----------|----------------|------------------|----------------------|
| Nug12 | .9 |  | 3 |
| Nug14 |  |  |  |
| Nug15 | .4 |  | 3 |
| Nug16a | .47 |  | 4 |
| Nug16b | .7 |  |  |
| Nug17 |  |  |  |
| Nug18 |  |  |  |
| Nug20 |  |  |  |
| Nug21 |  |  |  |
| Nug22 |  |  |  |
| Nug24 |  |  |  |
| Nug25 |  |  |  |
| Nug27 |  |  |  |
| Nug28 |  |  |  |
| Nug30 |  |  |  |
| Tai12a | .7 |  | 3 |
| Tai12b | .37 |  | 4 |
| Tai15a |  |  |  |
| Tai15b | .7 |  | 4 |
| Tai17a | .3 |  | 96 |
| Tai20a |  |  |  |
| Tai20b | .36 |  |  |
| Tai25a |  |  |  |
| Tai25b |  |  |  |
| Tai30a |  | 3rd |  |
| Tai30b |  |  |  |
| Tai35a |  | 2nd | 67 |
| Tai40b |  | 2nd | 4 |
| Had12 |  |  |  |
| Had14 |  | 1st | 43 8 |
| Had16 |  |  |  |
| Had18 |  |  |  |
| Had20 |  | 1st |  |
| Tho30 |  | 1st | 5 9 |
| Tho40 |  |  |  |
| Rou12 | .4 |  | 3 |
| Rou15 |  |  |  |
| Rou20 |  |  |  |
| Chr12a | .8 |  | 9 |
| Chr12b |  |  |  |
| Chr12c |  |  |  |
| Chr15a | 7.9 | 2nd | 39 |
| Chr15b | .9 |  | 69 |
| Chr15c |  |  |  |
| Chr18a |  |  |  |
| Chr18b | .69 |  | 4 |
| Chr20a |  | 3rd |  |
| Chr20b |  |  |  |
| Chr20c |  |  |  |
| Chr22a |  |  |  |
| Chr22b | 5.4 | 4th | 55 |
| Chr25a | .7 | 2nd | 7 |
| Bur26a |  |  |  |
| Bur26b |  |  |  |
| Bur26c |  |  |  |
| Bur26d |  | 2nd | 45 |
| Bur26e |  |  |  |
| Bur26f |  |  |  |
| Bur26g |  |  |  |
| Bur26h |  | 1st | 4 8 |
| Esc32a |  |  |  |
| Esc32b |  |  |  |
| Esc32c | .3 |  |  |
| Esc32d |  |  |  |
| Esc32e | .5 .56 |  |  |
| Esc32g |  |  |  |
| Esc32h |  |  |  |

Some new solutions that differ from those published on the QAPLIB home page are given in Table 5.2. Each of Esc32a–h has many optimal solutions; just one of them is listed.

Table 5.2 New solutions for some instances of QAP (columns: Value, Solution)

Instances: Nug12, Nug15, Nug16b, Nug18, Nug20, Nug21, Nug22, Nug24, Nug25, Nug27

Solutions: (5, 6,,, 4, 8,,,, 7, 9, 3) (9, 8, 3,,,, 7, 4, 3, 4,, 5, 6, 5, ) (, 5, 6, 5,, 4, 3, 4, 7,,,, 3,8, 9) (, 5,6, 5,,, 7, 4, 3, 4, 9, 8, 3,, ) (5,, 6, 4, 3, 7,, 5,, 9,, 4, 8, 3,, 6) (5, 3,, 8,, 7, 9, 3, 6,,,, 4, 5, 4, 6) (9, 3,, 6, 3, 4,,, 7, 5, 8, 5, 8,, 7, 6, 4, ) (6,, 7, 5, 7, 3, 8,, 5, 9, 6,,,, 4, 9, 3,, 4, 8) (9, 3,, 4, 8, 6,,,, 4, 3, 8,, 5, 9, 6,, 7, 5, 7) (7, 5, 7,, 6, 9, 5,, 8, 3, 4,,,, 6, 8, 4,, 3, 9) (5,, 3, 9, 3,, 4, 5, 6,, 6,, 8, 4, 7,,, 7, 8, 9, ) (7,,, 7, 8, 9,, 5, 6,, 6,, 8, 4, 5,, 3, 9, 3,, 4) (, 9, 8, 7,,, 7, 4, 8,, 6,, 6, 5, 4,, 3, 9, 3,, 5) (5, 3, 6,, 6,,, 8, 4, 4, 5,,, 9,, 7, 3,, 9, 8,, 7) (5, 4, 4, 8,,, 6,, 6, 3, 5, 7,, 8, 9,, 3, 7,, 9,, ) (7,, 8, 9,, 3, 7,, 9,,, 5, 4, 4, 8,,, 6,, 6, 3, 5) (, 5, 3, 6,, 4,, 6, 9, 7,,, 4, 3, 8,, 9, 5,, 4, 3,, 8, 7) (7, 4,, 9, 8,, 5,,, 8, 3, 4, 5, 6,, 7, 3,,, 3,, 9, 6, 4) (, 4, 3,, 8, 7, 4, 3, 8,, 9, 5,, 6, 9, 7,,,, 5, 3, 6,, 4) (4,, 6, 3, 5,,,, 7, 9, 6,, 5, 9,, 8, 3, 4, 7, 8,, 3, 4, ) (5,, 8, 4, 7,, 5, 6,,,, 8, 3, 4, 4, 5, 9, 6, 7, 3,,, 9,, 3) (3,, 9,,, 3, 7, 6, 9, 5, 4, 4, 3, 8,,,, 6, 5,, 7, 4, 8,, 5) (3, 3, 4,, 7,, 7, 4,, 4, 9, 6, 3, 6, 8,, 9, 8, 5,,, 5,,, 5) (7, 4, 8,, 5,,, 6, 5,, 4, 4, 3, 8,, 3, 7, 6, 9, 5, 3,, 9,, ) (,, 9,, 3, 5, 9, 6, 7, 3,, 8, 3, 4, 4,, 5, 6,,, 5,, 8, 4, 7) (5,, 5, 7,,, 3, 6,, 3, 4,,, 9, 8, 6, 4, 7, 4, 8, 9, 5, 6, 3,,, 7) (5, 8,, 9, 9, 7, 3,, 5, 5,, 8, 6,,,, 6, 3, 4,, 7, 3, 4, 7, 4, 6,) (3,, 5, 7,,,, 4,,, 5,,, 9, 8, 6, 6, 6, 3, 8, 9, 4, 7, 4, 5, 3, 7)

Table 5.2 (continued)

Instances: Nug28, Nug30, Chr18b, Chra, Chrb, Chr5a, Had14, Had, Bur26a, Bur26b

Solutions: (, 8,, 8,, 9, 8, 6, 6, 7, 9,, 5, 7, 4, 7, 4, 3, 5, 6,,, 5, 3, 4,,, 3) (, 4, 6,, 7, 3, 8, 4,, 9, 7,,,, 8, 3, 4, 9, 5,, 8,, 3, 5, 6, 6, 5, 7) (8, 5, 3, 7,, 6,, 5, 9,,, 7, 8, 8,, 6, 5,, 4, 7,, 9, 6, 4, 3, 3, 4, ) (5, 8, 7, 3, 4,, 3,,, 6, 3, 4, 7,, 8, 7, 9, 5, 6, 4,, 9, 9, 8, 5,, 6, 3,, ) (, 4, 3, 7, 8, 5, 4, 3, 6,,, 3, 5, 9, 7, 8,, 7, 8, 9, 9,, 4, 6,,, 3, 6,, 5) (,, 3, 6,, 5, 8, 9, 9,, 4, 6, 5, 9, 7, 8,, 7, 4, 3, 6,,, 3,, 4, 3, 7, 8, 5) (,, 3, 4, 6, 5, 9, 8,, 7, 5,, 8, 3, 7, 4, 6, ) (, 3,, 6, 4, 5, 7, 8,, 9, 3,, 6,, 7, 4, 8, 5) (6, 3, 9,,,, 5, 4, 8, 5, 7, 8, 4, 7, 3,, 6, ) (7, 4,,, 3,, 6, 3, 7, 6, 4, 9,,, 8, 5, 5, 8) (9,, 6,, 5,, 8, 3, 7, 6, 4, 7,, 4,, 5, 3, 8) (4, 5,,, 8, 9, 7, 6,, 5, 3, 4, 6,, 7,, 8, 3) (3,, 7, 8, 9,, 9, 4,,,, 6, 5, 8, 5,, 4, 6, 7, 3) (, 9, 3,,,, 6, 4, 7, 8, 7, 3,, 5,,, 9, 5,, 4, 8, 6) (5,, 5, 3, 8, 4, 6, 8,,, 4, 6, 3, 5, 4, 9, 3,,,, 7,,, 7, 9) ( 8, 3,,,, 5,, 4, 3, 6, 7,, 9, 4) (8, 5,, 6, 4, 9, 7,, 6,,, 7,,, 5, 3, 4, 9, 8, 3) (8, 5,, 4, 6, 9, 7,, 6,,, 7,,, 5, 3, 4, 9, 8, 3) (8, 5, 6, 6, 4, 9, 7,,,,, 5, 3,,, 7, 4, 9, 8, 3) (8, 5, 6, 6, 4, 9, 7,,,,, 7,,, 5, 3, 4, 9, 8, 3) (8, 5, 6, 4, 6, 9, 7,,,,, 5, 3,,, 7, 4, 9, 8, 3) (, 5, 6, 7, 4,, 3, 6,, 8, 9, 5,,, 8, 4, 3, 9,,, 7, 5, 6, 4,, 3) (5,, 6, 7, 4,, 3, 6,, 8, 5, 9,,, 8, 4, 3, 9,, 7,, 5, 4, 6, 3, ) (6,, 5, 7, 4,, 3, 6,, 8,, 9, 5,, 8, 4, 3, 9,, 5,, 7, 6, 4, 3, ) (, 5, 5, 7, 4, 6, 4,, 3, 8, 9, 5,,, 8,, 3, 9,,, 7, 6, 6, 4, 3, ) (7,, 5, 7, 4, 3,,, 3, 8, 5, 9,,, 8,, 3, 9,, 5,, 6, 6, 4, 4, 6) (5, 5,, 7, 4,, 3,, 3, 8,, 5, 9,, 8,, 3, 9,, 6, 7,, 4, 6, 4, 6)

Table 5.2 (continued)

Instances: Bur26c, Bur26d, Bur26e, Bur26f, Bur26g, Bur26h, Esc32a, Esc32b, Esc32c

Solutions: (6, 7,, 7, 4,, 3,, 3, 8,, 5, 9,, 8,, 3, 9,, 5, 5,, 4, 6, 6, 4) (,, 3, 3, 6,, 5,, 5,9, 8, 8, 9,, 4,,, 4, 5, 6, 4,, 3, 7, 7, 6) (, 3,, 3, 6, 5,, 5,, 9, 8, 9, 8,, 4,,, 5, 4, 4,, 6, 3, 7, 7, 6) (3, 3,,, 6, 8, 6,, 5, 9, 9,, 8,, 3, 5, 4,, 5, 6,, 4, 7, 4,, 7) (6, 4, 3,, 6,, 7, 5,, 9, 9, 8,,, 3, 5, 4, 5,, 3,,, 4, 7, 8, 6) (,, 6,, 6,, 7, 5,, 9, 8,, 9,, 3, 5, 4, 5,, 4, 3, 3, 7, 4, 6, 8) (4, 3,,, 6,, 7,, 5, 9, 8, 9,,, 3, 5, 4,, 5,, 6, 3, 7, 4, 6, 8) (4, 4, 3, 7, 6, 6, 5,, 7, 5,,, 8, 9, 3, 8,, 5, 9, 6, 4,,,, 3, ) (4, 4, 3, 7, 6, 6, 5, 7,, 5,,, 8, 9, 3, 8,, 9, 5, 6, 4,,,,, 3) (3, 4, 4, 7, 6, 6, 5, 7,, 5, 8,,, 9, 3, 8,, 9, 5,, 4, 6,,,, 3) (3, 3, 6, 7, 6, 6, 3,,, 5, 8, 9,,, 4, 5,, 5, 9, 4, 7,,, 4,, 8) (4,, 7, 7, 6,, 8,,, 5, 8,, 9,, 4, 5,, 5, 9, 3, 6, 3,, 4, 6, 3) (4, 6, 7, 7, 6, 6, 3,,, 5, 8,, 9,, 4, 5,, 5, 9, 3,, 3,, 4, 8, ) (,,, 3, 3, 5, 4, 8,,, 4, 7,, 8,, 5, 9, 9, 5, 6, 6, 6, 4, 3,, 7) (3,,,, 6,, 5,, 8,, 7,, 4, 8, 4, 5, 9, 5, 9, 3,, 6, 3, 6, 4, 7) (3,,,, 6,, 5,, 8,, 7, 4,, 8, 4, 5, 9, 5, 9, 3, 6,, 6, 3, 4, 7) (, 6, 3,, 6, 5,, 8,,,, 4, 7, 8, 4, 5, 9, 9, 5,,, 3, 3, 6, 4, 7) (7, 9,, 8, 7, 5, 8, 6,, 3, 9,,, 3, 4, 7, 3,,, 3, 6, 9, 8, 5, 3, 5, 4, 6,, 4,, 3) (5, 7, 6, 8, 5, 6,, 3, 7, 3, 9, 3,, 4, 8, 3,, 4,,, 7,, 8,, 5,4, 3, 9, 6,, 3, 9) (7, 5, 9, 8, 4, 6, 5, 8,,, 9,,, 3, 4, 5, 6,, 3, 7,,,, 3, 4, 6, 7, 8, 9, 3, 3, 3)

Table 5.2 (continued)

Instances: Esc32d, Esc32e, Esc32g, Esc32h, Tai30a, Tai35a, Tho30, Tho40

Solutions: (,, 3, 8, 9, 7, 6, 5, 8,, 4,, 6,, 5, 6, 3, 4, 9,,,, 7, 3, 4, 5, 7, 8, 9, 3, 3, 3) (8,, 3, 4, 5, 6,, 7,,,,, 9, 3, 4, 6, 5, 7, 8, 9,,, 3, 4, 5, 6, 7, 8, 9, 3, 3, 3) (5,,, 3, 4, 6, 4, 7, 8, 9,,,, 3, 5, 6, 7, 8, 9,,,, 3, 4, 5, 6, 7, 8, 9, 3, 3, 3) (, 9, 9,,, 4, 3, 5, 9, 7, 7,,, 6, 5, 3, 4, 3,, 8, 8, 3, 3, 6, 7,, 3, 5, 4, 8,, 6) (9, 8, 4, 4, 3, 5, 5, 7,,, 8,,, 3, 9, 6, 8,, 7,,, 9,, 5, 3, 4, 6, 7, 3, 6) (9, 9, 8,, 7, 33, 3, 6, 5,, 3, 6, 4, 7,, 5, 3, 3, 9,, 6, 5,, 3, 34,, 8, 4,,, 4, 8, 3, 35, 7) (, 7, 4, 6, 3,,, 5, 8,, 3, 5, 4, 4, 7, 8, 6, 3, 6,, 9, 9,, 7,, 8, 3, 5,, 9) (3, 35, 7, 6, 3, 5, 37, 38, 5, 9, 8,, 3,, 4, 9, 36,, 6, 3, 6, 33, 8, 8, 7,,,,, 5,, 4, 39, 3, 34, 7, 4, 3, 9,, 4)

Tai40a is difficult to solve: only a solution whose value is greater than the best value obtained by the Ro-TS method was found. The contraction effect of pivoting operations reminds us of the formation of a substance under certain conditions or controls: the formation of a good solution of a QAP likewise depends on the appropriate control of the pivoting operations. It is hoped that the deeper mechanism of the QAP can be discovered and more efficient computing methods developed.

## References

[1] G. Finke, R. E. Burkard, F. Rendl. Quadratic assignment problems. Annals of Discrete Mathematics, 1987(31): 61-82.

[2] Zhongzhen Zhang. An efficient method for solving the local minimum of indefinite quadratic programming. On the QUADRATIC PROGRAMMING PAGE, N. I. M. Gould and Ph. L. Toint.

[3] Zhongzhen Zhang. Convex Programming: Pivoting Algorithms for Portfolio Selection and Network Optimization. Wuhan University Press, 2004 (in Chinese).


Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011

### Solution to Homework 2

Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

### 5.1 Bipartite Matching

CS787: Advanced Algorithms Lecture 5: Applications of Network Flow In the last lecture, we looked at the problem of finding the maximum flow in a graph, and how it can be efficiently solved using the Ford-Fulkerson

### A Lagrangian-DNN Relaxation: a Fast Method for Computing Tight Lower Bounds for a Class of Quadratic Optimization Problems

A Lagrangian-DNN Relaxation: a Fast Method for Computing Tight Lower Bounds for a Class of Quadratic Optimization Problems Sunyoung Kim, Masakazu Kojima and Kim-Chuan Toh October 2013 Abstract. We propose

### IE 680 Special Topics in Production Systems: Networks, Routing and Logistics*

IE 680 Special Topics in Production Systems: Networks, Routing and Logistics* Rakesh Nagi Department of Industrial Engineering University at Buffalo (SUNY) *Lecture notes from Network Flows by Ahuja, Magnanti

### Linear Programming I

Linear Programming I November 30, 2003 1 Introduction In the VCR/guns/nuclear bombs/napkins/star wars/professors/butter/mice problem, the benevolent dictator, Bigus Piguinus, of south Antarctica penguins

Quadratic Functions, Optimization, and Quadratic Forms Robert M. Freund February, 2004 2004 Massachusetts Institute of echnology. 1 2 1 Quadratic Optimization A quadratic optimization problem is an optimization

### Solving Systems of Linear Equations

LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

### 1. LINEAR EQUATIONS. A linear equation in n unknowns x 1, x 2,, x n is an equation of the form

1. LINEAR EQUATIONS A linear equation in n unknowns x 1, x 2,, x n is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b, where a 1, a 2,..., a n, b are given real numbers. For example, with x and

### 1.2 Solving a System of Linear Equations

1.. SOLVING A SYSTEM OF LINEAR EQUATIONS 1. Solving a System of Linear Equations 1..1 Simple Systems - Basic De nitions As noticed above, the general form of a linear system of m equations in n variables

### Duality in Linear Programming

Duality in Linear Programming 4 In the preceding chapter on sensitivity analysis, we saw that the shadow-price interpretation of the optimal simplex multipliers is a very useful concept. First, these shadow

### Operation Research. Module 1. Module 2. Unit 1. Unit 2. Unit 3. Unit 1

Operation Research Module 1 Unit 1 1.1 Origin of Operations Research 1.2 Concept and Definition of OR 1.3 Characteristics of OR 1.4 Applications of OR 1.5 Phases of OR Unit 2 2.1 Introduction to Linear

### 4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns

L. Vandenberghe EE133A (Spring 2016) 4. Matrix inverses left and right inverse linear independence nonsingular matrices matrices with linearly independent columns matrices with linearly independent rows

### LINEAR SYSTEMS. Consider the following example of a linear system:

LINEAR SYSTEMS Consider the following example of a linear system: Its unique solution is x +2x 2 +3x 3 = 5 x + x 3 = 3 3x + x 2 +3x 3 = 3 x =, x 2 =0, x 3 = 2 In general we want to solve n equations in

### Special Situations in the Simplex Algorithm

Special Situations in the Simplex Algorithm Degeneracy Consider the linear program: Maximize 2x 1 +x 2 Subject to: 4x 1 +3x 2 12 (1) 4x 1 +x 2 8 (2) 4x 1 +2x 2 8 (3) x 1, x 2 0. We will first apply the

### LABEL PROPAGATION ON GRAPHS. SEMI-SUPERVISED LEARNING. ----Changsheng Liu 10-30-2014

LABEL PROPAGATION ON GRAPHS. SEMI-SUPERVISED LEARNING ----Changsheng Liu 10-30-2014 Agenda Semi Supervised Learning Topics in Semi Supervised Learning Label Propagation Local and global consistency Graph

### Torgerson s Classical MDS derivation: 1: Determining Coordinates from Euclidean Distances

Torgerson s Classical MDS derivation: 1: Determining Coordinates from Euclidean Distances It is possible to construct a matrix X of Cartesian coordinates of points in Euclidean space when we know the Euclidean

### A Direct Numerical Method for Observability Analysis

IEEE TRANSACTIONS ON POWER SYSTEMS, VOL 15, NO 2, MAY 2000 625 A Direct Numerical Method for Observability Analysis Bei Gou and Ali Abur, Senior Member, IEEE Abstract This paper presents an algebraic method

### Lecture 5 Principal Minors and the Hessian

Lecture 5 Principal Minors and the Hessian Eivind Eriksen BI Norwegian School of Management Department of Economics October 01, 2010 Eivind Eriksen (BI Dept of Economics) Lecture 5 Principal Minors and

### LECTURE: INTRO TO LINEAR PROGRAMMING AND THE SIMPLEX METHOD, KEVIN ROSS MARCH 31, 2005

LECTURE: INTRO TO LINEAR PROGRAMMING AND THE SIMPLEX METHOD, KEVIN ROSS MARCH 31, 2005 DAVID L. BERNICK dbernick@soe.ucsc.edu 1. Overview Typical Linear Programming problems Standard form and converting

### Mathematical finance and linear programming (optimization)

Mathematical finance and linear programming (optimization) Geir Dahl September 15, 2009 1 Introduction The purpose of this short note is to explain how linear programming (LP) (=linear optimization) may

### Solution of Linear Systems

Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start

### Solving Linear Systems, Continued and The Inverse of a Matrix

, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

### Chapter 7. Matrices. Definition. An m n matrix is an array of numbers set out in m rows and n columns. Examples. ( 1 1 5 2 0 6

Chapter 7 Matrices Definition An m n matrix is an array of numbers set out in m rows and n columns Examples (i ( 1 1 5 2 0 6 has 2 rows and 3 columns and so it is a 2 3 matrix (ii 1 0 7 1 2 3 3 1 is a

### Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS

Sensitivity Analysis 3 We have already been introduced to sensitivity analysis in Chapter via the geometry of a simple example. We saw that the values of the decision variables and those of the slack and

### Solving Systems of Linear Equations. Substitution

Solving Systems of Linear Equations There are two basic methods we will use to solve systems of linear equations: Substitution Elimination We will describe each for a system of two equations in two unknowns,

### 24. The Branch and Bound Method

24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no

### Lecture 4: Partitioned Matrices and Determinants

Lecture 4: Partitioned Matrices and Determinants 1 Elementary row operations Recall the elementary operations on the rows of a matrix, equivalent to premultiplying by an elementary matrix E: (1) multiplying

### Row Echelon Form and Reduced Row Echelon Form

These notes closely follow the presentation of the material given in David C Lay s textbook Linear Algebra and its Applications (3rd edition) These notes are intended primarily for in-class presentation

### Cofactor Expansion: Cramer s Rule

Cofactor Expansion: Cramer s Rule MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Introduction Today we will focus on developing: an efficient method for calculating

### Lecture 6. Inverse of Matrix

Lecture 6 Inverse of Matrix Recall that any linear system can be written as a matrix equation In one dimension case, ie, A is 1 1, then can be easily solved as A x b Ax b x b A 1 A b A 1 b provided that

### DETERMINANTS. b 2. x 2

DETERMINANTS 1 Systems of two equations in two unknowns A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1 a 21 x 1 + a 22 x 2 = b 2 This can be written more concisely in

### LINEAR ALGEBRA. September 23, 2010

LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................

### 7 Gaussian Elimination and LU Factorization

7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method

### Lecture Notes: Matrix Inverse. 1 Inverse Definition

Lecture Notes: Matrix Inverse Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Inverse Definition We use I to represent identity matrices,

### Optimization in R n Introduction

Optimization in R n Introduction Rudi Pendavingh Eindhoven Technical University Optimization in R n, lecture Rudi Pendavingh (TUE) Optimization in R n Introduction ORN / 4 Some optimization problems designing

### Optimization Modeling for Mining Engineers

Optimization Modeling for Mining Engineers Alexandra M. Newman Division of Economics and Business Slide 1 Colorado School of Mines Seminar Outline Linear Programming Integer Linear Programming Slide 2

### 2.3 Scheduling jobs on identical parallel machines

2.3 Scheduling jobs on identical parallel machines There are jobs to be processed, and there are identical machines (running in parallel) to which each job may be assigned Each job = 1,,, must be processed

### Discuss the size of the instance for the minimum spanning tree problem.

3.1 Algorithm complexity The algorithms A, B are given. The former has complexity O(n 2 ), the latter O(2 n ), where n is the size of the instance. Let n A 0 be the size of the largest instance that can

### Notes for STA 437/1005 Methods for Multivariate Data

Notes for STA 437/1005 Methods for Multivariate Data Radford M. Neal, 26 November 2010 Random Vectors Notation: Let X be a random vector with p elements, so that X = [X 1,..., X p ], where denotes transpose.

### Math 312 Homework 1 Solutions

Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please

### FUZZY CLUSTERING ANALYSIS OF DATA MINING: APPLICATION TO AN ACCIDENT MINING SYSTEM

International Journal of Innovative Computing, Information and Control ICIC International c 0 ISSN 34-48 Volume 8, Number 8, August 0 pp. 4 FUZZY CLUSTERING ANALYSIS OF DATA MINING: APPLICATION TO AN ACCIDENT

### The Characteristic Polynomial

Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem

### 2.5 Elementary Row Operations and the Determinant

2.5 Elementary Row Operations and the Determinant Recall: Let A be a 2 2 matrtix : A = a b. The determinant of A, denoted by det(a) c d or A, is the number ad bc. So for example if A = 2 4, det(a) = 2(5)

### Orthogonal Projections

Orthogonal Projections and Reflections (with exercises) by D. Klain Version.. Corrections and comments are welcome! Orthogonal Projections Let X,..., X k be a family of linearly independent (column) vectors

### NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

### Reduced echelon form: Add the following conditions to conditions 1, 2, and 3 above:

Section 1.2: Row Reduction and Echelon Forms Echelon form (or row echelon form): 1. All nonzero rows are above any rows of all zeros. 2. Each leading entry (i.e. left most nonzero entry) of a row is in

### In this paper we present a branch-and-cut algorithm for

SOLVING A TRUCK DISPATCHING SCHEDULING PROBLEM USING BRANCH-AND-CUT ROBERT E. BIXBY Rice University, Houston, Texas EVA K. LEE Georgia Institute of Technology, Atlanta, Georgia (Received September 1994;

### Solving polynomial least squares problems via semidefinite programming relaxations

Solving polynomial least squares problems via semidefinite programming relaxations Sunyoung Kim and Masakazu Kojima August 2007, revised in November, 2007 Abstract. A polynomial optimization problem whose

### Diagonal, Symmetric and Triangular Matrices

Contents 1 Diagonal, Symmetric Triangular Matrices 2 Diagonal Matrices 2.1 Products, Powers Inverses of Diagonal Matrices 2.1.1 Theorem (Powers of Matrices) 2.2 Multiplying Matrices on the Left Right by

### 1 Determinants and the Solvability of Linear Systems

1 Determinants and the Solvability of Linear Systems In the last section we learned how to use Gaussian elimination to solve linear systems of n equations in n unknowns The section completely side-stepped

### Linear Programming. Widget Factory Example. Linear Programming: Standard Form. Widget Factory Example: Continued.

Linear Programming Widget Factory Example Learning Goals. Introduce Linear Programming Problems. Widget Example, Graphical Solution. Basic Theory:, Vertices, Existence of Solutions. Equivalent formulations.

### Solving Linear Diophantine Matrix Equations Using the Smith Normal Form (More or Less)

Solving Linear Diophantine Matrix Equations Using the Smith Normal Form (More or Less) Raymond N. Greenwell 1 and Stanley Kertzner 2 1 Department of Mathematics, Hofstra University, Hempstead, NY 11549

### 7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix

7. LU factorization EE103 (Fall 2011-12) factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization algorithm effect of rounding error sparse

### Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh

Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Peter Richtárik Week 3 Randomized Coordinate Descent With Arbitrary Sampling January 27, 2016 1 / 30 The Problem

### The Inverse of a Matrix

The Inverse of a Matrix 7.4 Introduction In number arithmetic every number a ( 0) has a reciprocal b written as a or such that a ba = ab =. Some, but not all, square matrices have inverses. If a square

### 7.4 Linear Programming: The Simplex Method

7.4 Linear Programming: The Simplex Method For linear programming problems with more than two variables, the graphical method is usually impossible, so the simplex method is used. Because the simplex method

### Transportation Polytopes: a Twenty year Update

Transportation Polytopes: a Twenty year Update Jesús Antonio De Loera University of California, Davis Based on various papers joint with R. Hemmecke, E.Kim, F. Liu, U. Rothblum, F. Santos, S. Onn, R. Yoshida,