A Low-Order Knowledge-Based Algorithm (LOKBA) to Solve Binary Integer Programming Problems
S. H. Pakzad-Moghadam, MSc student, University of Tehran, Industrial Engineering Department
M. S. Shahmohammadi, BSc student, University of Tehran, Industrial Engineering Department
R. Ghodsi, Assistant Professor, University of Tehran, Industrial Engineering Department
Abstract
In this paper a novel and very fast low-order knowledge-based algorithm (called LOKBA) is presented to solve binary integer programming (BIP) problems. The factors in the objective function and the constraints of any BIP problem contain specific knowledge and information that can be used to better search the solution space of the problem. In this work many new definitions are introduced to elaborate on this information and extract the knowledge necessary for creating and improving solutions. Found solutions are improved repeatedly until there is no new knowledge that can be extracted and no further improvement is possible. The proposed algorithm has several major strengths. It produces promising results within remarkably short run times, even for very large problems such as problems with one million variables. Also, it is extendable to integer programming and binary quadratic programming problems. Furthermore, it can be combined with heuristic or metaheuristic algorithms.
Keywords: Binary Integer Programming, Combinatorial Optimization, Rule-based Algorithm, Mathematical Programming, Mixed Integer Programming
1. Introduction
Binary integer programming (BIP) problems are a subset of Mixed Integer Programming (MIP) problems. Since there exist many practical applications, such as distribution networks, scheduling in the airline and other transportation industries, capital budgeting, telecommunications, and sequencing and selection decisions, much research focuses on BIP
specifically as a major subset of MIP. The general BIP problem can be formulated as:
Max cx
Subject to Ax <= b
x ∈ {0, 1}^n
where A is an m by n matrix, c is an n-dimensional row vector, b is an m-dimensional column vector, and x is an n-dimensional column vector of variables or unknowns. All integer programming problems are known to be NP-hard and difficult to solve and have long been an important research area (Wang and Xing, 2009) (Wolsey, 1998). Kellerer et al. (2004) have provided a full-scale presentation of all methods available for the solution of the Knapsack problem as a BIP problem in their book (Kellerer et al., 2004). Because large NP-hard problems are not amenable to solution by exact methods, heuristic algorithms are generally used. Integer programming problems have been solved by different exact and heuristic methods. Exact methods include branch and bound, dynamic programming, Lagrangian relaxation based methods, and integer programming based methods such as branch and cut, branch and price, and branch and cut and price (Nemhauser and Wolsey, 1988). Heuristics include simulated annealing (Kirkpatrick et al., 1983), tabu search (Glover and Laguna, 1997), population-based models such as evolutionary algorithms (Baeck et al., 1997), scatter search (Glover et al., 2000), the memetic algorithm (Moscato and Cotta, 2003), various estimation of distribution algorithms (Larranaga and Lozano, 2001), etc. There are also many attempts that combine exact and heuristic methods (Domitrescu and Stuetzle, 2003). Some recent improvements in heuristics are Local Branching (LB) (Fischetti and Lodi, 2003), Variable Neighborhood Search and Local Branching (VNSB) (Hansen et al., 2006), Variable Neighborhood Decomposition Search for 0-1 Mixed Integer Programs (VNDS) (Lazic et al., 2009), Relaxation Induced Neighborhood Search (RINS) (Danna et al., 2005), and Distance Induced Neighborhood Search (DINS) (Ghosh, 2007). 
Namazifar and Miller (2008) presented a framework called Parallel Macro Partitioning (PMaP) for solving MIP problems in parallel (Namazifar and Miller, 2008). Chu and Beasley (1998) have developed a genetic algorithm for the multidimensional knapsack problem (Chu and Beasley, 1998). Captivo et al. (2003) have used a labeling algorithm to solve bi-criteria zero-one Knapsack problems (Captivo et al., 2003). General BIP problems include both negative and non-negative coefficients. A great deal of research exists on Knapsack problems or problems with non-negative coefficients. Fortin and Tseveendorj (2009) presented a branch and bound procedure based on the empirical distribution of each variable under the linear relaxation model (Fortin and Tseveendorj, 2009). Vasquez and Hao (2001) developed a hybrid approach for the zero-one multidimensional Knapsack problem and stated that the most efficient algorithms rely on linear relaxation bounds coupled with extra constraints (Vasquez and Hao, 2001). Haartman et al. (2008) find approximate solutions to BIP problems using continualization techniques (Haartman et al., 2008). Their algorithm constructs a sequence of approximations to a solution using a meta-control approach that calculates the least squares of the errors.
In the work at hand, both negative and non-negative coefficients are considered and none of the constraints is relaxed. The proposed algorithm considers the objective and the constraints simultaneously and finds the solution very quickly after some iterations. In each iteration the variables and constraints of the problem are inspected to draw knowledge, and some of them are distinguished as firm or strong variables and obsolete constraints. The novel part of the algorithm is the drawing of knowledge from the variable and constraint factors, which is called standardization in this work. Based on that knowledge it then classifies the constraints and variables, as mentioned, in an innovative manner. Next the constraints and variables are weighted using a specific weighting scheme. Using these weights, the values of a few of the variables are specified and a smaller problem is then formed. Then the variables and constraints are re-inspected and new weights are calculated. In each of these steps, a few more of the variables are specified until all variables are known and the algorithm terminates. The algorithm is explained in detail in the next section. The validation of the algorithm is done by solving many different sample problems. The results were very promising, and the optimum or good solutions were found in very short run times even for large problems. For example, in a problem with 10,000 variables the optimum solution was found in less than 4 seconds on a normal PC. The novel algorithm presented here, called the Low Order Knowledge-Based Algorithm (LOKBA), is capable of solving small to large problems that might be impossible or too complicated to solve by other methods. Also, it should be noted that many previous works such as DINS, RINS, and LB are reported for problems of smaller size than the sizes tested in this work. Hence, there is potential for future research to combine such approaches with the algorithm presented here. 
A very good initial solution, even for problems of one million variables, can be produced within a very short time with the proposed algorithm and then be improved further by methods such as DINS. Also, combining exact methods with the algorithm appears to be a possible technique to reduce the problem complexity. For example, the branch-and-bound lower bound could be improved substantially if the value of the final solution of this algorithm is used as the lower bound.
2. LOKBA Algorithm
Prior to presenting the steps of the proposed algorithm, it is necessary to first define some terms and concepts specific to this algorithm. These terms are explained in section 2.1 and the algorithm steps are then given in section 2.2.
2.1. Definitions
Complement of a variable
The complement of a binary variable x_j is the variable y_j = 1 - x_j, such that x_j + y_j = 1.
Complement-constraints (1)
Consider the following BIP sample problem:
Min z = x1 - 2x2 - 9x3 + 0x4 + 3x5 + 0x6 + 8x7 - 5x8
Subject to:
-x1 - x2 - x3 + x4 - x5 - x6 - x7 - x8 <= -2
-3x1 + 1x2 - 2x3 + 1x4 + 6x5 + 0x6 + 5x7 - x8 >= 9
2x1 + x2 + 2x3 + 0x4 + 9x5 - 3x6 - 3x7 + x8 >= 7
For each constraint, a new constraint can be formed using the complement variables. These constraints will be called complement-constraints from this point forward. For example, the second constraint of the sample problem can be transformed into its complement-constraint as follows. Since x_j + y_j = 1 for every j, summing the factors gives the identity:
-3(x1+y1) + 1(x2+y2) - 2(x3+y3) + 1(x4+y4) + 6(x5+y5) + 0(x6+y6) + 5(x7+y7) - 1(x8+y8) = 7
Subtracting the original constraint (whose left-hand side is at least 9) from this identity, the complement-constraint is:
-3y1 + 1y2 - 2y3 + 1y4 + 6y5 + 0y6 + 5y7 - y8 <= -2
where y_j = 1 - x_j.
Superior versus inferior constraints
As shown above, in each pair of a constraint and its complement-constraint, one has a greater-equal sign and the other a smaller-equal sign. From this point forward, the constraint with the greater-equal sign is called the superior constraint and represented with the symbol C. Likewise, the smaller-equal constraint is called the inferior constraint and represented with the symbol Ø.
Standard problem
In this work, a problem is called a standard problem when:
1. The objective is maximization.
2. The signs of all constraints and complement-constraints are either greater-equal or smaller-equal.
3. All factors in the objective function and the constraints are positive integers.
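The complement-constraint transformation can be sketched in a few lines of Python. The list-based constraint representation (coefficients, sense, RHS) is an assumption of this sketch, not notation from the paper:

```python
def complement_constraint(coeffs, sense, rhs):
    """Form the complement-constraint by substituting x_j = 1 - y_j.

    Since sum(a_j * x_j) = sum(a_j) - sum(a_j * y_j), the constraint
    sum(a_j * x_j) >= b becomes sum(a_j * y_j) <= sum(a_j) - b
    (and symmetrically for a smaller-equal constraint).
    """
    total = sum(coeffs)
    flipped = "<=" if sense == ">=" else ">="
    return coeffs, flipped, total - rhs

# Second constraint of the sample problem:
# -3x1 + x2 - 2x3 + x4 + 6x5 + 0x6 + 5x7 - x8 >= 9
a = [-3, 1, -2, 1, 6, 0, 5, -1]
print(complement_constraint(a, ">=", 9))
# complement-constraint: -3y1 + y2 - 2y3 + y4 + 6y5 + 5y7 - y8 <= -2
```

The RHS of -2 matches the complement-constraint derived above.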
All BIP problems can be transformed into a standard problem. The objective function of a minimization problem is multiplied by -1 to form a maximization problem. For a constraint with a strict greater sign, a value of 1 is added to the right-hand side (RHS) and the sign is changed to greater-equal. Likewise, for a constraint with a strict smaller sign, a value of 1 is subtracted from the RHS and the sign is changed to smaller-equal. Wherever a non-integer factor exists for a variable in the objective function or in a constraint, that objective function or constraint can be multiplied by an appropriate number to satisfy the integer requirement. Also, for variables with a negative factor, whether in a constraint or in the objective function, the complement of that variable is used. This is shown in the following standardization example. In the first step, a maximization problem is formed:
Max z = -x1 + 2x2 + 9x3 + 0x4 - 3x5 + 0x6 - 8x7 + 5x8
To have positive-sign factors only:
Max z = y1 + 2x2 + 9x3 + 0x4 + 3y5 + 0x6 + 8y7 + 5x8 - 12
Dropping the constant, the equivalent objective function is:
Max z = y1 + 2x2 + 9x3 + 0x4 + 3y5 + 0x6 + 8y7 + 5x8
The constraints are changed to:
y1 + y2 + y3 + x4 + y5 + y6 + y7 + y8 <= 5
3y1 + 1x2 + 2y3 + 1x4 + 6x5 + 0x6 + 5x7 + y8 >= 15
2x1 + x2 + 2x3 + 0x4 + 9x5 + 3y6 + 3y7 + x8 >= 13
Now the complement-constraints are formed:
x1 + x2 + x3 + y4 + x5 + x6 + x7 + x8 >= 3
3x1 + 1y2 + 2x3 + 1y4 + 6y5 + 0y6 + 5y7 + x8 <= 4
2y1 + y2 + 2y3 + 0y4 + 9y5 + 3x6 + 3x7 + y8 <= 8
Finally, the standard form of the example is:
Max z = y1 + 2x2 + 9x3 + 0x4 + 3y5 + 0x6 + 8y7 + 5x8
Superior constraints:
x1 + x2 + x3 + y4 + x5 + x6 + x7 + x8 >= 3
3y1 + 1x2 + 2y3 + 1x4 + 6x5 + 0x6 + 5x7 + y8 >= 15
2x1 + x2 + 2x3 + 0x4 + 9x5 + 3y6 + 3y7 + x8 >= 13
Inferior constraints:
y1 + y2 + y3 + x4 + y5 + y6 + y7 + y8 <= 5
3x1 + 1y2 + 2x3 + 1y4 + 6y5 + 0y6 + 5y7 + x8 <= 4
2y1 + y2 + 2y3 + 0y4 + 9y5 + 3x6 + 3x7 + y8 <= 8
where y_j = 1 - x_j. 
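The per-constraint part of this standardization (replacing negative-coefficient variables with their complements and adjusting the RHS) can be sketched as follows; the tuple representation is an assumption of this sketch:

```python
def standardize_constraint(coeffs, rhs):
    """Replace each negative-coefficient variable with its complement.

    A term a_j * x_j with a_j < 0 is rewritten as |a_j| * y_j - |a_j|
    (using y_j = 1 - x_j), so the constant -|a_j| moves to the RHS.
    Returns the non-negative coefficients, flags marking which variables
    were complemented, and the adjusted RHS.
    """
    complemented = [a < 0 for a in coeffs]
    new_coeffs = [abs(a) for a in coeffs]
    new_rhs = rhs - sum(a for a in coeffs if a < 0)
    return new_coeffs, complemented, new_rhs

# Second constraint of the example: the RHS moves from 9 to 15, as in the text.
print(standardize_constraint([-3, 1, -2, 1, 6, 0, 5, -1], 9))
```

Applied to the third constraint of the example, the same routine moves the RHS from 7 to 13, again matching the text.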
Weight assignment
Obviously in the above example, each of the eight variables (x1, x2, x3, x4, x5, x6, x7, x8, or equivalently y1, y2, y3, y4, y5, y6, y7, y8) can take one of the two values 0 or 1, so the problem has 2^8 = 256 alternatives. Testing all alternatives to find the optimum for even a normal-size problem is computationally expensive. In exact solution approaches such as branch-and-bound (B&B), the variables are prioritized in ascending order of their factors in the objective function only, and their effect in satisfying the constraints is ignored. Therefore, it frequently happens that a variable is assigned the value 1 too early although in the optimum solution that variable must be 0. Hence, too many alternatives might be tested to correct the value of this variable in order to find the optimum solution. Furthermore, to ensure and verify optimality, too many alternatives are often tested even when the optimum is found in the early steps. To reduce the number of alternatives, in addition to the factors of variables in the objective function, the role of variables in satisfying the constraints has to be considered as well. Much of the effort in the work at hand was dedicated to considering the role of the variables in the constraints, and based on the results the following approach was chosen.
Determining the Coefficient of Constraints (CoC)
Each constraint represents a set of acceptable solutions, and it seems a valid claim that a constraint with a smaller set typically has a higher influence on the region of the optimum solution and thus has to have a higher effect coefficient. The solutions of the inferior and superior constraints correspond to one another, and consequently the numbers of elements in both sets are equal (n(C) = n(Ø)). Using the above explanations, it can be stated that the effects of a constraint and its complement-constraint are equal, or in other words each superior constraint and its corresponding inferior constraint have the same influence on the region of the optimum solution. Because all variables and their complements are binary (0 or 1), the expressions x1 + x2 + x3 + y4 + x5 + x6 + x7 + x8 >= 3 and y1 + y2 + y3 + x4 + y5 + y6 + y7 + y8 <= 5 in the above example have left-hand-side values between 0 and 8. 
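The claim n(C) = n(Ø) can be checked by brute force on the example pair of constraints; this enumeration is only a demonstration, not part of the algorithm:

```python
from itertools import product

def count_solutions(coeffs, sense, rhs):
    """Count the 0/1 assignments that satisfy a single constraint."""
    n = len(coeffs)
    count = 0
    for x in product((0, 1), repeat=n):
        lhs = sum(a * v for a, v in zip(coeffs, x))
        if (lhs >= rhs) if sense == ">=" else (lhs <= rhs):
            count += 1
    return count

# Superior constraint (sum of eight binaries >= 3) and its inferior
# complement (sum of the eight complements <= 5) have equal solution counts:
n_sup = count_solutions([1] * 8, ">=", 3)
n_inf = count_solutions([1] * 8, "<=", 5)
print(n_sup, n_inf)  # both 219: the map y = 1 - x pairs the solutions
```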
The RHS of a superior constraint after standardization is denoted by P, and the RHS of the inferior constraint after standardization is denoted by S. It is obvious that increasing P or decreasing S reduces the size of the sets C and Ø accordingly, and consequently increases the effect of that constraint. Once the P values for all constraints (the vector P) are non-positive, the problem is fully feasible. On the other hand, if one element of the vector S is negative, the problem is infeasible. The last two statements (all P values non-positive and at least one S negative) can never occur simultaneously, because P+S is the sum of the absolute values of the factors in the constraint and is always non-negative. Hence, CoC is defined as: (2)
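P, S, and the two feasibility tests above can be sketched directly from equations (12)-(15) given later in the paper; the representation of constraints as (coefficients, sense, RHS) is an assumption of the sketch:

```python
def p_and_s(coeffs, sense, rhs):
    """P is the RHS of the superior form, S the RHS of the inferior form."""
    pos = sum(a for a in coeffs if a > 0)
    neg = sum(a for a in coeffs if a < 0)
    if sense == ">=":
        return rhs - neg, pos - rhs
    return pos - rhs, rhs - neg  # "<=" case

def fully_feasible(ps):
    """All P values non-positive: any remaining assignment is feasible."""
    return all(p <= 0 for p, _ in ps)

def infeasible(ps):
    """Some S value negative: no feasible assignment remains."""
    return any(s < 0 for _, s in ps)

# Second constraint of the example: P = 15, S = 4, and P + S = 19 is the
# sum of the absolute values of its factors.
print(p_and_s([-3, 1, -2, 1, 6, 0, 5, -1], ">=", 9))
```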
Determining the Effect Coefficient for each variable (SPCoV, SNCoV)
Considering a standard problem, the superior constraints can be written in the general form: (3)
and the inferior constraints can also be written in the following general form: (4)
Clearly: (5)
On the other hand, the effect of the k-th variable in satisfying the constraints can be represented by its factors in the constraints (note that all variables are binary). A variable can take the value 0 or 1, and each value has a different effect on the constraints. To analyze the amount of influence of each variable on the constraints, two new coefficients are defined here. In a standard problem, considering all variables in the objective function, SPCoV is defined below such that it expresses the effect of each variable once it is given the value 1. Similarly, SNCoV is defined such that it expresses the effect of each variable once it is given the value 0. Thus, for m constraints and n variables, these coefficients are defined as: (6)
Note that: (7) (8)
Normalization of objective function factors
To normalize the objective function factors, the absolute value of each factor is divided by the sum of all the absolute values. Evidently, the total of all normalized factors equals one.
Normalization of the SPCoV and SNCoV coefficients
Although a series of values is usually normalized by dividing them by their total so that the sum of the normalized values is 1, another approach is used here. In the solution
algorithm presented in this work, the SPCoV and SNCoV coefficients relate to the feasibility of the problem, since they are formed from the constraints. Thus, when the importance of feasibility in a problem is higher than that of optimality, these feasibility coefficients (i.e., all SPCoVs and SNCoVs) have to have a greater effect than the objective function factors. This means that when the difficulty of the constraints increases (feasibility decreases and therefore its importance increases), the sum of all feasibility coefficients must also increase in comparison with the normalized factors of the objective function. For this purpose, to normalize the feasibility coefficients, they are divided by the number of constraints (m) instead of by their total. The result can be larger than one, whereas the sum of the normalized objective function factors is always one.
Difficulty coefficient of the sum of all constraints = (9)
Determining the Final Coefficient (FCo) for each variable
It should be noted that using the constraints alone to prioritize variables does not provide a comprehensive solution strategy. This is similar to considering only the ascending order of the objective function factors in the branch-and-bound approach, which sometimes leads to a prolonged or too extensive search. Therefore, a more reliable approach is to use both the factors of the constraints and the factors of the objective function. Prior to explaining how to use both of these factors, it should be pointed out that the two following approaches normally exist for solving binary problems:
1- The general approach considers both the feasibility and the optimality of the solution. Here the aim is to find a feasible solution as close to the optimum as possible.
2- The particular approach considers the feasibility of the solution only. This approach is used when the solution space is significantly small and finding a feasible solution is vital. 
Here the probability of finding a feasible solution is greater than in the general approach, as the focus is on feasibility only. Obviously an optimum but infeasible solution is of no use. Hence, in the algorithm presented in this work the general approach is employed first; after several iterations, as the search gets closer to the optimum, the solution space shrinks. Whenever finding a better feasible solution through the general approach is no longer possible, the particular approach is employed. In the general approach both of the above-mentioned factors (constraints and objective function) are used simultaneously in the manner that follows. Once a variable is 1, the objective function clearly increases by the value of its factor in that function; furthermore, the constraints are satisfied in the amount of its SPCoV. On the other hand, once a variable is 0, the objective function does not change and the constraints are satisfied in the amount of its SNCoV. Therefore, a new coefficient can be defined
for each variable by adding SPCoV to the normalized factor of that variable in the objective function and then dividing by SNCoV.
FCo( ) = (10)
In the particular approach the objective function factors are neglected, and therefore FCo is calculated as: (11)
Once a variable has an FCo greater than 1, it has a high merit to take the value 1, with the variable with the highest FCo having the highest merit. On the other hand, a variable with FCo smaller-equal 1 has a high merit to become 0, with the variable with the smallest FCo having the highest merit to become zero. The details of how to use FCo will be explained later in the proposed algorithm.
Strong variable
In the branch-and-bound approach many branches and sub-branches are created, and then, using the feasibility test, it can be observed that some of the variables have to be fixed to a specific value. In the algorithm proposed in this work, this fact is discovered earlier, in higher-level branches, and the feasibility tests at lower levels are skipped. This is done by defining strong variables. During the iterations of the algorithm, variables are gradually assigned 0 or 1 values and consequently the RHS values of the constraints are decreased accordingly. Whenever the factor of a variable in an inferior constraint is greater than the RHS of that constraint, that variable is considered strong and is assigned zero. This is because inferior constraints have a smaller-equal sign and all variables have non-negative factors in a standard problem; therefore, if the factor of a variable is greater than the RHS and the variable were assigned 1, multiplying the factor by 1 would obviously violate the constraint. The assigned value of a strong variable is fixed, unlike the other variables, which take values based on the weighting scheme. 
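Because the printed formulas (10) and (11) are not reproduced here, the following sketch reconstructs FCo from the prose description (general approach: SPCoV plus the normalized objective factor, divided by SNCoV; particular approach: SPCoV divided by SNCoV). The exact published form may differ, so treat this as an assumption:

```python
def fco(spcov, sncov, obj_norm=None):
    """Final Coefficient of one variable (reconstructed from the prose).

    General approach:   (SPCoV + normalized objective factor) / SNCoV
    Particular approach (obj_norm is None): SPCoV / SNCoV
    SNCoV == 0 marks a firm variable, which is fixed to 1, so FCo is
    taken as +infinity here.
    """
    if sncov == 0:
        return float("inf")
    numerator = spcov if obj_norm is None else spcov + obj_norm
    return numerator / sncov

def merit_value(f):
    """FCo > 1 merits the value 1; FCo <= 1 merits the value 0."""
    return 1 if f > 1 else 0
```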
Firm variables
Once the vector of factors of a variable in all of the superior constraints of a standard problem is greater-equal zero, then SNCoV = 0, and that variable is firm and will be assigned the value 1. This definition of a firm variable applies when the general approach is employed. For the particular approach, in addition to the firm variables with SNCoV = 0 (which are
assigned to 1), the variables with SPCoV = 0 are also considered firm and will be assigned to 0.
Obsolete constraints
As mentioned earlier, in the proposed algorithm variables (such as strong or firm variables) are gradually assigned values during the iterations. The value of each such variable is multiplied by its factor in each constraint and then deducted from the RHS. At some point the RHS of a superior constraint becomes zero or negative. A superior constraint has a greater-equal sign, and a standard problem has non-negative factors only. Thus, a zero or negative RHS means that the remaining variables can take any value and the constraint will hold anyway. Hence, this constraint and its complement become obsolete.
2.2. The algorithm steps
The algorithm works on the concept of finding a solution and, if possible, improving this solution further by adding extra constraints. To find a solution in each step, knowledge is extracted from the values of the factors in the objective function and the constraints. This knowledge is then processed further in a loop called the improvement loop to create smaller problems intelligently. Next, the knowledge is also used in a different manner to assign weights to the constraints and the variables. Based on the weighting scheme, the variables are assigned values. From the assigned values, new knowledge is extracted. This new knowledge upgrades the previous knowledge, and the algorithm uses it as feedback for further improvement. The algorithm stops with the best solution found so far once there can be no further improvements. The procedure of the algorithm is such that it produces a better solution in each iteration while using all of the knowledge that can be extracted from the problem factors to improve the solutions. Therefore, the algorithm can be expected to produce promising results. This is borne out by the results discussed later. 
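The strong-variable and obsolete-constraint tests defined above can be sketched over the standard (non-negative) form; the pair representation of constraints is an assumption of the sketch:

```python
def strong_zero_variables(inferior):
    """Indices of variables whose factor exceeds some inferior-constraint RHS.

    In a standard problem such a variable would violate the smaller-equal
    constraint if set to 1, so it is fixed to 0.
    `inferior` is a list of (coefficients, rhs) pairs in standard form.
    """
    fixed = set()
    for coeffs, rhs in inferior:
        for j, a in enumerate(coeffs):
            if a > rhs:
                fixed.add(j)
    return fixed

def obsolete_superior(superior):
    """Indices of superior constraints whose updated RHS is <= 0; they are
    satisfied by any remaining assignment and can be dropped together with
    their complements."""
    return [i for i, (_, rhs) in enumerate(superior) if rhs <= 0]

# Inferior constraint 3x1 + y2 + 2x3 + y4 + 6y5 + 0y6 + 5y7 + x8 <= 4:
# the factors 6 and 5 exceed the RHS, so those two variables are strong.
print(strong_zero_variables([([3, 1, 2, 1, 6, 0, 5, 1], 4)]))  # {4, 6}
```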
The algorithm is described in the following steps, and its flowchart is presented in figure 1. For an example of each step, please refer to the comprehensive example given in the appendix at the end of this paper. The algorithm has an initialization step in which the problem is transformed into a standard problem. The solution approach can also be explained using a table, which is introduced for simplicity and a better understanding of the algorithm. Consider a BIP problem with n variables and m constraints.
1. The objective function has to be of maximization type (Max-type), so if the objective of the BIP is minimization, multiply the objective by -1. Then the first n elements in the first row of the table are the signs of the objective function factors, each written in one column (from column 1 to n) accordingly.
2. The second row of the table is devoted to the variables. This means that the second row is filled with x and y variables, with index 1 for the first column, index 2 for the second column, index 3 for the third column, and so on up to the n-th column. In each column, if the first row has a positive sign, then the variable x_j is used; otherwise
(negative sign) y_j is used, where j = 1 to n. It should be noted that this way of filling the table is a step towards the standardization of the problem.
3. The third row of the table (from column 1 to column n) is filled with the absolute values of the objective function factors.
4. The constraints are written from the 4th row onwards, one row per constraint, in the manner that follows (up to now there are m+3 rows). First, all of the constraints have to be transformed to greater-equal or smaller-equal type by the technique explained in the standardization of the problem. For each constraint of the problem (before standardization) that has a greater-equal sign, a positive sign must be written in column n+1 of the row specific to that constraint. Similarly, for smaller-equal constraints a negative sign is written. The factor of each variable in the constraint is then multiplied by this sign (written in the (n+1)-th column) and also by the sign written in the first row of the table in the column of that variable.
5. The resulting values are written in the row for that constraint, from column 1 to n.
6. In the same row for a constraint, P is written in column n+2 and S in column n+3. P and S are calculated as below. When the sign in column n+1 is positive:
P = RHS - (the sum of the negative factors in the constraint prior to standardization) (12)
S = (the sum of the positive factors in the constraint prior to standardization) - RHS (13)
When the sign in column n+1 is negative:
P = (the sum of the positive factors in the constraint prior to standardization) - RHS (14)
S = RHS - (the sum of the negative factors in the constraint prior to standardization) (15)
It should be noted that:
P + S = the sum of the absolute values of the factors of the constraint (16)
The primary table, which is basically the initialization section of the algorithm, is now ready. The general approach is considered initially.
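Steps 1-6 of this initialization can be sketched as one routine; the function name and tuple layout are assumptions of this sketch:

```python
def primary_table(c, constraints):
    """Build the initialization table (steps 1-6) for a Max-type objective.

    `constraints` holds (coefficients, sense, rhs) triples with sense
    '>=' or '<='. Each table entry is the factor multiplied by the
    constraint sign and by the objective-factor sign (steps 4-5); P and S
    follow equations (12)-(15).
    """
    obj_signs = [1 if cj >= 0 else -1 for cj in c]
    names = [("x" if s > 0 else "y") + str(j + 1)
             for j, s in enumerate(obj_signs)]
    abs_c = [abs(cj) for cj in c]
    rows = []
    for coeffs, sense, rhs in constraints:
        csign = 1 if sense == ">=" else -1
        entries = [a * csign * s for a, s in zip(coeffs, obj_signs)]
        pos = sum(a for a in coeffs if a > 0)
        neg = sum(a for a in coeffs if a < 0)
        P, S = (rhs - neg, pos - rhs) if csign > 0 else (pos - rhs, rhs - neg)
        rows.append((entries, P, S))
    return names, abs_c, rows

# The sample problem after the Max transformation, with its second constraint:
names, abs_c, rows = primary_table(
    [-1, 2, 9, 0, -3, 0, -8, 5],
    [([-3, 1, -2, 1, 6, 0, 5, -1], ">=", 9)])
print(names)        # ['y1', 'x2', 'x3', 'x4', 'y5', 'x6', 'y7', 'x8']
print(rows[0][1:])  # (15, 4), i.e. P = 15 and S = 4
```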
Fig 1: The flowchart of the algorithm
7. Now the improvement loop begins:
- If there is a constraint with negative S, then go to step 19.
- Find the strong and firm variables and the obsolete constraints.
- Assign the values of the strong and firm variables and omit the obsolete constraints (if any exist).
- Update the P and S values of the new problem (note that the rest of the table does not change). The updating is as follows: when a variable is assigned 1, its positive factors are deducted from P and its negative factors are added to S; when a variable is assigned 0, its positive factors are deducted from S and its negative factors are added to P.
- Loop back to find the strong and firm variables and obsolete constraints again, until there are no more such variables and constraints. Looping back is needed because after each iteration of this loop the new setting of the problem might create the possibility of finding further strong or firm variables and obsolete constraints. Thus, once the loop ends, it is certain that all of the leftover variables and constraints need another approach for evaluation. This is done through the weighting scheme below. All the values given to the specified variables in this step must be written in row m+10. The values of the remaining variables (the empty columns of this row) will be filled in step 13, after the weighting steps.
8. The weighting scheme begins here, by dividing the CoC of each constraint by the (P+S) of that constraint and writing the result in the (n+4)-th column of the row for that constraint.
9. In this step, row m+4 is filled from column 1 to n. For each variable (i.e., for each column), the sum of the products of the positive factors written in step 4 for that variable and the number calculated in the previous step for each constraint is written in row m+4 in the same column. This value is the SPCoV of the variable of that column. 
Similarly, the sum of the products of the negative factors written in step 4 for each variable and the number calculated in the previous step for each constraint is written in row m+5 in the same column. This is the SNCoV of the associated variable.
10. Rows m+6 and m+7 are the normalized values of the two previous rows, respectively.
11. Row m+8 contains the normalized values of the objective function factors (row 3).
12. FCo is now calculated for the variables that have not yet been given any value, and written in row m+9. The weighting is now complete.
13. Using the FCo weights of the previous step, the associated variables are assigned 0 or 1 values according to the criterion that those with FCo greater than 1 are given the value 1 and those with FCo smaller-equal 1 are given the value 0. This is a solution created for the BIP problem and is called the base solution hereafter; this base solution fills the blank elements of row m+10. If only one variable is given a value in this step, then update the problem (P and S) as in step 6 and go to step 17. Otherwise, continue.
14. The feasibility of the solution is tested in this step. If it is feasible, go to the next step (Yes-Loop); otherwise go to step 17 (No-Loop).
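Steps 8-10 can be sketched as below. Because the printed definition of CoC (equation (2)) is not reproduced here, the per-constraint weight CoC/(P+S) is approximated by P/(P+S); that substitution is an assumption of this sketch:

```python
def spcov_sncov(rows, m):
    """Rows m+4 to m+7 of the table: SPCoV and SNCoV, normalized by m.

    `rows` is a list of (entries, P, S) per constraint, as in the primary
    table. A variable's positive entries feed SPCoV and its negative
    entries feed SNCoV, each scaled by the per-constraint weight.
    """
    n = len(rows[0][0])
    spcov = [0.0] * n
    sncov = [0.0] * n
    for entries, P, S in rows:
        w = P / (P + S)  # assumed stand-in for CoC / (P + S)
        for j, a in enumerate(entries):
            if a > 0:
                spcov[j] += a * w
            elif a < 0:
                sncov[j] += -a * w
    # Normalization by the number of constraints m (not by the total),
    # as described in the normalization section above.
    return [v / m for v in spcov], [v / m for v in sncov]
```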
15. The Yes-Loop improves the found feasible solution as much as possible. In this loop we set aside all of the variables with value 1 and update the problem using these values. The updating approach is the same as explained in step 6. Therefore, the number of variables of this sub-problem is the number of variables with value zero prior to this step (it has fewer than n variables). All of the variables with value 1 are omitted from the objective function to form the new objective function. Also, an additional constraint is added to the sub-problem:
New objective function >= the maximum of {1, the smallest factor of the new objective function} (17)
This sub-problem is now solved by going to step 6. The Yes-Loop continues until the last sub-problem is infeasible.
16. Now the solution has been improved; note that the improved solution has n variables. The objective function value is calculated for this improved solution and a new problem is formed by adding this constraint:
Objective function >= 1 + the objective function value of the improved solution (18)
To escape from a local optimum, the two following constraints are also added, which force a new solution to be found if possible:
The sum of all variables that had value 1 <= (the number of these variables - 1) (19)
The sum of all variables that had value 0 >= 1 (20)
With this new problem go to step 6. This forces the quest for a better solution to continue until, at some level, the algorithm stops due to infeasibility.
17. The No-Loop starts here. First reverse step 13, as if the algorithm had just passed step 12. Find the maximum and minimum of the FCo values calculated in step 12. If the product of this maximum and minimum is greater than 1, assign the value 1 to the variable with the maximum FCo; otherwise, assign 0 to the variable with the minimum FCo. Update P and S in the same way as stated in step 6. Take this new problem and go to step 18.
18. Feasibility test: if the problem is feasible, go to step 14. Otherwise, continue.
19. 
Check whether the general or the particular approach is currently in use. If the approach is currently the general one, change to the particular approach and go to step 6. Otherwise (particular approach), the last found solution is the final solution.
3. Results
The algorithm was tested in various manners using a normal PC. First, the updated MIPLIB 3.0, the well-known Mixed Integer Programming Library, was used. To compare LOKBA with other recent algorithms (default Cplex, LB, RINS, DINS, VNDS, and VNSB), only the results of problems solved by
other algorithms are reported here. Table 1 illustrates the results of LOKBA in column 6 for the eight problems given in the first column. The results of the other algorithms (Default Cplex, LB, RINS and DINS) reported in (Ghosh, 2007) are shown in columns 2 to 5 for comparison. The run time for LOKBA on a normal PC (2.0 GHz Intel) is given in column 7. It should be noted that the run time of all the other algorithms, for all eight problems, is one CPU-hour on a 2403 MHz AMD Athlon processor. LOKBA is much faster, with results not far from those of the others.

Table 1: Optimality and running time comparison
Columns: Problem Name, Default Cplex, LB, RINS, DINS, LOKBA, LOKBA Run Time (Sec)
Problems: seymour, sp97ar, sp97ic, sp98ar, sp98ic, fast, harp, t, ds

To further verify the algorithm and compare it with other recent works, the optimality and running time of LOKBA are compared to VNSB and VNDS, two recent methods for 0-1 programming. Table 2 reports the results of the algorithms and their run times in seconds. It also illustrates that LOKBA achieves great speed with very good solutions.

Table 2: Optimality and running time comparison
Columns: Problem Name, VNDS, VNDS Run Time (Sec), VNSB, VNSB Run Time (Sec), LOKBA, LOKBA Run Time (Sec)
Problems: seymour, sp97ar, sp97ic, sp98ar, sp98ic

The LOKBA algorithm is also tested on sample problems taken from the OR-Library website. The results are shown in Table 3. The sizes of these sample problems range from 100 variables and 15 constraints to 2500 variables and 100 constraints. The run times of the algorithm range from 0.01 to 14 seconds. The maximum deviation from the given solution was 1.3 percent and the average deviation was 0.4 percent.
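The deviation figures above follow from a simple percentage-gap calculation: 100 · |best − found| / |best| for each instance, then the maximum and average over all instances. A minimal sketch (the sample objective pairs below are hypothetical placeholders, not the paper's data):

```python
def percent_gap(best, found):
    """Percentage deviation of a found objective value from the best known value."""
    return 100.0 * abs(best - found) / abs(best)

def gap_stats(results):
    """Maximum and average percentage deviation over (best, found) pairs."""
    gaps = [percent_gap(b, f) for b, f in results]
    return max(gaps), sum(gaps) / len(gaps)

# Hypothetical (best known, found) objective pairs -- not the paper's measurements.
sample = [(24381, 24381), (10000, 9987), (59822, 59045)]
worst, average = gap_stats(sample)
```

The same two statistics (maximum and average deviation) are what the OR-Library comparison reports.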
Table 3: The results for 10 problems from the OR-Library
Columns: n-m, Optimum, LOKBA, Run Time, Ratio

In order to analyze the sensitivity of the algorithm to increases in the number of variables and constraints, a graph is shown in Figure 2. After applying regression to the results shown in Table 3, the run time (which represents the algorithm's order) can be approximated as a function of the number of constraints multiplied by the square of the number of variables (mn²). The correlation coefficient of this approximation is R² = %.

Fig 2: The relationship between run time and mn² for the 10 OR-Library sample problems

Next, as another test of the proposed algorithm, the results of the algorithm by Fortin and Tseveendorj (Fortin and Tseveendorj, 2009) are also used. All of their sample problems have 500 variables and 30 constraints. They have not provided the run times of their approach. The deviation of LOKBA from their results is 1.5 percent, with run times of about 1 second.
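The regression behind Figure 2 fits run time as proportional to mn². A minimal least-squares fit through the origin, together with the R² goodness-of-fit measure, can be sketched as follows; the (n, m, time) triples are illustrative only (constructed to satisfy t = c·mn² exactly), not Table 3's measurements:

```python
def fit_runtime(samples):
    """Fit t ~ a * (m * n^2) by least squares through the origin; return (a, r2).

    samples: list of (n_vars, m_constraints, runtime_seconds) triples.
    """
    xs = [m * n * n for n, m, _ in samples]        # predictor: m * n^2
    ts = [t for _, _, t in samples]                # observed run times
    a = sum(x * t for x, t in zip(xs, ts)) / sum(x * x for x in xs)
    mean_t = sum(ts) / len(ts)
    ss_res = sum((t - a * x) ** 2 for x, t in zip(xs, ts))
    ss_tot = sum((t - mean_t) ** 2 for t in ts)
    r2 = 1.0 - ss_res / ss_tot
    return a, r2

# Illustrative triples with runtime exactly 1e-7 * m * n^2.
data = [(100, 15, 0.015), (500, 25, 0.625), (2500, 100, 62.5)]
a, r2 = fit_runtime(data)
```

An R² close to 1 on such a fit is what supports reading mn² as the empirical order of the algorithm.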
In addition, several BIP problems of small to large size (up to 1,000,000 variables) with known optimal solutions are generated and tested. The algorithm produces the optimal solution in all of these cases. The results are shown in Table 4. The run times are again surprisingly low: less than 8.5 hours for the problem with 1,000,000 variables, less than 4 seconds for the problem with 10,000 variables, and a fraction of a second for the smaller problems.

Table 4: The results for 5 large-size instances
Columns: n-m, Optimum solution, Solution by the algorithm, Run Time (Sec)

Although many different BIP problems were tested in this step, a set of problems with similar structures was used in order to analyze the sensitivity of the algorithm as the number of variables grows. Using the data of Table 4, Figure 3 demonstrates behavior similar to that shown in Figure 2, with an even better approximation. Hence, it is predictable that the relationship will be the same for other binary problems. Therefore, the low sensitivity of the presented approach to the size of the problem, in comparison to other approaches, makes it suitable for solving extra-large problems.

Fig 3: The relationship between run time and mn² for the above large-size samples

4. Conclusion and future works

There is definitely information and knowledge embedded in the factors of the constraints and the objective function of any binary integer programming problem. This
knowledge, if extracted and implemented intelligently, can help in finding the optimal solution. In this work, this is attempted and a novel algorithm is developed that uses this knowledge at many different levels. The knowledge is employed to form a so-called standard problem; the variables and constraints are then classified in an innovative manner. Next, the constraints and variables are weighted. The solutions are improved towards the optimum over several iterations. In these iterations the variables are assigned values and new knowledge is created, which finally yields an optimal or near-optimal solution within a short amount of time.

The efficiency of the algorithm is tested using many different BIP problems. The algorithm produces optimal or near-optimal solutions in all test cases. Given the optimality of the numerous results acquired in amazingly short run times, the algorithm is promising. It can solve any type of BIP problem (whether the factors are positive, negative, or both, and without any limitation on the number of constraints) with low sensitivity to the size of the problem. Based on these characteristics, there is high potential for extending this approach by combining it with other methods. For example, it can be combined with previous works such as DINS, RINS, LB, VNSB and VNDS, which are used for problems with far fewer variables than those tested in this work. The proposed algorithm can provide an initial solution for a very large problem, with a satisfactory gap from the optimum, in a short time; the other methods can then improve the solution and decrease the gap. However, it should be noted that a solution with some gap may be satisfactory for huge real-world commercial problems, which normally carry a considerable amount of uncertainty. Instead of very exact solutions, there might be a need to run the BIP models frequently in a short time.
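The local-optimum escape in step 16 of the algorithm adds cuts built from the incumbent solution: the variables currently at 1 must not all stay at 1, and at least one variable currently at 0 must switch to 1. A hedged sketch of how such cuts could be constructed and checked (the (coefficients, sense, rhs) representation is an assumption of this sketch, not the paper's data structure):

```python
def escape_cuts(x):
    """Build the two escape cuts of step 16 from an incumbent 0/1 solution x.

    Each cut is (coeffs, sense, rhs), meaning sum(c_i * x_i) <sense> rhs:
    - sum over variables at 1 must drop to at most (count - 1);
    - sum over variables at 0 must reach at least 1.
    """
    ones = [1 if v == 1 else 0 for v in x]
    zeros = [1 if v == 0 else 0 for v in x]
    return [(ones, "<=", sum(ones) - 1), (zeros, ">=", 1)]

def satisfies(x, cut):
    """Check whether solution x satisfies a single cut."""
    coeffs, sense, rhs = cut
    lhs = sum(c * v for c, v in zip(coeffs, x))
    return lhs <= rhs if sense == "<=" else lhs >= rhs

incumbent = [1, 0, 1, 1, 0]
cuts = escape_cuts(incumbent)
# The incumbent itself violates both cuts, so any solution feasible for the
# new problem must differ from it in both directions (a 1->0 and a 0->1 flip).
```

This is the mechanism by which the algorithm forces the search to leave an improved solution and continue until infeasibility stops it.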
Another possibility is to consider fuzziness and uncertainty of the data in the algorithm.

5. Acknowledgement

The authors are grateful to the University of Tehran, College of Engineering, for the financial support of this research under grant number 27768/1/
References

Baeck T, Fogel D, and Michalewicz Z (1997), Handbook of Evolutionary Computation, Oxf. Univ. Press, NY
Captivo ME, Climaco J, Figueira J, Martins E, Santos JL (2003), Solving Bicriteria 0-1 Knapsack Problems Using a Labeling Algorithm, Comput. Oper. Res., 30
Chu PC and Beasley JE (1998), A Genetic Algorithm for the Multidimensional Knapsack Problem, J. Heuristics, 4
Danna E, Rothberg E, and Pape CL (2005), Exploring Relaxation Induced Neighborhoods to Improve MIP Solutions, Math. Program., 102:71-90
Dumitrescu I and Stuetzle T (2003), Combinations of Local Search and Exact Algorithms, Applications of Evolutionary Computations, Lect. Notes Comput. Sc., 2611, Springer
Fischetti M and Lodi A (2003), Local Branching, Math. Program. B, 98
Fortin D, Tseveendorj I (2009), A Trust Branching Path for Zero-One Programming, Eur. J. Oper. Res., 197
Ghosh S (2007), DINS, a MIP Improvement Heuristic, Lect. Notes Comput. Sc., 4513, Springer
Glover F and Laguna M (1997), Tabu Search, Klu. Acad. Publ.
Glover F, Laguna M, and Marti R (2000), Fundamentals of Scatter Search and Path Relinking, Control. Cybern., 39(3)
Haartman KV, Kohn W, Zabinsky ZB (2008), A Meta-Control Algorithm for Generating Approximate Solutions to Binary Integer Programming Problems, Nonlinear Anal. Hybrid Syst., 2
Hansen P, Mladenovic N, Urosevic D (2006), Variable Neighborhood Search and Local Branching, Comput. Oper. Res., 33
Kellerer H, Pferschy U, and Pisinger D (2004), Knapsack Problems, Springer
Kirkpatrick S, Gellat C and Vecchi M (1983), Optimization by Simulated Annealing, Science, 220
Larranaga P and Lozano J (2001), Estimation of Distribution Algorithms, a New Tool for Evolutionary Computation, Klu. Acad. Publ.
Lazic J, Hanafi S, Mladenovic N, Urosevic D (2009), Variable Neighborhood Decomposition Search for 0-1 Mixed Integer Programs, Comput. Oper. Res.
Moscato P and Cotta C (2003), A Gentle Introduction to Memetic Algorithms, in Glover F and Kochenberger G (Eds.), Handbook of Metaheuristics, Chapter 5, Klu. Acad. Publ., Boston
Namazifar M, Miller AJ (2008), A Parallel Macro Partitioning Framework for Solving Mixed Integer Programs, Lect. Notes Comput. Sc., 5015, Springer
Nemhauser G and Wolsey L (1988), Integer and Combinatorial Optimization, J. Wiley, NY
Vasquez M, Hao JK (2001), A Hybrid Approach for the 0-1 Multidimensional Knapsack Problem, Int. Joint Conf. Artif. Intell.
Wang Z, Xing W (2009), A Successive Approximation Algorithm for the Multiple Knapsack Problem, J. Comb. Optim., 17
Wolsey LA (1998), Integer Programming, J. Wiley, NY
