
Chapter 7

Weighted Graphs

Contents

7.1 Single-Source Shortest Paths
    7.1.1 Dijkstra's Algorithm
    7.1.2 The Bellman-Ford Shortest Paths Algorithm
    7.1.3 Shortest Paths in Directed Acyclic Graphs
7.2 All-Pairs Shortest Paths
    7.2.1 A Dynamic Programming Shortest Path Algorithm
    7.2.2 Computing Shortest Paths via Matrix Multiplication
7.3 Minimum Spanning Trees
    7.3.1 Kruskal's Algorithm
    7.3.2 The Prim-Jarník Algorithm
    7.3.3 Barůvka's Algorithm
    7.3.4 A Comparison of MST Algorithms
7.4 Java Example: Dijkstra's Algorithm
7.5 Exercises

As we saw in the previous chapter, the breadth-first search strategy can be used to find a shortest path from some starting vertex to every other vertex in a connected graph. This approach makes sense in cases where each edge is as good as any other, but there are many situations where it is not appropriate.

For example, we might be using a graph to represent a computer network (such as the Internet), and we might be interested in finding the fastest way to route a data packet between two computers. In this case, it is probably not appropriate for all the edges to be equal to each other, for some connections in a computer network are typically much faster than others (for example, some edges might represent slow phone-line connections while others might represent high-speed, fiber-optic connections). Likewise, we might want to use a graph to represent the roads between cities, and we might be interested in finding the fastest way to travel cross-country. In this case, it is again probably not appropriate for all the edges to be equal to each other, for some intercity distances will likely be much larger than others. Thus, it is natural to consider graphs whose edges are not weighted equally.

In this chapter, we study weighted graphs. A weighted graph is a graph that has a numeric label w(e) associated with each edge e, called the weight of edge e. Edge weights can be integers, rational numbers, or real numbers; they represent a concept such as distance, connection cost, or affinity. We show an example of a weighted graph in Figure 7.1.

Figure 7.1: A weighted graph whose vertices represent major U.S. airports and whose edge weights represent distances in miles. This graph has a path from JFK to LAX of total weight 2,777 (going through ORD and DFW). This is the minimum-weight path in the graph from JFK to LAX.

7.1 Single-Source Shortest Paths

Let G be a weighted graph. The length (or weight) of a path P is the sum of the weights of the edges of P. That is, if P consists of edges e_0, e_1, ..., e_{k−1}, then the length of P, denoted w(P), is defined as

    w(P) = Σ_{i=0}^{k−1} w(e_i).

The distance from a vertex v to a vertex u in G, denoted d(v,u), is the length of a minimum-length path (also called a shortest path) from v to u, if such a path exists. People often use the convention that d(v,u) = +∞ if there is no path at all from v to u in G. Even if there is a path from v to u in G, however, the distance from v to u may not be defined if G contains a cycle whose total weight is negative. For example, suppose vertices in G represent cities, and the weights of edges in G represent how much money it costs to go from one city to another. If someone were willing to actually pay us to go from, say, JFK to ORD, then the cost of the edge (JFK, ORD) would be negative. If someone else were willing to pay us to go from ORD to JFK, then there would be a negative-weight cycle in G, and distances would no longer be defined. That is, anyone could now build a path (with cycles) in G from any city A to another city B that first goes to JFK and then cycles as many times as he or she likes from JFK to ORD and back, before going on to B. The existence of such paths would allow us to build arbitrarily low negative-cost paths (and, in this case, make a fortune in the process). But distances cannot be arbitrarily low negative numbers. Thus, any time we use edge weights to represent distances, we must be careful not to introduce any negative-weight cycles.

Suppose we are given a weighted graph G, and we are asked to find a shortest path from some vertex v to each other vertex in G, viewing the weights on the edges as distances. In this section, we explore efficient ways of finding all such single-source shortest paths, if they exist. The first algorithm we discuss is for the simple, yet common, case when all the edge weights in G are nonnegative (that is, w(e) ≥ 0 for each edge e of G); hence, we know in advance that there are no negative-weight cycles in G. Recall that the special case of computing a shortest path when all weights are 1 was solved by the BFS traversal algorithm presented in the previous chapter.

There is an interesting approach for solving this single-source problem based on the greedy method design pattern (Section 5.1). Recall that in this pattern we solve the problem at hand by repeatedly selecting the best choice from among those available in each iteration. This paradigm can often be used in situations where we are trying to optimize some cost function over a collection of objects. We can add objects to our collection, one at a time, always picking the next one that optimizes the function from among those yet to be chosen.
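To make the definitions above concrete, here is a minimal Java sketch of a weighted graph stored as an adjacency list, together with a function computing the path weight w(P). This is our own illustration, not code from this book; the names WeightedGraph, Edge, and pathWeight are hypothetical, and later sketches in this chapter assume a similar representation.

    import java.util.*;

    /** A minimal adjacency-list weighted graph; vertices are 0..n-1. */
    class WeightedGraph {
        /** A weighted edge from u to v (stored in u's adjacency list). */
        record Edge(int u, int v, double w) {}

        final List<List<Edge>> adj;

        WeightedGraph(int n) {
            adj = new ArrayList<>();
            for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        }

        /** Adds an undirected edge by storing it in both adjacency lists. */
        void addEdge(int u, int v, double w) {
            adj.get(u).add(new Edge(u, v, w));
            adj.get(v).add(new Edge(v, u, w));
        }

        /** w(P): the sum of the weights of the edges of a path P. */
        static double pathWeight(List<Edge> path) {
            double total = 0;
            for (Edge e : path) total += e.w;
            return total;
        }
    }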

7.1.1 Dijkstra's Algorithm

The main idea in applying the greedy method pattern to the single-source shortest-path problem is to perform a "weighted" breadth-first search starting at v. In particular, we can use the greedy method to develop an algorithm that iteratively grows a "cloud" of vertices out of v, with the vertices entering the cloud in order of their distances from v. Thus, in each iteration, the next vertex chosen is the vertex outside the cloud that is closest to v. The algorithm terminates when no more vertices are outside the cloud, at which point we have a shortest path from v to every other vertex of G. This approach is a simple, but nevertheless powerful, example of the greedy method design pattern.

A Greedy Method for Finding Shortest Paths

Applying the greedy method to the single-source shortest-path problem results in an algorithm known as Dijkstra's algorithm. When applied to other graph problems, however, the greedy method may not necessarily find the best solution (such as in the so-called traveling salesman problem, in which we wish to find the shortest path that visits all the vertices in a graph exactly once). Nevertheless, there are a number of situations in which the greedy method allows us to compute the best solution. In this chapter, we discuss two such situations: computing shortest paths and constructing minimum spanning trees.

In order to simplify the description of Dijkstra's algorithm, we assume, in the following, that the input graph G is undirected (that is, all its edges are undirected) and simple (that is, it has no self-loops and no parallel edges). Hence, we denote the edges of G as unordered vertex pairs (u,z). We leave the description of Dijkstra's algorithm for a weighted directed graph as an exercise (R-7.2).

In Dijkstra's algorithm, the cost function we are trying to optimize in our application of the greedy method is also the function we are ultimately trying to compute: the shortest-path distance. This may at first seem like circular reasoning until we realize that we can actually implement this approach by using a "bootstrapping" trick: we maintain an approximation to the distance function we are trying to compute, which in the end will be equal to the true distance.

Edge Relaxation

Let us define a label D[u] for each vertex u of G, which we use to approximate the distance in G from v to u. The meaning of these labels is that D[u] will always store the length of the best path we have found so far from v to u. Initially, D[v] = 0 and D[u] = +∞ for each u ≠ v, and we define the set C, which is our "cloud" of vertices, to initially be the empty set. At each iteration of the algorithm, we select a vertex u not in C with smallest D[u] label, and we pull u into C. In the very first iteration we will, of course, pull v into C. Once a new vertex u is pulled into C, we then update the label D[z] of each vertex z that is adjacent to u and is outside of

C, to reflect the fact that there may be a new and better way to get to z via u. This update operation is known as a relaxation procedure, for it takes an old estimate and checks if it can be improved to get closer to its true value. (A metaphor for why we call this a relaxation comes from a spring that is stretched out and then "relaxed" back to its true resting shape.) In the case of Dijkstra's algorithm, the relaxation is performed for an edge (u,z) such that we have computed a new value of D[u] and wish to see if there is a better value for D[z] using the edge (u,z). The specific edge relaxation operation is as follows:

Edge Relaxation:
    if D[u] + w((u,z)) < D[z] then
        D[z] ← D[u] + w((u,z))

Note that if the newly discovered path to z is no better than the old way, then we do not change D[z].

The Details of Dijkstra's Algorithm

We give the pseudo-code for Dijkstra's algorithm in Algorithm 7.2. Note that we use a priority queue Q to store the vertices outside of the cloud C.

Algorithm DijkstraShortestPaths(G, v):
    Input: A simple undirected weighted graph G with nonnegative edge weights, and a distinguished vertex v of G
    Output: A label D[u], for each vertex u of G, such that D[u] is the distance from v to u in G

    D[v] ← 0
    for each vertex u ≠ v of G do
        D[u] ← +∞
    Let a priority queue Q contain all the vertices of G using the D labels as keys.
    while Q is not empty do
        {pull a new vertex u into the cloud}
        u ← Q.removeMin()
        for each vertex z adjacent to u such that z is in Q do
            {perform the relaxation procedure on edge (u,z)}
            if D[u] + w((u,z)) < D[z] then
                D[z] ← D[u] + w((u,z))
                Change the key of vertex z in Q to D[z].
    return the label D[u] of each vertex u

Algorithm 7.2: Dijkstra's algorithm for the single-source shortest path problem for a graph G, starting from a vertex v.

We illustrate several iterations of Dijkstra's algorithm in Figures 7.3 and 7.4.
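For readers who want to experiment before reaching the Java example at the end of the chapter, the following sketch realizes Algorithm 7.2 with java.util.PriorityQueue. This is our own illustration under stated assumptions, not the book's implementation: since PriorityQueue supports no key-update (locator) operation, the sketch re-inserts a vertex whenever its label improves and skips stale entries on removal. This "lazy deletion" variant still runs in O(m log n) time.

    import java.util.*;

    class Dijkstra {
        /** adj.get(u) holds {v, weight} pairs for the edges incident on u. */
        static double[] shortestPaths(List<List<double[]>> adj, int v) {
            int n = adj.size();
            double[] D = new double[n];
            Arrays.fill(D, Double.POSITIVE_INFINITY);
            D[v] = 0;
            // Queue entries are {D-label, vertex}; stale entries are skipped below.
            PriorityQueue<double[]> q =
                new PriorityQueue<>(Comparator.comparingDouble(e -> e[0]));
            q.add(new double[]{0, v});
            while (!q.isEmpty()) {
                double[] top = q.poll();
                int u = (int) top[1];
                if (top[0] > D[u]) continue; // stale entry: u is already in the cloud
                for (double[] edge : adj.get(u)) {
                    int z = (int) edge[0];
                    double w = edge[1];
                    // Edge relaxation: if D[u] + w((u,z)) < D[z], update D[z].
                    if (D[u] + w < D[z]) {
                        D[z] = D[u] + w;
                        q.add(new double[]{D[z], z}); // re-insert instead of key update
                    }
                }
            }
            return D;
        }
    }

Re-inserting grows the queue to O(m) entries instead of n, which is the price paid for avoiding the locator pattern; each entry still costs O(log m) = O(log n) to process.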

Figure 7.3: An execution of Dijkstra's algorithm on a weighted graph, shown in panels (a)–(f). A box next to each vertex u stores the label D[u]; the symbol ∞ is used instead of +∞. The edges of the shortest-path tree are drawn as thick arrows, and for each vertex u outside the cloud we show the current best edge for pulling in u with a solid line. (Continued in Figure 7.4.)

Figure 7.4: Visualization of Dijkstra's algorithm, panels (g)–(j). (Continued from Figure 7.3.)

Why It Works

The interesting, and possibly even a little surprising, aspect of Dijkstra's algorithm is that, at the moment a vertex u is pulled into C, its label D[u] stores the correct length of a shortest path from v to u. Thus, when the algorithm terminates, it will have computed the shortest-path distance from v to every vertex of G. That is, it will have solved the single-source shortest path problem.

It is probably not immediately clear why Dijkstra's algorithm correctly finds the shortest path from the start vertex v to each other vertex u in the graph. Why is it that the distance from v to u is equal to the value of the label D[u] at the time vertex u is pulled into the cloud C (which is also the time u is removed from the priority queue Q)? The answer to this question depends on there being no negative-weight edges in the graph, for this is what allows the greedy method to work correctly, as we show in the lemma that follows.

Lemma 7.1: In Dijkstra's algorithm, whenever a vertex u is pulled into the cloud, the label D[u] is equal to d(v,u), the length of a shortest path from v to u.

Proof: Suppose that D[t] > d(v,t) for some vertex t in V, and let u be the first vertex the algorithm pulled into the cloud C (that is, removed from Q) such that D[u] > d(v,u). There is a shortest path P from v to u (for otherwise d(v,u) = +∞ = D[u]). Let us therefore consider the moment when u is pulled into C, and let z be the first vertex of P (when going from v to u) that is not in C at this moment. Let y be the predecessor of z in path P (note that we could have y = v). (See Figure 7.5.) We know, by our choice of z, that y is already in C at this point. Moreover, D[y] = d(v,y), since u is the first incorrect vertex. When y was pulled into C, we tested (and possibly updated) D[z] so that we had at that point

    D[z] ≤ D[y] + w((y,z)) = d(v,y) + w((y,z)).

But since z is the next vertex on the shortest path from v to u, this implies that D[z] = d(v,z). But we are now at the moment when we are picking u, not z, to join C; hence, D[u] ≤ D[z]. It should be clear that a subpath of a shortest path is itself a shortest path. Hence, since z is on the shortest path from v to u,

    d(v,z) + d(z,u) = d(v,u).

Moreover, d(z,u) ≥ 0 because there are no negative-weight edges. Therefore,

    D[u] ≤ D[z] = d(v,z) ≤ d(v,z) + d(z,u) = d(v,u).

But this contradicts the definition of u; hence, there can be no such vertex u.

Figure 7.5: A schematic illustration for the justification of Lemma 7.1, showing the first "wrong" vertex u (with D[u] > d(v,u)) on the shortest path P, together with the vertices y and z, for which D[y] = d(v,y) and D[z] = d(v,z); since u is picked next, D[u] ≤ D[z].

The Running Time of Dijkstra's Algorithm

In this section, we analyze the time complexity of Dijkstra's algorithm. We denote with n and m the number of vertices and edges of the input graph G, respectively. We assume that the edge weights can be added and compared in constant time. Because of the high level of the description we gave for Dijkstra's algorithm in Algorithm 7.2, analyzing its running time requires that we give more details on its implementation. Specifically, we should indicate the data structures used and how they are implemented.

Let us first assume that we are representing the graph G using an adjacency list structure. This data structure allows us to step through the vertices adjacent to u during the relaxation step in time proportional to their number. It still does not settle all the details for the algorithm, however, for we must say more about how to implement the other main data structure in the algorithm: the priority queue Q.

An efficient implementation of the priority queue Q uses a heap (see Section 2.4.3). This allows us to extract the vertex u with smallest D label, by calling the removeMin method, in O(log n) time. As noted in the pseudo-code, each time we update a D[z] label we need to update the key of z in the priority queue. If Q is implemented as a heap, then this key update can, for example, be done by first removing and then inserting z with its new key. If our priority queue Q supports the locator pattern (see Section 2.4.4), then we can easily implement such key updates in O(log n) time, since a locator for vertex z would allow Q to have immediate access to the item storing z in the heap (see Section 2.4.4). Assuming this implementation of Q, Dijkstra's algorithm runs in O((n + m) log n) time.

Referring back to Algorithm 7.2, the details of the running-time analysis are as follows:

- Inserting all the vertices in Q with their initial key value can be done in O(n log n) time by repeated insertions, or in O(n) time using bottom-up heap construction (see Section 2.4.4).
- At each iteration of the while loop, we spend O(log n) time to remove vertex u from Q, and O(deg(u) log n) time to perform the relaxation procedure on the edges incident on u.
- The overall running time of the while loop is

      Σ_{v∈G} (1 + deg(v)) log n,

  which is O((n + m) log n) by Theorem 6.6.

Thus, we have the following. (Note that we may assume G is connected, since otherwise only the vertices in v's connected component ever receive finite labels; hence m ≥ n − 1, and the O((n + m) log n) bound simplifies to O(m log n).)

Theorem 7.2: Given a weighted n-vertex graph G with m edges, each with a nonnegative weight, Dijkstra's algorithm can be implemented to find all shortest paths from a vertex v in G in O(m log n) time.

Note that if we wish to express the above running time as a function of n only, then it is O(n² log n) in the worst case, since we have assumed that G is simple.

An Alternative Implementation for Dijkstra's Algorithm

Let us now consider an alternative implementation for the priority queue Q using an unsorted sequence. This, of course, requires that we spend Ω(n) time to extract the minimum element, but it allows for very fast key updates, provided Q supports the locator pattern (Section 2.4.4). Specifically, we can implement each key update done in a relaxation step in O(1) time: we simply change the key value once we locate the item in Q to update. Hence, this implementation results in a running time that is O(n² + m), which can be simplified to O(n²) since G is simple.

Comparing the Two Implementations

We have two choices for implementing the priority queue in Dijkstra's algorithm: a locator-based heap implementation, which yields O(m log n) running time, and a locator-based unsorted sequence implementation, which yields an O(n²)-time algorithm. Since both implementations would be fairly simple to code up, they are about equal in terms of the programming sophistication needed. These two implementations are also about equal in terms of the constant factors in their worst-case running times. Looking only at these worst-case times, we prefer the heap implementation when the number of edges in the graph is small (that is, when m < n²/log n), and we prefer the sequence implementation when the number of edges is large (that is, when m > n²/log n).

Theorem 7.3: Given a simple weighted graph G with n vertices and m edges, such that the weight of each edge is nonnegative, and a vertex v of G, Dijkstra's algorithm computes the distance from v to all other vertices of G in O(m log n) time, or, alternatively, in O(n²) time.

In Exercise R-7.3, we explore how to modify Dijkstra's algorithm to output a tree T rooted at v, such that the path in T from v to a vertex u is a shortest path in G from v to u. In addition, extending Dijkstra's algorithm for directed graphs is fairly straightforward. We cannot extend Dijkstra's algorithm to work on graphs with negative-weight edges, however, as Figure 7.6 illustrates.

Figure 7.6: An illustration of why Dijkstra's algorithm fails for graphs with negative-weight edges. Bringing z into C and performing edge relaxations will invalidate the previously computed shortest-path distance (124) to x.

7.1.2 The Bellman-Ford Shortest Paths Algorithm

There is another algorithm, due to Bellman and Ford, that can find shortest paths in graphs that have negative-weight edges. We must, in this case, insist that the graph be directed, for otherwise any negative-weight undirected edge would immediately imply a negative-weight cycle, obtained by traversing this edge back and forth in each direction. We cannot allow such edges, since a negative cycle invalidates the notion of distance based on edge weights.

Let G be a weighted directed graph, possibly with some negative-weight edges. The Bellman-Ford algorithm for computing the shortest-path distance from some vertex v in G to every other vertex in G is very simple. It shares the notion of edge relaxation from Dijkstra's algorithm, but does not use it in conjunction with the greedy method (which would not work in this context; see Exercise C-7.2). That is, as in Dijkstra's algorithm, the Bellman-Ford algorithm uses a label D[u] that is always an upper bound on the distance d(v,u) from v to u, and which is iteratively "relaxed" until it exactly equals this distance.

The Details of the Bellman-Ford Algorithm

The Bellman-Ford method is shown in Algorithm 7.7. It performs n − 1 rounds, in each of which it relaxes every edge in the digraph. We illustrate an execution of the Bellman-Ford algorithm in Figure 7.8.

Algorithm BellmanFordShortestPaths(G, v):
    Input: A weighted directed graph G with n vertices, and a vertex v of G
    Output: A label D[u], for each vertex u of G, such that D[u] is the distance from v to u in G, or an indication that G has a negative-weight cycle

    D[v] ← 0
    for each vertex u ≠ v of G do
        D[u] ← +∞
    for i ← 1 to n − 1 do
        for each (directed) edge (u,z) outgoing from u do
            {Perform the relaxation operation on (u,z)}
            if D[u] + w((u,z)) < D[z] then
                D[z] ← D[u] + w((u,z))
    if there are no edges left with potential relaxation operations then
        return the label D[u] of each vertex u
    else
        return "G contains a negative-weight cycle"

Algorithm 7.7: The Bellman-Ford single-source shortest-path algorithm, which allows for negative-weight edges.
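A direct Java rendering of Algorithm 7.7 may be helpful. This is a sketch under our own representation assumptions (an edge list of records), not the book's code; it returns null to signal the negative-weight-cycle case.

    import java.util.*;

    class BellmanFord {
        record Edge(int u, int z, double w) {}

        /**
         * Returns the distance labels D for source v, or null if a
         * negative-weight cycle is detected. Vertices are 0..n-1.
         */
        static double[] shortestPaths(int n, List<Edge> edges, int v) {
            double[] D = new double[n];
            Arrays.fill(D, Double.POSITIVE_INFINITY);
            D[v] = 0;
            for (int i = 1; i <= n - 1; i++) {      // n-1 rounds...
                for (Edge e : edges) {               // ...relaxing every edge
                    if (D[e.u] + e.w < D[e.z]) D[e.z] = D[e.u] + e.w;
                }
            }
            // If any edge can still be relaxed, a negative-weight cycle exists.
            for (Edge e : edges) {
                if (D[e.u] + e.w < D[e.z]) return null;
            }
            return D;
        }
    }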

Figure 7.8: An illustration of an application of the Bellman-Ford algorithm, panels (a)–(f). The start vertex is JFK. A box next to each vertex u stores the label D[u], with "shadows" showing values revised during relaxations; the thick edges are the ones causing such relaxations.

Lemma 7.4: If at the end of the execution of Algorithm 7.7 there is an edge (u,z) that can be relaxed (that is, D[u] + w((u,z)) < D[z]), then the input digraph G contains a negative-weight cycle. Otherwise, D[u] = d(v,u) for each vertex u in G.

Proof: For the sake of this proof, let us introduce a new notion of distance in a digraph. Specifically, let d_i(v,u) denote the length of a path from v to u that is shortest among all paths from v to u that contain at most i edges. We call d_i(v,u) the i-edge distance from v to u. We claim that after iteration i of the main for-loop in the Bellman-Ford algorithm, D[u] = d_i(v,u) for each vertex u in G. This is certainly true before we even begin the first iteration, for D[v] = 0 = d_0(v,v) and, for u ≠ v, D[u] = +∞ = d_0(v,u). Suppose this claim is true before iteration i (we will now show that if this is the case, then it will be true after iteration i as well). In iteration i, we perform a relaxation step for every edge in the digraph. The i-edge distance d_i(v,u), from v to a vertex u, is determined in one of two ways: either d_i(v,u) = d_{i−1}(v,u), or d_i(v,u) = d_{i−1}(v,z) + w((z,u)) for some vertex z in G. Because we do a relaxation for every edge of G in iteration i, if it is the former case, then after iteration i we have D[u] = d_{i−1}(v,u) = d_i(v,u), and if it is the latter case, then after iteration i we have D[u] = D[z] + w((z,u)) = d_{i−1}(v,z) + w((z,u)) = d_i(v,u). Thus, if D[u] = d_{i−1}(v,u) for each vertex u before iteration i, then D[u] = d_i(v,u) for each vertex u after iteration i.

Therefore, after n − 1 iterations, D[u] = d_{n−1}(v,u) for each vertex u in G. Now observe that if there is still an edge in G that can be relaxed, then there is some vertex u in G such that the n-edge distance from v to u is less than the (n−1)-edge distance from v to u, that is, d_n(v,u) < d_{n−1}(v,u). But there are only n vertices in G; hence, if there is a shortest n-edge path from v to u, it must repeat some vertex z in G twice. That is, it must contain a cycle. Moreover, since the distance from a vertex to itself using zero edges is 0 (that is, d_0(z,z) = 0), this cycle must be a negative-weight cycle. Thus, if there is an edge in G that can still be relaxed after running the Bellman-Ford algorithm, then G contains a negative-weight cycle. If, on the other hand, there is no edge in G that can still be relaxed after running the Bellman-Ford algorithm, then G does not contain a negative-weight cycle. Moreover, in this case, every shortest path between two vertices will have at most n − 1 edges; hence, for each vertex u in G, D[u] = d_{n−1}(v,u) = d(v,u).

Thus, the Bellman-Ford algorithm is correct and even gives us a way of telling when a digraph contains a negative-weight cycle. The running time of the Bellman-Ford algorithm is easy to analyze: we perform the main for-loop n − 1 times, and each such loop involves spending O(1) time for each edge in G. Therefore, the running time for this algorithm is O(nm). We summarize as follows:

Theorem 7.5: Given a weighted directed graph G with n vertices and m edges, and a vertex v of G, the Bellman-Ford algorithm computes the distance from v to all other vertices of G, or determines that G contains a negative-weight cycle, in O(nm) time.

7.1.3 Shortest Paths in Directed Acyclic Graphs

As mentioned above, both Dijkstra's algorithm and the Bellman-Ford algorithm work for directed graphs. We can solve the single-source shortest paths problem faster than these algorithms can, however, if the digraph has no directed cycles, that is, if it is a weighted directed acyclic graph (DAG).

Recall that a topological ordering of a DAG G is a listing of its vertices (v_1, v_2, ..., v_n) such that if (v_i, v_j) is an edge in G, then i < j. Also, recall that we can use the depth-first search algorithm to compute a topological ordering of the n vertices in an m-edge DAG G in O(n + m) time. Interestingly, given a topological ordering of such a weighted DAG G, we can compute all shortest paths from a given vertex v in O(n + m) time.

The Details for Computing Shortest Paths in a DAG

The method, which is given in Algorithm 7.9, involves visiting the vertices of G according to the topological ordering, relaxing the outgoing edges with each visit.

Algorithm DAGShortestPaths(G, s):
    Input: A weighted directed acyclic graph (DAG) G with n vertices and m edges, and a distinguished vertex s in G
    Output: A label D[u], for each vertex u of G, such that D[u] is the distance from s to u in G

    Compute a topological ordering (v_1, v_2, ..., v_n) for G
    D[s] ← 0
    for each vertex u ≠ s of G do
        D[u] ← +∞
    for i ← 1 to n − 1 do
        {Relax each outgoing edge from v_i}
        for each edge (v_i, u) outgoing from v_i do
            if D[v_i] + w((v_i, u)) < D[u] then
                D[u] ← D[v_i] + w((v_i, u))
    Output the distance labels D as the distances from s.

Algorithm 7.9: Shortest path algorithm for a directed acyclic graph.

The running time of the shortest path algorithm for a DAG is easy to analyze. Assuming the digraph is represented using an adjacency list, we can process each vertex in constant time plus an additional time proportional to the number of its outgoing edges. In addition, we have already observed that computing the topological ordering of the vertices in G can be done in O(n + m) time. Thus, the entire algorithm runs in O(n + m) time. We illustrate this algorithm in Figure 7.10.
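The following Java sketch combines the two O(n + m) phases: computing a topological ordering and making one relaxation pass over it. It is our own illustration under stated assumptions; in particular, where the book computes the ordering with depth-first search, this sketch uses the equivalent in-degree-counting method, and it assumes the input really is a DAG.

    import java.util.*;

    class DagShortestPaths {
        /** adj.get(u) holds {v, weight} pairs for directed edges (u,v). */
        static double[] shortestPaths(List<List<double[]>> adj, int s) {
            int n = adj.size();
            // Topological ordering by repeatedly removing in-degree-0 vertices.
            int[] indeg = new int[n];
            for (List<double[]> out : adj)
                for (double[] e : out) indeg[(int) e[0]]++;
            Deque<Integer> ready = new ArrayDeque<>();
            for (int u = 0; u < n; u++) if (indeg[u] == 0) ready.add(u);
            int[] order = new int[n];
            for (int i = 0; i < n; i++) {
                int u = ready.remove();
                order[i] = u;
                for (double[] e : adj.get(u))
                    if (--indeg[(int) e[0]] == 0) ready.add((int) e[0]);
            }
            // Relax the outgoing edges of each vertex in topological order.
            double[] D = new double[n];
            Arrays.fill(D, Double.POSITIVE_INFINITY);
            D[s] = 0;
            for (int u : order) {
                if (D[u] == Double.POSITIVE_INFINITY) continue; // unreachable from s
                for (double[] e : adj.get(u)) {
                    int z = (int) e[0];
                    if (D[u] + e[1] < D[z]) D[z] = D[u] + e[1];
                }
            }
            return D;
        }
    }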

Figure 7.10: An illustration of the shortest-path algorithm for a DAG, panels (a)–(f).

Theorem 7.6: DAGShortestPaths computes the distance from a start vertex s to each other vertex in a directed n-vertex graph G with m edges in O(n + m) time.

Proof: Suppose, for the sake of a contradiction, that v_i is the first vertex in the topological ordering such that D[v_i] is not the distance from s to v_i. First, note that D[v_i] < +∞, for the initial D value for each vertex other than s is +∞, and the value of a D label is only ever lowered if a path from s is discovered. Thus, if D[v_j] = +∞, then v_j is unreachable from s. Therefore, v_i is reachable from s, so there is a shortest path from s to v_i. Let v_k be the penultimate vertex on a shortest path from s to v_i. Since the vertices are numbered according to a topological ordering, we have k < i. Thus, D[v_k] is correct (we may possibly have v_k = s). But when v_k is processed, we relax each outgoing edge from v_k, including the edge on the shortest path from v_k to v_i. Thus, D[v_i] is assigned the distance from s to v_i. But this contradicts the definition of v_i; hence, no such vertex v_i can exist.

7.2 All-Pairs Shortest Paths

Suppose we wish to compute the shortest-path distance between every pair of vertices in a directed graph G with n vertices and m edges. Of course, if G has no negative-weight edges, then we could run Dijkstra's algorithm from each vertex in G in turn. This approach would take O(n(n + m) log n) time, assuming G is represented using an adjacency list structure. In the worst case, this bound could be as large as O(n³ log n). Likewise, if G contains no negative-weight cycles, then we could run the Bellman-Ford algorithm starting from each vertex in G in turn. This approach would run in O(n²m) time, which, in the worst case, could be as large as O(n⁴). In this section, we consider algorithms for solving the all-pairs shortest path problem in O(n³) time, even if the digraph contains negative-weight edges (but not negative-weight cycles).

7.2.1 A Dynamic Programming Shortest Path Algorithm

The first all-pairs shortest path algorithm we discuss is a variation on an algorithm we have given earlier in this book, namely, the Floyd-Warshall algorithm for computing the transitive closure of a directed graph (Algorithm 6.16).

Let G be a given weighted directed graph. We number the vertices of G arbitrarily as (v_1, v_2, ..., v_n). As in any dynamic programming algorithm (Section 5.3), the key construct in the algorithm is to define a parameterized cost function that is easy to compute and also allows us to ultimately compute a final solution. In this case, we use the cost function D^k_{i,j}, which is defined as the distance from v_i to v_j using only intermediate vertices in the set {v_1, v_2, ..., v_k}. Initially,

    D^0_{i,j} = 0              if i = j
    D^0_{i,j} = w((v_i, v_j))  if (v_i, v_j) is an edge in G
    D^0_{i,j} = +∞             otherwise.

Given this parameterized cost function D^k_{i,j}, and its initial value D^0_{i,j}, we can then easily define the value for an arbitrary k > 0 as

    D^k_{i,j} = min{ D^{k−1}_{i,j}, D^{k−1}_{i,k} + D^{k−1}_{k,j} }.

In other words, the cost for going from v_i to v_j using vertices numbered 1 through k is equal to the shorter of two possible paths. The first path is simply the shortest path from v_i to v_j using vertices numbered 1 through k − 1. The second path is the sum of the costs of the shortest path from v_i to v_k using vertices numbered 1 through k − 1 and the shortest path from v_k to v_j using vertices numbered 1 through k − 1. Moreover, there is no other shorter path from v_i to v_j using vertices of {v_1, v_2, ..., v_k} than these two. If there were such a shorter path and it excluded v_k, then it would violate the definition of D^{k−1}_{i,j}, and if there were such a shorter path and it included v_k, then it would violate the definition of D^{k−1}_{i,k} or D^{k−1}_{k,j}. In fact, note that this argument still holds even if there are negative-cost edges in G, just so long as there are no negative-cost cycles.

Algorithm AllPairsShortestPaths(G):
    Input: A simple weighted directed graph G without negative-weight cycles
    Output: A numbering v_1, v_2, ..., v_n of the vertices of G and a matrix D, such that D[i,j] is the distance from v_i to v_j in G

    let v_1, v_2, ..., v_n be an arbitrary numbering of the vertices of G
    for i ← 1 to n do
        for j ← 1 to n do
            if i = j then
                D_0[i,i] ← 0
            else if (v_i, v_j) is an edge in G then
                D_0[i,j] ← w((v_i, v_j))
            else
                D_0[i,j] ← +∞
    for k ← 1 to n do
        for i ← 1 to n do
            for j ← 1 to n do
                D_k[i,j] ← min{ D_{k−1}[i,j], D_{k−1}[i,k] + D_{k−1}[k,j] }
    return matrix D_n

Algorithm 7.11: A dynamic programming algorithm to compute all-pairs shortest-path distances in a digraph without negative cycles.

In Algorithm 7.11, we show how this cost-function definition allows us to build an efficient solution to the all-pairs shortest path problem. The running time for this dynamic programming algorithm is clearly O(n³). Thus, we have the following theorem.

Theorem 7.7: Given a simple weighted directed graph G with n vertices and no negative-weight cycles, Algorithm 7.11 (AllPairsShortestPaths) computes the shortest-path distances between each pair of vertices of G in O(n³) time.
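A Java sketch of Algorithm 7.11 follows; it is our own illustration, not the book's code. One common refinement, used here, is to keep a single matrix and update it in place rather than storing all the matrices D_0, ..., D_n; this is safe because row k and column k do not change during round k.

    import java.util.Arrays;

    class AllPairsShortestPaths {
        static final double INF = Double.POSITIVE_INFINITY;

        /**
         * Floyd-Warshall on a weighted adjacency matrix: w[i][j] is the edge
         * weight, INF if there is no edge, and 0 on the diagonal. Assumes no
         * negative-weight cycles. Returns the distance matrix.
         */
        static double[][] allPairs(double[][] w) {
            int n = w.length;
            double[][] D = new double[n][];
            for (int i = 0; i < n; i++) D[i] = Arrays.copyOf(w[i], n);
            for (int k = 0; k < n; k++)          // allow v_k as an intermediate
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        if (D[i][k] + D[k][j] < D[i][j])
                            D[i][j] = D[i][k] + D[k][j];
            return D;
        }
    }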

7.2.2 Computing Shortest Paths via Matrix Multiplication

We can view the problem of computing the shortest-path distances for all pairs of vertices in a directed graph G as a matrix problem. In this subsection, we describe how to solve the all-pairs shortest-path problem in O(n³) time using this approach. We first describe how to use the approach to solve the all-pairs problem in O(n⁴) time, and then we show how this can be improved to O(n³) time by studying the problem in more depth. This matrix-multiplication approach to shortest paths is especially useful in contexts where we represent graphs using the adjacency matrix data structure.

The Weighted Adjacency Matrix Representation

Let us number the vertices of G as (v_0, v_1, ..., v_{n−1}), returning to the convention of numbering the vertices starting at index 0. Given this numbering of the vertices of G, there is a natural weighted view of the adjacency matrix representation for a graph, where we define A[i,j] as follows:

    A[i,j] = 0              if i = j
    A[i,j] = w((v_i, v_j))  if (v_i, v_j) is an edge in G
    A[i,j] = +∞             otherwise.

(Note that this is the same definition used for the cost function D^0_{i,j} from the previous subsection.)

Shortest Paths and Matrix Multiplication

In other words, A[i,j] stores the shortest-path distance from v_i to v_j using one or fewer edges in the path. Let us therefore use the matrix A to define another matrix A², such that A²[i,j] stores the shortest-path distance from v_i to v_j using at most two edges. A path with at most two edges is either empty (a zero-edge path) or it adds an extra edge to a zero-edge or one-edge path. Therefore, we can define A²[i,j] as

    A²[i,j] = min_{l=0,1,...,n−1} { A[i,l] + A[l,j] }.

Thus, given A, we can compute the matrix A² in O(n³) time, by using an algorithm very similar to the standard matrix multiplication algorithm. In fact, we can view this computation as a matrix multiplication in which we have simply redefined what the operators "plus" and "times" mean in the matrix multiplication algorithm (the programming language C++ specifically allows for such operator overloading). If we let "plus" be redefined to mean min and we let "times" be redefined to mean +, then we can write A²[i,j] as a true matrix multiplication:

    A²[i,j] = Σ_{l=0,1,...,n−1} A[i,l] · A[l,j].

Indeed, this matrix-multiplication viewpoint is the reason why we have written this matrix as A², for it is the square of the matrix A.

Let us continue this approach to define a matrix A^k, so that A^k[i,j] is the shortest-path distance from v_i to v_j using at most k edges. Since a path with at most k edges is equivalent to a path with at most k − 1 edges plus possibly one additional edge, we can define A^k so that

    A^k[i,j] = Σ_{l=0,1,...,n−1} A^{k−1}[i,l] · A[l,j],

continuing the operator redefinition so that "+" stands for min and "·" stands for +.
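Under these redefined operators, the product is a three-nested-loop computation, just like ordinary matrix multiplication. Here is a minimal Java sketch of the "min-plus" product (our own illustration; INF plays the role of +∞, and the class name MinPlus is hypothetical):

    import java.util.Arrays;

    class MinPlus {
        static final double INF = Double.POSITIVE_INFINITY;

        /** The min-plus product: C[i][j] = min over l of A[i][l] + B[l][j]. */
        static double[][] product(double[][] A, double[][] B) {
            int n = A.length;
            double[][] C = new double[n][n];
            for (double[] row : C) Arrays.fill(row, INF);
            for (int i = 0; i < n; i++)
                for (int l = 0; l < n; l++) {
                    if (A[i][l] == INF) continue;  // cannot improve any C[i][j]
                    for (int j = 0; j < n; j++)
                        if (A[i][l] + B[l][j] < C[i][j])
                            C[i][j] = A[i][l] + B[l][j];
                }
            return C;
        }
    }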

The crucial observation is that if G contains no negative-weight cycles, then A^{n−1} stores the shortest-path distance between each pair of vertices in G. This observation follows from the fact that any well-defined shortest path contains at most n − 1 edges. If a path has more than n − 1 edges, it must repeat some vertex; hence, it must contain a cycle. But a shortest path will never contain a cycle (unless there is a negative-weight cycle in G). Thus, to solve the all-pairs shortest path problem, all we need to do is to multiply A times itself n − 1 times. Since each such multiplication can be done in O(n³) time, this approach immediately gives us the following.

Theorem 7.8: Given a weighted directed n-vertex graph G containing no negative-weight cycles, and the weighted adjacency matrix A for G, the all-pairs shortest path problem for G can be solved by computing A^{n−1}, which can be performed in O(n⁴) time.

In Section 10.1.4, we discuss an exponentiation algorithm for numbers, which can be applied in the present context of matrix multiplication to compute A^{n−1} in O(n³ log n) time. We can actually compute A^{n−1} in O(n³) time, however, by taking advantage of additional structure present in the all-pairs shortest-path problem.

Matrix Closure

As observed above, if G contains no negative-weight cycles, then A^{n−1} encodes all the shortest-path distances between pairs of vertices in G. A well-defined shortest path can contain no cycles; hence, a shortest path restricted to contain at most n − 1 edges must be a true shortest path. Likewise, a shortest path containing at most n edges is a true shortest path, as is a shortest path containing at most n + 1 edges, n + 2 edges, and so on. Therefore, if G contains no negative-weight cycles, then

    A^{n−1} = A^n = A^{n+1} = A^{n+2} = ⋯.

The closure of a matrix A is defined as

    A* = Σ_{l=0}^{∞} A^l,

if such a matrix exists. If A is a weighted adjacency matrix, then A*[i,j] is the "sum" of all possible paths from v_i to v_j. In our case, A is the weighted adjacency matrix for a directed graph G and we have redefined "+" as min. Thus, we can write

    A* = min_{i=0,...,∞} { A^i }.

Moreover, since we are computing shortest-path distances, the entries in A^{i+1} are never larger than the entries in A^i. Therefore, for the weighted adjacency matrix of an n-vertex digraph G with no negative-weight cycles,

    A* = A^{n−1} = A^n = A^{n+1} = A^{n+2} = ⋯.

That is, A*[i,j] stores the length of the shortest path from v_i to v_j.
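The exponentiation idea mentioned above is easy to sketch on top of the min-plus product: repeatedly square A until the exponent reaches at least n − 1. Because A^m = A^{n−1} for every m ≥ n − 1 when there are no negative-weight cycles, overshooting the exponent is harmless. This is our own illustration of the O(n³ log n) approach, reusing the hypothetical MinPlus.product from the previous sketch.

    class AllPairsBySquaring {
        /** Computes A^(n-1) under min-plus using O(log n) squarings. */
        static double[][] allPairs(double[][] A) {
            int n = A.length;
            double[][] P = A;
            // After t squarings, P = A^(2^t); stop once 2^t >= n - 1.
            for (long k = 1; k < n - 1; k *= 2)
                P = MinPlus.product(P, P);
            return P;
        }
    }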

Computing the Closure of a Weighted Adjacency Matrix

We can compute the closure A* by divide-and-conquer in O(n³) time. Without loss of generality, we may assume that n is a power of two (if not, then pad the digraph G with extra vertices that have no incoming or outgoing edges). Let us divide the set V of vertices in G into two equal-sized sets V_1 = {v_0, ..., v_{n/2−1}} and V_2 = {v_{n/2}, ..., v_{n−1}}. Given this division, we can likewise divide the adjacency matrix A into four blocks, B, C, D, and E, each with n/2 rows and columns, defined as follows:

- B: weights of edges from V_1 to V_1
- C: weights of edges from V_1 to V_2
- D: weights of edges from V_2 to V_1
- E: weights of edges from V_2 to V_2.

That is,

    A = | B  C |
        | D  E |.

We illustrate these four sets of edges in Figure 7.12. Likewise, we can partition A* into four blocks W, X, Y, and Z as well, which are similarly defined:

- W: weights of shortest paths from V_1 to V_1
- X: weights of shortest paths from V_1 to V_2
- Y: weights of shortest paths from V_2 to V_1
- Z: weights of shortest paths from V_2 to V_2.

That is,

    A* = | W  X |
         | Y  Z |.

Figure 7.12: An illustration of the four sets of edges, B, C, D, and E, used to partition the adjacency matrix A in the divide-and-conquer algorithm for computing A*.

Submatrix Equations

By these definitions and those above, we can derive simple equations to define W, X, Y, and Z directly from the submatrices B, C, D, and E:

- W = (B + C·E*·D)*, for paths in W consist of the closure of subpaths that either stay in V_1 or jump to V_2, travel in V_2 for a while, and then jump back to V_1.
- X = W·C·E*, for paths in X consist of the closure of subpaths that start and end in V_1 (with possible jumps to V_2 and back), followed by a jump to V_2 and the closure of subpaths that stay in V_2.
- Y = E*·D·W, for paths in Y consist of the closure of subpaths that stay in V_2, followed by a jump to V_1 and the closure of subpaths that start and end in V_1 (with possible jumps to V_2 and back).
- Z = E* + E*·D·W·C·E*, for paths in Z consist of paths that stay in V_2, or paths that travel in V_2, jump to V_1, travel in V_1 for a while (with possible jumps to V_2 and back), jump back to V_2, and then stay in V_2.

Given these equations, it is a simple matter to construct a recursive algorithm to compute A*. In this algorithm, we divide A into the blocks B, C, D, and E, as described above. We then recursively compute the closure E*. Given E*, we can then recursively compute the closure (B + C·E*·D)*, which is W. Note that no other recursive closure computations are then needed to compute X, Y, and Z. Thus, after a constant number of matrix additions and multiplications, we can compute all the blocks in A*. This gives us the following theorem.

Theorem 7.9: Given a weighted directed n-vertex graph G containing no negative-weight cycles, and the weighted adjacency matrix A for G, the all-pairs shortest path problem for G can be solved by computing A*, which can be performed in O(n³) time.

Proof: We have already argued why the computation of A* solves the all-pairs shortest-path problem. Consider, then, the running time of the divide-and-conquer algorithm for computing A*, the closure of the n × n adjacency matrix A. This algorithm consists of two recursive calls to compute the closure of (n/2) × (n/2) submatrices, plus a constant number of matrix additions and multiplications (using min for "+" and + for "·"). Thus, assuming we use the straightforward O(n³)-time matrix multiplication algorithm, we can characterize the running time, T(n), for computing A* as

    T(n) = b                 if n = 1
    T(n) = 2T(n/2) + cn³     if n > 1,

where b > 0 and c > 0 are constants. Therefore, by the Master Theorem (5.6), we can compute A* in O(n³) time.
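The submatrix equations translate almost literally into code. The following Java sketch is our own illustration under the text's assumptions (n is a power of two, and there are no negative-weight cycles); mul is the min-plus product and add is the elementwise min that plays the role of matrix addition.

    import java.util.Arrays;

    class MatrixClosure {
        static final double INF = Double.POSITIVE_INFINITY;

        /** Min-plus product of two square matrices of equal size. */
        static double[][] mul(double[][] A, double[][] B) {
            int n = A.length;
            double[][] C = new double[n][n];
            for (double[] r : C) Arrays.fill(r, INF);
            for (int i = 0; i < n; i++)
                for (int l = 0; l < n; l++) {
                    if (A[i][l] == INF) continue;
                    for (int j = 0; j < n; j++)
                        if (A[i][l] + B[l][j] < C[i][j]) C[i][j] = A[i][l] + B[l][j];
                }
            return C;
        }

        /** Elementwise min: "matrix addition" under the redefined operators. */
        static double[][] add(double[][] A, double[][] B) {
            int n = A.length;
            double[][] C = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) C[i][j] = Math.min(A[i][j], B[i][j]);
            return C;
        }

        /** Closure A* of an n x n weighted adjacency matrix, n a power of two. */
        static double[][] closure(double[][] A) {
            int n = A.length;
            if (n == 1) return new double[][]{{Math.min(0, A[0][0])}};
            int h = n / 2;
            double[][] B = block(A, 0, 0, h), C = block(A, 0, h, h);
            double[][] D = block(A, h, 0, h), E = block(A, h, h, h);
            double[][] Es = closure(E);                       // E*
            double[][] W = closure(add(B, mul(mul(C, Es), D))); // (B + C E* D)*
            double[][] X = mul(mul(W, C), Es);                  // W C E*
            double[][] Y = mul(mul(Es, D), W);                  // E* D W
            double[][] Z = add(Es, mul(Y, mul(C, Es)));         // E* + E* D W C E*
            return join(W, X, Y, Z);
        }

        static double[][] block(double[][] A, int r, int c, int h) {
            double[][] M = new double[h][h];
            for (int i = 0; i < h; i++)
                for (int j = 0; j < h; j++) M[i][j] = A[r + i][c + j];
            return M;
        }

        static double[][] join(double[][] W, double[][] X, double[][] Y, double[][] Z) {
            int h = W.length, n = 2 * h;
            double[][] A = new double[n][n];
            for (int i = 0; i < h; i++)
                for (int j = 0; j < h; j++) {
                    A[i][j] = W[i][j];     A[i][j + h] = X[i][j];
                    A[i + h][j] = Y[i][j]; A[i + h][j + h] = Z[i][j];
                }
            return A;
        }
    }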

7.3 Minimum Spanning Trees

Suppose we wish to connect all the computers in a new office building using the least amount of cable. Likewise, suppose we have an undirected computer network in which each connection between two routers has a cost for usage; we want to connect all our routers at the minimum cost possible. We can model these problems using a weighted graph G whose vertices represent the computers or routers, and whose edges represent all the possible pairs (u,v) of computers, where the weight w((v,u)) of edge (v,u) is equal to the amount of cable or network cost needed to connect computer v to computer u. Rather than computing a shortest-path tree from some particular vertex v, we are interested instead in finding a (free) tree T that contains all the vertices of G and has the minimum total weight over all such trees. Methods for finding such trees are the focus of this section.

Problem Definition

Given a weighted undirected graph G, we are interested in finding a tree T that contains all the vertices in G and minimizes the sum of the weights of the edges of T, that is,

    w(T) = Σ_{e∈T} w(e).

We recall from Section 6.1 that a tree such as this, which contains every vertex of a connected graph G, is said to be a spanning tree. Computing a spanning tree T with smallest total weight is the problem of constructing a minimum spanning tree (or MST).

The development of efficient algorithms for the minimum-spanning-tree problem predates the modern notion of computer science itself. In this section, we discuss two algorithms for solving the MST problem. These algorithms are classic applications of the greedy method. As was discussed in Section 5.1, we apply the greedy method by iteratively choosing objects to join a growing collection, by incrementally picking an object that minimizes some cost function. The first MST algorithm we discuss is Kruskal's algorithm, which grows the MST in clusters by considering edges in order of their weights. The second algorithm we discuss is the Prim-Jarník algorithm, which grows the MST from a single root vertex, much in the same way as Dijkstra's shortest-path algorithm. We conclude this section by discussing a third algorithm, due to Barůvka, which applies the greedy approach in a parallel way.

As in Section 7.1.1, in order to simplify the description of the algorithms, we assume, in the following, that the input graph G is undirected (that is, all its edges are undirected) and simple (that is, it has no self-loops and no parallel edges). Hence, we denote the edges of G as unordered vertex pairs (u,z).

A Crucial Fact about Minimum Spanning Trees

Before we discuss the details of these algorithms, however, let us give a crucial fact about minimum spanning trees that forms the basis of the algorithms. In particular, all the MST algorithms we discuss are based on the greedy method, which in this case depends crucially on the following fact. (See Figure 7.13.)

Figure 7.13: An illustration of the crucial fact about minimum spanning trees: a minimum-weight "bridge" edge e between the partition sets V_1 and V_2 belongs to a minimum spanning tree.

Theorem 7.10: Let G be a weighted connected graph, and let V_1 and V_2 be a partition of the vertices of G into two disjoint nonempty sets. Furthermore, let e be an edge in G with minimum weight from among those with one endpoint in V_1 and the other in V_2. There is a minimum spanning tree T that has e as one of its edges.

Proof: Let T be a minimum spanning tree of G. If T does not contain edge e, the addition of e to T must create a cycle. Therefore, there is some edge f of this cycle that has one endpoint in V_1 and the other in V_2. Moreover, by the choice of e, w(e) ≤ w(f). If we remove f from T ∪ {e}, we obtain a spanning tree whose total weight is no more than before. Since T was a minimum spanning tree, this new tree must also be a minimum spanning tree.

In fact, if the weights in G are distinct, then the minimum spanning tree is unique; we leave the justification of this less crucial fact as an exercise (C-7.5). In addition, note that Theorem 7.10 remains valid even if the graph G contains negative-weight edges or negative-weight cycles, unlike the algorithms we presented for shortest paths.

7.3.1 Kruskal's Algorithm

The reason Theorem 7.10 is so important is that it can be used as the basis for building a minimum spanning tree. In Kruskal's algorithm, it is used to build the minimum spanning tree in clusters. Initially, each vertex is in its own cluster all by itself. The algorithm then considers each edge in turn, ordered by increasing weight. If an edge e connects two different clusters, then e is added to the set of edges of the minimum spanning tree, and the two clusters connected by e are merged into a single cluster. If, on the other hand, e connects two vertices that are already in the same cluster, then e is discarded. Once the algorithm has added enough edges to form a spanning tree, it terminates and outputs this tree as the minimum spanning tree.

We give pseudo-code for Kruskal's method for solving the MST problem in Algorithm 7.14, and we show the working of this algorithm in Figures 7.15, 7.16, and 7.17.

Algorithm KruskalMST(G):
    Input: A simple connected weighted graph G with n vertices and m edges
    Output: A minimum spanning tree T for G

    for each vertex v in G do
        Define an elementary cluster C(v) ← {v}.
    Initialize a priority queue Q to contain all edges in G, using the weights as keys.
    T ← ∅    {T will ultimately contain the edges of the MST}
    while T has fewer than n − 1 edges do
        (u,v) ← Q.removeMin()
        Let C(v) be the cluster containing v, and let C(u) be the cluster containing u.
        if C(v) ≠ C(u) then
            Add edge (v,u) to T.
            Merge C(v) and C(u) into one cluster, that is, union C(v) and C(u).
    return tree T

Algorithm 7.14: Kruskal's algorithm for the MST problem.

As mentioned before, the correctness of Kruskal's algorithm follows from the crucial fact about minimum spanning trees, Theorem 7.10. Each time Kruskal's algorithm adds an edge (v,u) to the minimum spanning tree T, we can define a partitioning of the set of vertices V (as in the theorem) by letting V_1 be the cluster containing v and letting V_2 contain the rest of the vertices in V. This clearly defines a disjoint partitioning of the vertices of V and, more importantly, since we are extracting edges from Q in order by their weights, e must be a minimum-weight edge with one vertex in V_1 and the other in V_2. Thus, Kruskal's algorithm always adds a valid minimum-spanning-tree edge.

Figure 7.15: Example of an execution of Kruskal's MST algorithm on a graph with integer weights, panels (a)–(f). We show the clusters as shaded regions and highlight the edge being considered in each iteration. (Continued in Figure 7.16.)

Figure 7.16: An example of an execution of Kruskal's MST algorithm, panels (g)–(l) (continued from Figure 7.15). Rejected edges are shown dashed. (Continued in Figure 7.17.)

Figure 7.17: Example of an execution of Kruskal's MST algorithm, panels (m) and (n) (continued from Figures 7.15 and 7.16). The edge considered in (n) merges the last two clusters, which concludes this execution of Kruskal's algorithm.

Implementing Kruskal's Algorithm

We denote the number of vertices and edges of the input graph G with n and m, respectively. We assume that the edge weights can be compared in constant time. Because of the high level of the description we gave for Kruskal's algorithm in Algorithm 7.14, analyzing its running time requires that we give more details on its implementation. Specifically, we should indicate the data structures used and how they are implemented.

We implement the priority queue Q using a heap. Thus, we can initialize Q in O(m log m) time by repeated insertions, or in O(m) time using bottom-up heap construction (see Section 2.4.4). In addition, at each iteration of the while loop, we can remove a minimum-weight edge in O(log m) time, which is in fact O(log n), since G is simple.

A Simple Cluster Merging Strategy

We use a list-based implementation of a partition (Section 4.2.2) for the clusters. Namely, we represent each cluster C with an unordered linked list of vertices, storing, with each vertex v, a reference to its cluster C(v). With this representation, testing whether C(u) ≠ C(v) takes O(1) time. When we need to merge two clusters, C(u) and C(v), we move the elements of the smaller cluster into the larger one and update the cluster references of the vertices in the smaller cluster. Since we can simply add the elements of the smaller cluster at the end of the list for the larger cluster, merging two clusters takes time proportional to the size of the smaller cluster. That is, merging clusters C(u) and C(v) takes O(min{|C(u)|, |C(v)|}) time. There are other, more efficient, methods for merging clusters (see Section 4.2.2), but this simple approach will be sufficient.
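Putting the pieces together, here is a compact Java sketch of Kruskal's algorithm; it is our own illustration, not the book's code. It replaces the heap with a one-time sort of the edges, which has the same O(m log n) cost, and it implements exactly the list-based partition described above: each vertex stores its cluster identifier, and a merge relabels the members of the smaller cluster.

    import java.util.*;

    class Kruskal {
        record Edge(int u, int v, double w) {}

        /** Returns the MST edges of a connected graph with vertices 0..n-1. */
        static List<Edge> mst(int n, List<Edge> edges) {
            // Sort edges by weight (stands in for the pseudo-code's priority queue).
            List<Edge> sorted = new ArrayList<>(edges);
            sorted.sort(Comparator.comparingDouble(e -> e.w));
            int[] cluster = new int[n];                 // cluster id of each vertex
            List<List<Integer>> members = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                cluster[i] = i;
                members.add(new ArrayList<>(List.of(i)));
            }
            List<Edge> T = new ArrayList<>();
            for (Edge e : sorted) {
                if (T.size() == n - 1) break;           // spanning tree complete
                int cu = cluster[e.u], cv = cluster[e.v];
                if (cu != cv) {                          // endpoints in different clusters
                    T.add(e);
                    if (members.get(cu).size() < members.get(cv).size()) {
                        int tmp = cu; cu = cv; cv = tmp; // cu is now the larger cluster
                    }
                    for (int x : members.get(cv)) cluster[x] = cu; // relabel the smaller
                    members.get(cu).addAll(members.get(cv));
                    members.get(cv).clear();
                }
            }
            return T;
        }
    }

With the merge-smaller-into-larger rule, every time a vertex is relabeled its cluster at least doubles in size, so each vertex is relabeled O(log n) times and all merges together take O(n log n) time.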


More information

Euclidean Minimum Spanning Trees Based on Well Separated Pair Decompositions Chaojun Li. Advised by: Dave Mount. May 22, 2014

Euclidean Minimum Spanning Trees Based on Well Separated Pair Decompositions Chaojun Li. Advised by: Dave Mount. May 22, 2014 Euclidean Minimum Spanning Trees Based on Well Separated Pair Decompositions Chaojun Li Advised by: Dave Mount May 22, 2014 1 INTRODUCTION In this report we consider the implementation of an efficient

More information

Single-Link Failure Detection in All-Optical Networks Using Monitoring Cycles and Paths

Single-Link Failure Detection in All-Optical Networks Using Monitoring Cycles and Paths Single-Link Failure Detection in All-Optical Networks Using Monitoring Cycles and Paths Satyajeet S. Ahuja, Srinivasan Ramasubramanian, and Marwan Krunz Department of ECE, University of Arizona, Tucson,

More information

8.1 Min Degree Spanning Tree

8.1 Min Degree Spanning Tree CS880: Approximations Algorithms Scribe: Siddharth Barman Lecturer: Shuchi Chawla Topic: Min Degree Spanning Tree Date: 02/15/07 In this lecture we give a local search based algorithm for the Min Degree

More information

3 Some Integer Functions

3 Some Integer Functions 3 Some Integer Functions A Pair of Fundamental Integer Functions The integer function that is the heart of this section is the modulo function. However, before getting to it, let us look at some very simple

More information

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1. MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

More information

Scheduling Shop Scheduling. Tim Nieberg

Scheduling Shop Scheduling. Tim Nieberg Scheduling Shop Scheduling Tim Nieberg Shop models: General Introduction Remark: Consider non preemptive problems with regular objectives Notation Shop Problems: m machines, n jobs 1,..., n operations

More information

(Refer Slide Time: 2:03)

(Refer Slide Time: 2:03) Control Engineering Prof. Madan Gopal Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 11 Models of Industrial Control Devices and Systems (Contd.) Last time we were

More information

CS104: Data Structures and Object-Oriented Design (Fall 2013) October 24, 2013: Priority Queues Scribes: CS 104 Teaching Team

CS104: Data Structures and Object-Oriented Design (Fall 2013) October 24, 2013: Priority Queues Scribes: CS 104 Teaching Team CS104: Data Structures and Object-Oriented Design (Fall 2013) October 24, 2013: Priority Queues Scribes: CS 104 Teaching Team Lecture Summary In this lecture, we learned about the ADT Priority Queue. A

More information

CS711008Z Algorithm Design and Analysis

CS711008Z Algorithm Design and Analysis CS711008Z Algorithm Design and Analysis Lecture 7 Binary heap, binomial heap, and Fibonacci heap 1 Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 The slides were

More information

Lecture 1: Course overview, circuits, and formulas

Lecture 1: Course overview, circuits, and formulas Lecture 1: Course overview, circuits, and formulas Topics in Complexity Theory and Pseudorandomness (Spring 2013) Rutgers University Swastik Kopparty Scribes: John Kim, Ben Lund 1 Course Information Swastik

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

More information

Triangle deletion. Ernie Croot. February 3, 2010

Triangle deletion. Ernie Croot. February 3, 2010 Triangle deletion Ernie Croot February 3, 2010 1 Introduction The purpose of this note is to give an intuitive outline of the triangle deletion theorem of Ruzsa and Szemerédi, which says that if G = (V,

More information

136 CHAPTER 4. INDUCTION, GRAPHS AND TREES

136 CHAPTER 4. INDUCTION, GRAPHS AND TREES 136 TER 4. INDUCTION, GRHS ND TREES 4.3 Graphs In this chapter we introduce a fundamental structural idea of discrete mathematics, that of a graph. Many situations in the applications of discrete mathematics

More information

Institut für Informatik Lehrstuhl Theoretische Informatik I / Komplexitätstheorie. An Iterative Compression Algorithm for Vertex Cover

Institut für Informatik Lehrstuhl Theoretische Informatik I / Komplexitätstheorie. An Iterative Compression Algorithm for Vertex Cover Friedrich-Schiller-Universität Jena Institut für Informatik Lehrstuhl Theoretische Informatik I / Komplexitätstheorie Studienarbeit An Iterative Compression Algorithm for Vertex Cover von Thomas Peiselt

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

2. (a) Explain the strassen s matrix multiplication. (b) Write deletion algorithm, of Binary search tree. [8+8]

2. (a) Explain the strassen s matrix multiplication. (b) Write deletion algorithm, of Binary search tree. [8+8] Code No: R05220502 Set No. 1 1. (a) Describe the performance analysis in detail. (b) Show that f 1 (n)+f 2 (n) = 0(max(g 1 (n), g 2 (n)) where f 1 (n) = 0(g 1 (n)) and f 2 (n) = 0(g 2 (n)). [8+8] 2. (a)

More information

SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH

SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH 31 Kragujevac J. Math. 25 (2003) 31 49. SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH Kinkar Ch. Das Department of Mathematics, Indian Institute of Technology, Kharagpur 721302, W.B.,

More information

DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH

DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH CHRISTOPHER RH HANUSA AND THOMAS ZASLAVSKY Abstract We investigate the least common multiple of all subdeterminants,

More information

Cost Model: Work, Span and Parallelism. 1 The RAM model for sequential computation:

Cost Model: Work, Span and Parallelism. 1 The RAM model for sequential computation: CSE341T 08/31/2015 Lecture 3 Cost Model: Work, Span and Parallelism In this lecture, we will look at how one analyze a parallel program written using Cilk Plus. When we analyze the cost of an algorithm

More information

Midterm Practice Problems

Midterm Practice Problems 6.042/8.062J Mathematics for Computer Science October 2, 200 Tom Leighton, Marten van Dijk, and Brooke Cowan Midterm Practice Problems Problem. [0 points] In problem set you showed that the nand operator

More information

Just the Factors, Ma am

Just the Factors, Ma am 1 Introduction Just the Factors, Ma am The purpose of this note is to find and study a method for determining and counting all the positive integer divisors of a positive integer Let N be a given positive

More information

Linear Programming. April 12, 2005

Linear Programming. April 12, 2005 Linear Programming April 1, 005 Parts of this were adapted from Chapter 9 of i Introduction to Algorithms (Second Edition) /i by Cormen, Leiserson, Rivest and Stein. 1 What is linear programming? The first

More information

Warshall s Algorithm: Transitive Closure

Warshall s Algorithm: Transitive Closure CS 0 Theory of Algorithms / CS 68 Algorithms in Bioinformaticsi Dynamic Programming Part II. Warshall s Algorithm: Transitive Closure Computes the transitive closure of a relation (Alternatively: all paths

More information

Systems of Linear Equations

Systems of Linear Equations Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and

More information

WRITING PROOFS. Christopher Heil Georgia Institute of Technology

WRITING PROOFS. Christopher Heil Georgia Institute of Technology WRITING PROOFS Christopher Heil Georgia Institute of Technology A theorem is just a statement of fact A proof of the theorem is a logical explanation of why the theorem is true Many theorems have this

More information

GRAPH THEORY LECTURE 4: TREES

GRAPH THEORY LECTURE 4: TREES GRAPH THEORY LECTURE 4: TREES Abstract. 3.1 presents some standard characterizations and properties of trees. 3.2 presents several different types of trees. 3.7 develops a counting method based on a bijection

More information

Section IV.1: Recursive Algorithms and Recursion Trees

Section IV.1: Recursive Algorithms and Recursion Trees Section IV.1: Recursive Algorithms and Recursion Trees Definition IV.1.1: A recursive algorithm is an algorithm that solves a problem by (1) reducing it to an instance of the same problem with smaller

More information

A Note on Maximum Independent Sets in Rectangle Intersection Graphs

A Note on Maximum Independent Sets in Rectangle Intersection Graphs A Note on Maximum Independent Sets in Rectangle Intersection Graphs Timothy M. Chan School of Computer Science University of Waterloo Waterloo, Ontario N2L 3G1, Canada tmchan@uwaterloo.ca September 12,

More information

The last three chapters introduced three major proof techniques: direct,

The last three chapters introduced three major proof techniques: direct, CHAPTER 7 Proving Non-Conditional Statements The last three chapters introduced three major proof techniques: direct, contrapositive and contradiction. These three techniques are used to prove statements

More information

Binary Heaps * * * * * * * / / \ / \ / \ / \ / \ * * * * * * * * * * * / / \ / \ / / \ / \ * * * * * * * * * *

Binary Heaps * * * * * * * / / \ / \ / \ / \ / \ * * * * * * * * * * * / / \ / \ / / \ / \ * * * * * * * * * * Binary Heaps A binary heap is another data structure. It implements a priority queue. Priority Queue has the following operations: isempty add (with priority) remove (highest priority) peek (at highest

More information

DATA ANALYSIS II. Matrix Algorithms

DATA ANALYSIS II. Matrix Algorithms DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where

More information

CS 598CSC: Combinatorial Optimization Lecture date: 2/4/2010

CS 598CSC: Combinatorial Optimization Lecture date: 2/4/2010 CS 598CSC: Combinatorial Optimization Lecture date: /4/010 Instructor: Chandra Chekuri Scribe: David Morrison Gomory-Hu Trees (The work in this section closely follows [3]) Let G = (V, E) be an undirected

More information

THE DIMENSION OF A VECTOR SPACE

THE DIMENSION OF A VECTOR SPACE THE DIMENSION OF A VECTOR SPACE KEITH CONRAD This handout is a supplementary discussion leading up to the definition of dimension and some of its basic properties. Let V be a vector space over a field

More information

Connectivity and cuts

Connectivity and cuts Math 104, Graph Theory February 19, 2013 Measure of connectivity How connected are each of these graphs? > increasing connectivity > I G 1 is a tree, so it is a connected graph w/minimum # of edges. Every

More information

Shortcut sets for plane Euclidean networks (Extended abstract) 1

Shortcut sets for plane Euclidean networks (Extended abstract) 1 Shortcut sets for plane Euclidean networks (Extended abstract) 1 J. Cáceres a D. Garijo b A. González b A. Márquez b M. L. Puertas a P. Ribeiro c a Departamento de Matemáticas, Universidad de Almería,

More information

6.02 Practice Problems: Routing

6.02 Practice Problems: Routing 1 of 9 6.02 Practice Problems: Routing IMPORTANT: IN ADDITION TO THESE PROBLEMS, PLEASE SOLVE THE PROBLEMS AT THE END OF CHAPTERS 17 AND 18. Problem 1. Consider the following networks: network I (containing

More information

The Binary Blocking Flow Algorithm. Andrew V. Goldberg Microsoft Research Silicon Valley www.research.microsoft.com/ goldberg/

The Binary Blocking Flow Algorithm. Andrew V. Goldberg Microsoft Research Silicon Valley www.research.microsoft.com/ goldberg/ The Binary Blocking Flow Algorithm Andrew V. Goldberg Microsoft Research Silicon Valley www.research.microsoft.com/ goldberg/ Theory vs. Practice In theory, there is no difference between theory and practice.

More information

. 0 1 10 2 100 11 1000 3 20 1 2 3 4 5 6 7 8 9

. 0 1 10 2 100 11 1000 3 20 1 2 3 4 5 6 7 8 9 Introduction The purpose of this note is to find and study a method for determining and counting all the positive integer divisors of a positive integer Let N be a given positive integer We say d is a

More information

International Journal of Advanced Research in Computer Science and Software Engineering

International Journal of Advanced Research in Computer Science and Software Engineering Volume 3, Issue 7, July 23 ISSN: 2277 28X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Greedy Algorithm:

More information

Dynamic Programming 11.1 AN ELEMENTARY EXAMPLE

Dynamic Programming 11.1 AN ELEMENTARY EXAMPLE Dynamic Programming Dynamic programming is an optimization approach that transforms a complex problem into a sequence of simpler problems; its essential characteristic is the multistage nature of the optimization

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES Contents 1. Random variables and measurable functions 2. Cumulative distribution functions 3. Discrete

More information

5.1 Radical Notation and Rational Exponents

5.1 Radical Notation and Rational Exponents Section 5.1 Radical Notation and Rational Exponents 1 5.1 Radical Notation and Rational Exponents We now review how exponents can be used to describe not only powers (such as 5 2 and 2 3 ), but also roots

More information

Conditional Probability, Independence and Bayes Theorem Class 3, 18.05, Spring 2014 Jeremy Orloff and Jonathan Bloom

Conditional Probability, Independence and Bayes Theorem Class 3, 18.05, Spring 2014 Jeremy Orloff and Jonathan Bloom Conditional Probability, Independence and Bayes Theorem Class 3, 18.05, Spring 2014 Jeremy Orloff and Jonathan Bloom 1 Learning Goals 1. Know the definitions of conditional probability and independence

More information

INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS

INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS STEVEN P. LALLEY AND ANDREW NOBEL Abstract. It is shown that there are no consistent decision rules for the hypothesis testing problem

More information

Chapter 3. Cartesian Products and Relations. 3.1 Cartesian Products

Chapter 3. Cartesian Products and Relations. 3.1 Cartesian Products Chapter 3 Cartesian Products and Relations The material in this chapter is the first real encounter with abstraction. Relations are very general thing they are a special type of subset. After introducing

More information

Dynamic Programming. Lecture 11. 11.1 Overview. 11.2 Introduction

Dynamic Programming. Lecture 11. 11.1 Overview. 11.2 Introduction Lecture 11 Dynamic Programming 11.1 Overview Dynamic Programming is a powerful technique that allows one to solve many different types of problems in time O(n 2 ) or O(n 3 ) for which a naive approach

More information

1 Error in Euler s Method

1 Error in Euler s Method 1 Error in Euler s Method Experience with Euler s 1 method raises some interesting questions about numerical approximations for the solutions of differential equations. 1. What determines the amount of

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

Regular Expressions and Automata using Haskell

Regular Expressions and Automata using Haskell Regular Expressions and Automata using Haskell Simon Thompson Computing Laboratory University of Kent at Canterbury January 2000 Contents 1 Introduction 2 2 Regular Expressions 2 3 Matching regular expressions

More information

Exponential time algorithms for graph coloring

Exponential time algorithms for graph coloring Exponential time algorithms for graph coloring Uriel Feige Lecture notes, March 14, 2011 1 Introduction Let [n] denote the set {1,..., k}. A k-labeling of vertices of a graph G(V, E) is a function V [k].

More information

Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

More information

CMSC 858T: Randomized Algorithms Spring 2003 Handout 8: The Local Lemma

CMSC 858T: Randomized Algorithms Spring 2003 Handout 8: The Local Lemma CMSC 858T: Randomized Algorithms Spring 2003 Handout 8: The Local Lemma Please Note: The references at the end are given for extra reading if you are interested in exploring these ideas further. You are

More information

The positive minimum degree game on sparse graphs

The positive minimum degree game on sparse graphs The positive minimum degree game on sparse graphs József Balogh Department of Mathematical Sciences University of Illinois, USA jobal@math.uiuc.edu András Pluhár Department of Computer Science University

More information

Minimum cost maximum flow, Minimum cost circulation, Cost/Capacity scaling

Minimum cost maximum flow, Minimum cost circulation, Cost/Capacity scaling 6.854 Advanced Algorithms Lecture 16: 10/11/2006 Lecturer: David Karger Scribe: Kermin Fleming and Chris Crutchfield, based on notes by Wendy Chu and Tudor Leu Minimum cost maximum flow, Minimum cost circulation,

More information

Finding and counting given length cycles

Finding and counting given length cycles Finding and counting given length cycles Noga Alon Raphael Yuster Uri Zwick Abstract We present an assortment of methods for finding and counting simple cycles of a given length in directed and undirected

More information

Converting a Number from Decimal to Binary

Converting a Number from Decimal to Binary Converting a Number from Decimal to Binary Convert nonnegative integer in decimal format (base 10) into equivalent binary number (base 2) Rightmost bit of x Remainder of x after division by two Recursive

More information

UPPER BOUNDS ON THE L(2, 1)-LABELING NUMBER OF GRAPHS WITH MAXIMUM DEGREE

UPPER BOUNDS ON THE L(2, 1)-LABELING NUMBER OF GRAPHS WITH MAXIMUM DEGREE UPPER BOUNDS ON THE L(2, 1)-LABELING NUMBER OF GRAPHS WITH MAXIMUM DEGREE ANDREW LUM ADVISOR: DAVID GUICHARD ABSTRACT. L(2,1)-labeling was first defined by Jerrold Griggs [Gr, 1992] as a way to use graphs

More information

V. Adamchik 1. Graph Theory. Victor Adamchik. Fall of 2005

V. Adamchik 1. Graph Theory. Victor Adamchik. Fall of 2005 V. Adamchik 1 Graph Theory Victor Adamchik Fall of 2005 Plan 1. Basic Vocabulary 2. Regular graph 3. Connectivity 4. Representing Graphs Introduction A.Aho and J.Ulman acknowledge that Fundamentally, computer

More information

Zeros of a Polynomial Function

Zeros of a Polynomial Function Zeros of a Polynomial Function An important consequence of the Factor Theorem is that finding the zeros of a polynomial is really the same thing as factoring it into linear factors. In this section we

More information

Circuits 1 M H Miller

Circuits 1 M H Miller Introduction to Graph Theory Introduction These notes are primarily a digression to provide general background remarks. The subject is an efficient procedure for the determination of voltages and currents

More information

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm.

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm. Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of three

More information

Mathematical Induction

Mathematical Induction Mathematical Induction (Handout March 8, 01) The Principle of Mathematical Induction provides a means to prove infinitely many statements all at once The principle is logical rather than strictly mathematical,

More information

Automata and Computability. Solutions to Exercises

Automata and Computability. Solutions to Exercises Automata and Computability Solutions to Exercises Fall 25 Alexis Maciel Department of Computer Science Clarkson University Copyright c 25 Alexis Maciel ii Contents Preface vii Introduction 2 Finite Automata

More information

The number of marks is given in brackets [ ] at the end of each question or part question. The total number of marks for this paper is 72.

The number of marks is given in brackets [ ] at the end of each question or part question. The total number of marks for this paper is 72. ADVANCED SUBSIDIARY GCE UNIT 4736/01 MATHEMATICS Decision Mathematics 1 THURSDAY 14 JUNE 2007 Afternoon Additional Materials: Answer Booklet (8 pages) List of Formulae (MF1) Time: 1 hour 30 minutes INSTRUCTIONS

More information

Cartesian Products and Relations

Cartesian Products and Relations Cartesian Products and Relations Definition (Cartesian product) If A and B are sets, the Cartesian product of A and B is the set A B = {(a, b) :(a A) and (b B)}. The following points are worth special

More information