# Speeding Up Two String-Matching Algorithms


Algorithmica (1994) 12:247-267. © Springer-Verlag New York Inc.

M. Crochemore,² A. Czumaj,³ L. Gasieniec,³ S. Jarominek,³ T. Lecroq,² W. Plandowski,³ and W. Rytter³

**Abstract.** We show how to speed up two string-matching algorithms: the Boyer-Moore algorithm (BM algorithm), and its version called here the reverse factor algorithm (RF algorithm). The RF algorithm is based on factor graphs for the reverse of the pattern. The main feature of both algorithms is that they scan the text right-to-left from the supposed right position of the pattern. The BM algorithm goes as far as the scanned segment (factor) is a suffix of the pattern. The RF algorithm scans while the segment is a factor of the pattern. Both algorithms make a shift of the pattern, forget the history, and start again. The RF algorithm usually makes bigger shifts than BM, but is quadratic in the worst case. We show that it is enough to remember the last matched segment (represented by two pointers to the text) to speed up the RF algorithm considerably (to make a linear number of inspections of text symbols, with a small coefficient), and to speed up the BM algorithm (to make at most 2·n comparisons). Only constant additional memory is needed for the search phase. We give alternative versions of an accelerated RF algorithm: the first one is based on combinatorial properties of primitive words, and the other two use the power of suffix trees extensively. The paper demonstrates the techniques to transform algorithms, and also shows interesting new applications of data structures representing all subwords of the pattern in compact form.

**Key Words.** Analysis of algorithms, Pattern matching, String matching, Suffix tree, Suffix automaton, Combinatorial problems, Periods, Text processing, Data retrieval.

## 1. Introduction

The Boyer-Moore algorithm [BM] is one of the string-matching algorithms which is very fast on the average.
However, it is successful mainly in the case of large alphabets. For small alphabets, its average complexity is Ω(n) (see [BR]) for the Boyer-Moore-Horspool version [H]. The reader can refer to [HS] for a discussion of practical fast string-searching algorithms. We discuss here a version of this algorithm, called the RF algorithm, which is much faster on the average, not only on large alphabets but also on small alphabets. If the alphabet is of size at least 2, then the average complexity of the new algorithm is O(n log(m)/m), and reaches the lower bound given in [Y]. The main feature of both algorithms is that they scan the text right-to-left from a supposed right position of the pattern. The BM algorithm goes as far as the scanned segment (also called the factor) is a suffix of the pattern, while the RF algorithm matches the text

¹ The work by M. Crochemore and T. Lecroq was partially supported by PRC "Mathématiques-Informatique". M. Crochemore was also partially supported by NATO Grant CRG, and the work by A. Czumaj, L. Gasieniec, S. Jarominek, W. Plandowski, and W. Rytter was supported by KBN of the Polish Ministry of Education.
² LITP, Institut Blaise Pascal, Université Paris 7, 2 Place Jussieu, Paris Cedex 05, France.
³ Institute of Informatics, Warsaw University, ul. Banacha 2, Warsaw 59, Poland.

Received August 15, 1991; revised August 17, 1992, and November 11. Communicated by Alberto Apostolico.

against any factor of the pattern, traversing the factor graph or the suffix tree of the reverse pattern. Afterward, both algorithms make a shift of the pattern to the right, forget the history, and start again. We show that it is enough to remember the last matched segment to speed up the algorithms: an additional constant memory is sufficient. We derive a version of the BM algorithm, called the Turbo_BM algorithm. One of the advantages of this algorithm with respect to the original BM algorithm is the simplicity of its analysis of complexity. At the same time, the Turbo_BM algorithm looks like a superficial modification of the BM algorithm. Only a few additional lines are inserted inside the search phase of the original algorithm, and two registers (constant memory to keep information about the last match) are added. The preprocessing phase is left unchanged. Recall that an algorithm remembering a linear number of previous matches has been given before by Apostolico and Giancarlo [AG] as a version of the BM algorithm. The Turbo_BM algorithm given here seems to be an efficient compromise between recording a linear-size history, as in the Apostolico-Giancarlo algorithm, and not recording the history of previous matches, as in the original BM algorithm. Our method to speed up the BM and RF algorithms is an example of a general technique called dynamic simulation in [BKR]: for a given algorithm A, construct an algorithm A' which works in the same way as A, but remembers part of the information that A is wasting; during the process this information is used to save part of the computation carried out by the original algorithm A. In our case, the additional information is the constant-size information about the last match.
The transformation of the Boyer-Moore algorithm gives an algorithm of the same simplicity as the original Boyer-Moore algorithm, but with an upper bound of 2·n on the number of comparisons, which improves slightly on the bound 3·n of the original algorithm. The derivation of this bound is also much simpler than that of the 3·n bound in [Co]. The previous bounds, established when the pattern does not occur in the text, are 7·n in [KMP] and 4·n in [GO]. It should be noted that a simple transformation of the BM algorithm to search for all occurrences of the pattern has quadratic-time complexity. Galil [G] has shown how to make it linear in this case. Several transformations of the RF algorithm show the applicability of data structures representing succinctly the set of all subwords of a pattern p of length m. We denote this set by FACT(p). The set of all suffixes of p is denoted by SUF(p). For simplicity of presentation, we assume that the size of the alphabet is constant. The general structure of the BM and RF algorithms is shown in Figure 1.

Fig. 1. One iteration of Algorithm 1. The algorithm scans right-to-left a segment (factor) x of the text, within a window t[i+1..i+m] on the text, and then shifts the window.

Algorithm 1 /* common scheme for the BM and RF algorithms */

    i := 0;
    while i ≤ n - m do {
        align pattern with positions t[i+1..i+m] of the text;
        scan the text right-to-left from position i + m;
        let x be the scanned part of the text;
        if x = p then report a match at position i;
        compute the shift;
        i := i + shift;
    }
    end.

In algorithms BM and RF we use the synonym x for the last-scanned segment t[i+j..i+m] of the text t. This shortens the presentation. In one algorithm it is checked whether x is a suffix of p and, in the second algorithm, whether it is a factor of p. Shifts use a precomputed function on x. In fact, in the BM algorithm x is identified with a position j on the pattern, while in the RF algorithm x is identified with a node corresponding to x^R in a data structure representing FACT(p^R). We use the reverse pattern because we scan right-to-left, while most data structures for the set of factors are oriented to left-to-right scanning of the pattern. These orders are equivalent after reversing the pattern. In both cases a constant-size memory is sufficient to identify x. Both the BM and RF algorithms can be viewed as instances of Algorithm 1. For a suffix x at position k, denote here by BM_shift[x] the match-shift d2[k] defined in [KMP] for the BM algorithm (see also [Ah] or [R]). The value of d2[k] is, roughly speaking, the minimal (nontrivial) shift of the pattern over itself such that the symbols aligned with the suffix x, except the first letter of x, agree. The symbol aligned with the first letter of x (at the position denoted by * in Figure 2) is distinct from it, if any symbol aligns there at all. The BM algorithm also uses a heuristic on the alphabet: a second shift function serves to align the mismatched symbol in the text with an occurrence of it in the pattern. We mainly consider the BM algorithm without this heuristic. However, this feature is integrated in the final version of the Turbo_BM algorithm.
Algorithm BM /* reversed-suffix string matching */

    i := 0;
    /* denote t[i+j..i+m] by x; it is the last-scanned part of the text */
    while i ≤ n - m do {
        j := m;
        while j > 1 and x ∈ SUF(p) do j := j - 1;
        if x = p then report a match at position i;
        shift := BM_shift[x];
        i := i + shift;
    }
    end.
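As an illustration (not from the paper), the scheme above can be sketched in Python: the suffix test is done by direct right-to-left symbol comparison, and the match-shift is recomputed by brute force instead of being read from a precomputed d2 table; all names are ours.

```python
def bm_search(p, t):
    """Sketch of Algorithm BM: scan right-to-left while the scanned
    segment is a suffix of p, then apply a match-shift.  The shift is
    computed by brute force here; the real algorithm precomputes d2."""
    m, n = len(p), len(t)

    def bm_shift(j):
        # Smallest s >= 1 such that shifting p right by s keeps every
        # symbol of the matched suffix p[j:] aligned with an equal symbol.
        for s in range(1, m + 1):
            if all(k - s < 0 or p[k - s] == p[k] for k in range(j, m)):
                return s
        return m

    matches = []
    i = 0
    while i <= n - m:
        j = m
        while j > 0 and t[i + j - 1] == p[j - 1]:
            j -= 1
        if j == 0:
            matches.append(i)   # x = p: report a match at position i
        i += bm_shift(j)
    return matches
```

This weak match-shift (without the mismatch condition) is already sound: no occurrence can be skipped, since a smaller shift would misalign the matched suffix.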

Fig. 2. One iteration in the BM algorithm: the next alignment of the pattern after a mismatch of different letters; shift = match_shift[j].

Algorithm RF /* reverse-factor string matching */

    i := 0;
    /* denote t[i+j..i+m] by x; it is the last-scanned part of the text */
    while i ≤ n - m do {
        j := m;
        while j > 1 and x ∈ FACT(p) do j := j - 1;
        /* in fact, we check the equivalent condition x^R ∈ FACT(p^R) */
        if x = p then report a match at position i;
        shift := RF_shift[x];
        i := i + shift;
    }
    end.

The work which Algorithm 1 spends at one iteration is denoted here by cost, and the length of the shift is denoted by shift. In the BM algorithm the cost is usually small, but it gives a small shift. The strategy of the RF algorithm is more "optimal": the smaller the cost, the bigger the shift. In practice, the match (and cost) at a given iteration is usually small on the average; hence the algorithm, whose shifts are inversely proportional to local matches, is close to optimal. The straightforward application of this strategy gives an RF algorithm that is very successful on the average. It is, however, quadratic in the worst case. Algorithm RF makes essential use of a data structure representing the set FACT(p). See [BBE+] for the definition of directed acyclic word graphs (dawg's), see [Cr] for the definition of suffix automata, and see [Ap] for details on suffix trees. The graph G = dawg(p^R) represents all subwords of p^R as labeled paths starting from the root of G. A factor z corresponds, in a many-to-one fashion, to a node vert(z), such that the path from the root to that node "spells" z. Additionally, we add information to each node indicating whether the words corresponding to that node are suffixes of the reversed pattern p^R (prefixes of p). We traverse this graph when scanning the text right-to-left in the RF algorithm.
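A brute-force Python sketch of the RF scanning strategy (our illustration): FACT(p) is materialized as a plain set of substrings where the paper uses the dawg/suffix automaton of p^R, and the shift is recomputed naively.

```python
def rf_search(p, t):
    """Sketch of Algorithm RF: scan right-to-left while the scanned
    segment is a factor of p, then shift by m - |u|, where u is the
    longest suffix of the scanned factor that is a proper prefix of p."""
    m, n = len(p), len(t)
    fact = {p[a:b] for a in range(m + 1) for b in range(a, m + 1)}
    matches = []
    i = 0
    while i <= n - m:
        j = m
        # extend the segment t[i+j..i+m] leftward while it is a factor of p
        while j >= 1 and t[i + j - 1 : i + m] in fact:
            j -= 1
        if j == 0:                  # the whole window is a factor of length m, i.e. p
            matches.append(i)
        x_prime = t[i + j : i + m]  # longest factor found (mismatch symbol cut off)
        shift = m
        for k in range(len(x_prime)):  # u = longest suffix of x' that is a proper prefix of p
            u = x_prime[k:]
            if len(u) < m and p.startswith(u):
                shift = m - len(u)
                break
        i += shift
    return matches
```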
Let x' be the longest word, which is a factor of p, found in a given iteration. When x = p, then x' = x; otherwise x' is obtained by cutting off the first letter of x (the mismatch symbol). The time spent scanning x is proportional to |x|. The multiplicative factor is constant if a matrix representation is used for transitions in the data structure. Otherwise, it is O(log |A|) (where A can be restricted to the pattern alphabet), which applies for arbitrary alphabets. We now define the shift RF_shift, and describe how to compute it easily. Let u be the longest suffix of x' which is a proper prefix of the pattern p. We can assume

Fig. 3. One iteration of algorithm RF. Word u is the longest prefix of the pattern that is a suffix of x'; shift = m - |u|.

that we always know the actual value of u, associated with the last node on the scanned path in G corresponding to a suffix of p^R. Then RF_shift[x] = m - |u| (see Figure 3). The use of information about the previous match at a given iteration is the key to improvement. However, this idea can be realized in many ways: we discuss three alternative transformations for RF. They lead to three versions of the RF algorithm, Turbo_RF, Turbo_RF', and Turbo_RF'', that are presented in Sections 2 and 3. Algorithms Turbo_BM, Turbo_RF, Turbo_RF', and Turbo_RF'' can be viewed as instances of Algorithm 2 presented below.

Algorithm 2 /* general scheme for algorithms Turbo_RF, Turbo_RF', Turbo_RF'', and Turbo_BM: a version of Algorithm 1 with an additional memory */

    i := 0; memory := nil;
    while i ≤ n - m do {
        align pattern with positions t[i+1..i+m] of the text;
        scan the text right-to-left from position i + m, using memory to reduce the number of inspections;
        let x be the part of the text scanned;
        if x = p then report a match at position i;
        compute the shift according to x and memory;
        i := i + shift;
        update memory using x;
    }
    end.

## 2. Speeding Up the Reverse Factor Algorithm

To speed up the RF algorithm we memorize the prefix u of size m - shift of the pattern (see Figure 6). The scan of the part of the text aligned with part v of the pattern is done from right to left. When we arrive at the boundary between u and v in a successful scan (all comparisons positive), then we are at a decision point. Now, instead of scanning u until a mismatch is found, we can just scan (again) a part of u, due to the combinatorial properties of primitive words. A word is primitive iff it is not a proper power of a smaller word.
We denote by per(u) the length of the smallest period of u. Primitive words have the following useful properties:

(a) The prefix of u of size per(u) is primitive.

Fig. 4. If z is primitive, then such an overlap is impossible.

(b) A cyclic shift of a primitive word is also primitive; hence the suffix z of u of size per(u) is primitive.
(c) If z is primitive, then the situation presented in Figure 4 is impossible.

If y ∈ FACT(p) we denote by displ(y) the least integer d such that y = p[m - d - |y| + 1 .. m - d] (see Figure 5). The crucial point is that if we successfully scan v and the suffix of u of size per(u), then we know the shift without further calculations: many comparisons are saved, and the RF algorithm speeds up at this moment. In terms of the next lemma, we save |x| - |zv| comparisons when |x| ≥ |zv|.

Algorithm Turbo_RF
    /* denote t[i+j..i+m] by x; it is the last-scanned part of the text;
       we memorize the last prefix u of the pattern; initially u is the empty word */

    i := 0; u := empty;
    while i ≤ n - m do {
        j := m;
        while j > |u| and x ∈ FACT(p) do j := j - 1;
        if j = |u| then
            /* we are at the decision point between u and v, after v has been successfully scanned */
            if v ∈ SUF(p) then report a match at position i;
            else {
                scan right-to-left at most per(u) further symbols, stopping at a mismatch;
                let x be the successfully scanned text;
                if |x| = m - |u| + per(u) then shift := displ(x)
                else shift := RF_shift(x);
            };
        else shift := RF_shift(x);
        i := i + shift;
        u := prefix of pattern of length m - shift;
    }
    end.

Fig. 5. Displacement displ(y) of the factor y in the pattern.

LEMMA 1 (Key Lemma). Let u, v be as in Figure 3. Assume that u is periodic (per(u) ≤ |u|/2). Let z be the suffix of u of length per(u) and let x be the longest

suffix of uv that belongs to FACT(p). Then zv ∈ FACT(p) implies RF_shift(x) = displ(zv).

PROOF. It follows from the definition of per(u), as the smallest period of u, and the periodicity of u that z is a primitive word. The primitivity of z implies that occurrences of z can appear from the end of u only at distances which are multiples of per(u). Hence, displ(zv) must be a multiple of per(u), and this easily implies that the longest proper suffix of uv which is a prefix of p has size |uv| - displ(zv). Hence, the next shift in the original RF algorithm is shift = displ(zv). []

We need to explain how we add only a constant memory to the RF algorithm. The variables u and x need only pointers to the text. The values of displacements are in the data structure which represents FACT(p); similarly, the values of the periods per(u) for all prefixes of the pattern are precomputed with the representation of FACT(p) and read-only in the algorithm. In fact, the table of periods can be removed and the values of per(u) can be computed dynamically inside the Turbo_RF algorithm using constant additional memory. This is quite technical and is explained in the Appendix.

THEOREM 2. On a text of length n, the Turbo_RF algorithm runs in time O(n·log |A|) for arbitrary alphabets. It makes at most 2·n inspections of text symbols.

PROOF. In the Turbo_RF algorithm at most per(u) symbols of u are scanned. Let extra_cost be the number of symbols of u scanned at the actual stage. Since the next shift has length at least per(u), extra_cost ≤ next_shift. Hence, all extra inspections of symbols are amortized by the total sum of shifts, which gives at most n inspections. The symbols in parts v are scanned for the first time in a given stage. They are disjoint in distinct phases. Hence, they also give at most n inspections. The work spent inside segments u, and inside segments v, is thus bounded by 2·n. This completes the proof. []
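The periods per(u) used above can be precomputed for all prefixes of the pattern from the classical border (failure-function) recurrence. A minimal Python sketch of the table-based variant (the constant-memory computation is the one deferred to the Appendix):

```python
def per(u):
    """Length of the smallest period of u: |u| minus the length of the
    longest border of u (longest proper prefix that is also a suffix)."""
    b = [0] * (len(u) + 1)   # b[q] = length of the longest border of u[:q]
    b[0] = -1
    k = -1
    for q in range(1, len(u) + 1):
        while k >= 0 and u[k] != u[q - 1]:
            k = b[k]
        k += 1
        b[q] = k
    return len(u) - b[len(u)]
```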
## 3. Two Other Variations of Algorithm Turbo_RF

Assume that we are at a decision point when we have just finished scanning v (see Figure 6). At this moment, we know that the part of the text immediately to the left is a prefix u of p of size m - |v|. Denote by nextpref(v) the longest suffix of uv that is a proper prefix of p, and that is longer than v. If there is no such suffix, then denote the corresponding value by nil. The next RF_shift will be equal to m - |nextpref(v)|. The next value of u will be nextpref(v). All these elements are uniquely determined by v. Hence, after suitable preprocessing of the pattern, no symbols of u need to be read. The algorithm then makes at most n inspections of text symbols against the pattern. However, the complexity is affected by the computation of nextpref(v). There are at least two possible approaches. One is to precompute a data structure which allows the computation at the kth iteration of the value of nextpref(v) in a time cost_k such that the sum of all cost_k's is linear. The second

possible solution is to preprocess the pattern in such a way that the value of nextpref(v) can be computed in constant time.

Fig. 6. We keep the prefix u of the pattern in memory. If u is periodic, then at most per(u) symbols of u are scanned again; however, this extra work is amortized by the next shift. The symbols of v are scanned for the first time.

Technically, it is convenient to deal with suffixes. Denote p' = p^R. We look at the computation of nextpref from a "reverse" perspective. Let nextsuf(v) = (nextpref(v))^R. In other words, nextsuf(v) is the longest prefix of v^R u^R which is a suffix of p', where u is the prefix of p of length m - |v|. In both approaches we represent the set FACT(p') by the suffix tree T. The edges of this tree are labeled by factors of the text represented by pairs (start-position, end-position). We take the compacted suffix tree for p'$, in the sense of [Ap], then we cut off all edges labeled by $. Afterward, each suffix of p' is represented by a node of T. Figure 7 shows an uncompacted suffix tree and a (compacted) suffix tree. The term "compacted" is omitted later. Call the factors of p' which correspond to nodes of T the main factors. The tree T has only a linear number of nodes; hence not all factors of p' are main. Nonmain factors correspond to a point on an edge of T. For a word v denote by repr(v) the node of T corresponding to the shortest word v' which is an extension of v (possibly v = v'). For example, the whole string p' is a main factor, and for p' = aabbabd we have repr(aa) = p'.

Fig. 7. The uncompacted suffix tree T1(p') and the (compacted) suffix tree T(p') for p' = aabbabd. Each factor corresponds to a node in the first tree.
Equivalently, the representative repr(v) of the word v is the first descendant of v in the uncompacted tree which is a node in the suffix tree.
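The function nextpref(v) introduced above can be stated directly as a brute-force Python check (quadratic and purely illustrative; the point of the two approaches below is precisely to avoid this scan):

```python
def nextpref(p, v):
    """Longest suffix of uv that is a proper prefix of p and longer
    than v, where u is the prefix of p of length m - |v|; returns
    None for the paper's value 'nil'."""
    m = len(p)
    u = p[: m - len(v)]
    s = u + v
    for l in range(m - 1, len(v), -1):   # proper prefix: l < m, and l > |v|
        if s.endswith(p[:l]):
            return p[:l]
    return None
```

For p = ababab and v = ab the answer is abab, so the next RF_shift would be m - 4 = 2.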

The First Approach. Let P be the failure function of p (see [KMP]). The value P(j) is the length of the longest proper suffix of p[1..j] which is also a prefix of it (it is called a border). Assume that P is precomputed. Let j = m - |v|. Let suf(k) denote the node corresponding to the suffix of size k. Then it is easy to prove the following fact:

    |nextsuf(v)| = MAX{ k : k = |v| + P^h(j) and suf(k) is a descendant of repr(v^R), for h ≥ 0 }.

If the set on the right side is empty, then nextsuf(v) = nil. We can check whether suf(k) is a descendant of repr(v^R) in constant time, after preprocessing the suffix tree T. We can number the nodes of the tree in depth-first-search order. Then the nodes which are descendants of a given node form an interval of consecutively numbered nodes. Associate such an interval with each node. The question about descendants is then reduced to the inclusion of an integer in a known interval. This can be answered in constant time. This gives the Turbo_RF' algorithm presented below.

Algorithm Turbo_RF'
    /* denote t[i+j..i+m] by x; it is the last-scanned part of the text;
       we remember the last prefix u of the pattern; initially u is the empty word */

    i := 0; u := empty;
    while i ≤ n - m do {
        j := m;
        while j > |u| and x ∈ FACT(p) do j := j - 1;
        if j = |u| then
            /* we are at the decision point between u and v, after v has been successfully scanned */
            if v ∈ SUF(p) then report a match at position i;
            else {
                compute k = MAX{ k : k = |v| + P^h(j) and suf(k) is a descendant of repr(v^R), h ≥ 0 };
                if k ≠ nil then shift := m - k;
                else shift := RF_shift(v);
            }
        else shift := RF_shift(x);
        i := i + shift;
        u := prefix of pattern of length m - shift;
    }
    end.

THEOREM 3. The Turbo_RF' algorithm finds all occurrences of a pattern of length m in a text of length n in O(n) time. It makes at most n inspections of text symbols against the pattern. The total number of iterations P^h(j) done by the algorithm does not exceed n.
The preprocessing time is also linear.

PROOF. We have already discussed the preprocessing phase. Each time we make an iteration of type P^h(j), the pattern is shifted to the right of the text by at least one position; hence there are at most n such iterations. This completes the proof. []
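The constant-time descendant test used above can be illustrated on any rooted tree. A sketch with a hypothetical adjacency-dict representation (ours, not the paper's):

```python
def dfs_intervals(tree, root):
    """Number the nodes in depth-first-search order so that the
    descendants of u are exactly the nodes v with
    enter[u] <= enter[v] < exit_[u]."""
    enter, exit_ = {}, {}
    counter = 0

    def dfs(v):
        nonlocal counter
        enter[v] = counter
        counter += 1
        for w in tree.get(v, []):
            dfs(w)
        exit_[v] = counter

    dfs(root)
    return enter, exit_

def is_descendant(u, v, enter, exit_):
    # Constant-time test: is v in the subtree rooted at u?
    return enter[u] <= enter[v] < exit_[u]
```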

The Second Approach. Here we considerably improve the complexity of the search phase of the algorithm. This increases the cost of the preprocessing phase which, however, remains linear. In the Turbo_RF' algorithm, at a decision point, we sometimes have to spend linear time to make many iterations of type P^h(j). In this new version we compute the shift in constant time. It is enough to show how to preprocess the suffix tree T for p' so that nextsuf(v) can be computed for any factor v of p' in constant time whenever it is needed. First we show how to compute nextsuf(v) for main factors, i.e., factors corresponding to nodes of T. Let us identify the main factors with their corresponding nodes. The computation is done in a bottom-up manner on the tree T.

Case of a bottom node: v is a leaf. Then nextsuf(v) = nil.

Case of an internal node: assume v has sons v1, v2, ..., vq; then there exists a son vj such that nextsuf(v) = nextsuf(vj) or, if vj is a leaf, then nextsuf(v) = vj. We scan the sons vj of v and, for each of them, we check whether nextsuf(vj) or vj is a good candidate for nextsuf(v). We choose the longest good candidate, if there is one. Otherwise the result is nil.

The word v is a prefix of each of the candidates. What exactly does it mean for a word y, that has v as a prefix, to be a good candidate? Let u be the prefix of the pattern p of length m - |v|. The candidate y is good iff the prefix of y of length |y| - |v| is a suffix of u (see Figure 8). This means that the prefix of the pattern which starts at position |u| - (|y| - |v|) continues to the end of u. We have to be able to check this situation in constant time. It is enough to have a table PREF such that, for each position k in the pattern, the value PREF[k] is the length of the longest prefix of p which starts at position k in p.
This table can be easily computed in linear time as a side effect of the Knuth-Morris-Pratt string-matching algorithm. After the table is computed, we can check in O(1) time whether each candidate is good or not. For nodes which are not main, we set nextsuf(v) = y, where y = nextsuf(repr(v)), if y is a good candidate for v, i.e., PREF[k] ≥ |u| - k, where k = |u| - (|y| - |v|). Hence, after the preprocessing we keep a certain amount of additional data: the suffix tree, the table of nextsuf(v) for all main nodes of this tree, and the table PREF. Altogether this needs only linear-size memory, and it is later accessed in a read-only way.

Fig. 8. The candidate y is good iff PREF[k] ≥ |u| - k, where k = |u| - (|y| - |v|).
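The PREF table can be written down directly; a brute-force Python sketch for clarity (the paper obtains it in linear time as a side effect of Knuth-Morris-Pratt preprocessing), using the example pattern aabbabd from Figure 7:

```python
def pref_table(p):
    """PREF[k] = length of the longest prefix of p starting at the
    1-indexed position k of p, as in the paper (PREF[0] is unused)."""
    m = len(p)
    PREF = [0] * (m + 1)
    for k in range(1, m + 1):
        l = 0
        while k - 1 + l < m and p[l] == p[k - 1 + l]:
            l += 1
        PREF[k] = l
    return PREF
```

For p = aabbabd this gives PREF = [0, 7, 1, 0, 0, 1, 0, 0].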

THEOREM 4. The pattern can be preprocessed in linear time in such a way that the computation of the RF_shift in the Turbo_RF algorithm can be accomplished in constant time whenever it is needed. The data structure built in the preprocessing phase is read-only during the search phase. Only a constant read-write memory is used during the search phase.

Denote by Turbo_RF'' the version of the Turbo_RF algorithm in which the RF_shift is computed at decision points according to Theorem 4. The resulting Turbo_RF'' algorithm can be viewed as automata-oriented string matching. We scan the text backward and change the state of the automaton. The shift is then specified by the state of the automaton where the scanning stops. Applying this idea directly gives a kind of Boyer-Moore automaton of polynomial (but not linear) size [L]. However, it is enough to keep in memory only a linear part of such an automaton (implied by the preprocessing referred to in the theorem).

## 4. The Analysis of the Average-Case Complexity of the Algorithm RF and Its Versions

Denote by r the size of the alphabet. Assume r > 1, and log m ≤ 3·m/8 (all logarithms are to base r). We consider the situation when the text is random. The probability of the occurrence of a specified letter at the ith position is 1/r and does not depend on the letters at other positions.

THEOREM 5. The expected time of the RF algorithm is O(n·log(m)/m).

PROOF. Let L_i be the length of the shift in the ith iteration of the algorithm and let S_i be the length of the substring of the pattern that is found in this iteration. Examine the first iteration of the algorithm. There are no more than m subwords of the pattern of length 2·log m, and there are r^(2·log m) = m^2 possibilities for the word that we read from the text, which are all equally probable. Thus, with probability at least 1 - 1/m, S_1 < 2·log m, so L_1 > m - 2·log m ≥ m/4.
Let us call the ith shift long iff L_i ≥ m/4, and short otherwise. Divide the computation into phases. Each phase ends on the first long shift. This means that there is exactly one long shift in each phase. It is obvious (by the definition of a long shift) that there are O(n/m) phases in the algorithm. We now prove that the expected cost of each phase is O(log m).

CLAIM 1. Assume that shifts i and i + 1 are both short. Then, with probability more than 1/2, the (i + 2)th shift is long.

PROOF. If the ith and (i + 1)th shifts are both short, then the pattern is of the form v(wv)^k s z, where k ≥ 3, w, z ≠ ε, |wv| = L_{i+1}, and |sz| = L_i (s may be empty when L_i < L_{i+1}). Without loss of generality we can assume that wv is the minimal period of v(wv)^k, in the sense that if there exists a word w'v' such that v(wv)^k = v'(w'v')^{k'} and |w'v'| ≤ |wv|, then w'v' = wv. We can also assume (possibly changing wv and k) that (wv)^k and sz do not have a common prefix. Now we have

|wv| ≤ L_{i+1} and |sz| ≤ L_i. A suffix of the read part of the text is of the form (wv)^k s, and we have Δ = min(L_{i+1}, L_i) new symbols to read in the (i + 2)th iteration. Let y be a random word of length Δ to be read. Note that |z| ≤ |y|. The question is: what is the probability that wvsy is a substring of the pattern? It is easy to see that if wvsy is a substring of v(wv)^k s z, then y must be equal either to pref(z) if s ≠ ε, or to pref((wv)^l s z) otherwise, so there are at most Δ possibilities for such a y, because the length of y is fixed. Thus the probability that reading Δ new symbols leads to a long substring of the pattern (longer than L_i + L_{i+1}, which is less than 2·m/4) is not greater than Δ/r^Δ ≤ 1/2. So, with probability ≥ 1/2, L_{i+2} = m - P_{i+2} ≥ m - S_{i+2} ≥ m/2 > m/4. This completes the proof of Claim 1. []

CLAIM 2. The probability that the kth shift of the phase is short is ≤ 1/2 for k ≥ 3.

PROOF. The assumptions say that the (k - 1)th and (k - 2)th shifts are also short, so by Claim 1 the kth shift is long with probability > 1/2. This completes the proof of Claim 2. []

We now end the proof of Theorem 5. Let X be the random variable which is the number of short shifts in the phase. What can we say about the distribution of X?

    Pr(X = 0) ≥ 1 - 1/m,
    Pr(X = 1) ≤ 1/m,
    Pr(X = 2) ≤ 1/m,
    Pr(X = 3) ≤ 1/(2·m),
    Pr(X = k) ≤ 1/(2^(k-2)·m)   for k ≥ 2.

Let Y be the random variable which is the cost of the phase. Y is a function of the random variable X and Y ≤ 2·log m + X·m. Hence

    E(Y) ≤ 2·log m · Pr(X = 0) + (2·log m + m) · Pr(X = 1) + Σ_{k≥2} Pr(X = k) · (2·log m + k·m)
         ≤ O(log m) + (2·log m / m) · Σ_{k≥2} 1/2^(k-2) + Σ_{k≥2} k/2^(k-2) = O(log m).

This completes the proof of Theorem 5. []

## 5. Speeding Up the Boyer-Moore Algorithm

The linear-time complexity of the Boyer-Moore algorithm [BM] is quite nontrivial. The first proof of the linearity

of the algorithm appeared in [KMP]. However, it took more than a decade for a full analysis. Cole has proved that the algorithm makes at most 3·n comparisons (see [Co]), and that this bound is tight. The "mysterious" behavior of the algorithm is due to the fact that it forgets the history, and the same part of the text can be scanned an unbounded number of times. The whole "mystery" disappears when the whole history is memorized and additional O(m)-size memory is used. Then, in successful comparisons, each position of the text is inspected at most once. The resulting algorithm is an elegant string-matching algorithm (see [AG]) with a straightforward analysis of the text-searching phase. However, it requires more preprocessing and more tables than the original BM algorithm. In our approach no extra preprocessing is needed, and the only table we keep is the original table of shifts used in the BM algorithm. Hence, all extra memory is of a constant size (two integers). The resulting Turbo_BM algorithm forgets all its history except the most recent one, and its behavior again has a "mysterious" character. Despite that, the complexity is improved and the analysis is simple. The main feature of the Turbo_BM algorithm is that, during the search, one factor of the pattern that matches the text at the current position is memorized (this factor can be empty). This has two advantages:

- It can lead to a jump over the memorized factor during the scanning phase.
- It allows the execution of what we call a Turbo_shift.

We now explain what a Turbo_shift is. Let x (match) be the longest suffix of p that matches the text at a given position. Also, let y (memory) be the memorized factor that matches the text at the same position. We assume that x and y do not overlap (for some nonempty word z, yzx is a suffix of p). For different letters a and b, ax is a suffix of p aligned with bx in the text (see Figure 9).
The only interesting situation is when y is nonempty, which occurs only immediately after a BM_shift (a shift determined by the BM_shift function). Let shift be its length. A Turbo_shift can occur when x is shorter than y. In this situation ax is a suffix of y. Thus a and b occur at distance shift in the text. However, the suffix yzx of p has period shift (by definition of the BM_shift), and thus it cannot overlap both occurrences of the letters a and b in the text. As a consequence, the smallest valid shift of the pattern is |y| - |x|, which we call a Turbo_shift.

Fig. 9. Turbo_shift = memory - match. Distinct letters a and b in the text are at distance h, and h is a period of the right part of the pattern.
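The scan, the jump over the memorized factor, and the Turbo_shift can be made concrete in a short Python sketch. This is not the authors' code: indices are 0-based, the BM match-shift table is computed by brute force purely for clarity, and only the match-shift heuristic is used (occurrence shifts are discussed later); the names are ours.

```python
def strong_good_suffix(p):
    """Brute-force O(m^3) BM match-shift table, kept simple for exposition.
    tbl[j] = smallest shift consistent with a mismatch at pattern position j."""
    m = len(p)
    tbl = []
    for j in range(m):
        s = 1
        while s < m:
            # the matched suffix p[j+1..m-1] must reoccur under the shift s ...
            suffix_ok = all(k - s < 0 or p[k - s] == p[k] for k in range(j + 1, m))
            # ... preceded by a different letter (or fall off the left end)
            if suffix_ok and (j - s < 0 or p[j - s] != p[j]):
                break
            s += 1
        tbl.append(s)
    return tbl

def turbo_bm(text, p):
    """Turbo_BM search phase: remembers the factor matched before the last
    shift (variable `memory`) and uses it to jump and to turbo-shift."""
    n, m = len(text), len(p)
    gs = strong_good_suffix(p)
    res, i, memory, shift = [], 0, 0, m
    while i <= n - m:
        j = m - 1
        while j >= 0 and p[j] == text[i + j]:
            j -= 1
            if memory != 0 and j == m - 1 - shift:
                j -= memory                    # jump over the memorized factor
        if j < 0:
            res.append(i)
            shift = gs[0]                      # shift by the period of p
            memory = m - shift
        else:
            match = m - 1 - j                  # length of the matched suffix x
            turbo_shift = memory - match       # = |y| - |x|
            if turbo_shift > gs[j]:
                shift = max(turbo_shift, match + 1)
                memory = 0
            else:
                shift = gs[j]
                memory = min(m - shift, match)
        i += shift
    return res

print(turbo_bm("ababcabab", "abab"))  # -> [0, 5]
```

The `memory`/`shift` pair is exactly the constant extra memory (two integers) mentioned above.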

Fig. 10. Turbo_shift = memory - match. Distinct letters a and b in the text are at distance h, and h is a period of the right part of the pattern.

In the Turbo_BM algorithm the length of the shift is shift = max(BM_shift(x), Turbo_shift). From the analysis of the BM algorithm itself, and the above computation of shifts, it is straightforward to derive a correctness proof of the Turbo_BM algorithm. In the Turbo_BM algorithm below, the reader can note that, in case a BM_shift does not apply, the length of the actual shift is made greater than the length of the matched suffix x. The proof of correctness of this fact is similar to the above argument and is explained in Figure 10.

Algorithm Turbo_BM /* reversed-suffix string matching */
  i := 0; memory := 0;
  while i <= n - m do {
    j := m;
    while j > 0 and x in SUF(p) do
      if memory <> 0 and j = m - shift then j := j - memory /* jump */
      else j := j - 1;
    if x = p then report match at position i;
    Turbo_shift := memory - match; /* match = |x| - 1 */
    if Turbo_shift > BM_shift[j] then
      { i := i + max(Turbo_shift, match + 1); memory := 0; }
    else
      { i := i + BM_shift[j]; memory := min(m - shift, match); }
  }
end.

THEOREM 6. The search phase of the Turbo_BM algorithm is linear. It makes at most 2·n comparisons.

PROOF. We decompose the search into stages. Each stage is itself divided into two operations: scan and shift. At stage k we call suf_k the suffix of the pattern that matches the text, and write suf_k for its length as well. It is preceded by a letter that does not

match the aligned letter in the text (in the case where suf_k is not p itself). We also call shift_k the length of the shift done at stage k. Consider three types of stages according to the nature of the scan and of the shift. We say that the shift at stage k is short if 2·shift_k < suf_k + 1. The three types are:

(i) A stage followed by a stage with a jump.
(ii) A stage with a long shift, not followed by a stage with a jump.
(iii) A stage with a short shift, not followed by a stage with a jump.

The idea of the proof is to amortize comparisons with shifts. We define cost_k as follows:

- If stage k is of type (i), cost_k = 1.
- If stage k is of type (ii) or (iii), cost_k = suf_k + 1.

In the case of a type (i) stage, the cost corresponds to the only mismatch comparison. Other comparisons done during the same stage are reported to the cost of next stages. The total number of comparisons executed by the algorithm is the sum of the costs. We want to prove Sum(costs) <= 2·Sum(shifts). In the second sum the length of the last shift is replaced by m. Even with this assumption, we have Sum(shifts) <= n, and, if the above inequality holds, so does the result, Sum(costs) <= 2·n. For stage k of type (i), cost_k (= 1) is trivially less than 2·shift_k, because shift_k > 0. For stage k of type (ii), cost_k = suf_k + 1 <= 2·shift_k, by definition of long shifts. It remains to consider stages of type (iii). Since in this situation we have shift_k <= suf_k, the only possibility is that a BM_shift is applied at stage k. Memory is then set up. At the next stage, k + 1, the memory is not empty, which leads to a potential Turbo_shift. The situation at stage k + 1 is the general situation when a Turbo_shift is possible (see Figure 11). Before continuing the proof, we first consider two cases and establish inequalities (on the cost of stage k) that are used later.

Case (a): suf_k + shift_k <= |p|. By definition of the Turbo_shift, we have suf_k - suf_{k+1} <= shift_{k+1}.
Thus cost_k = suf_k + 1 <= suf_{k+1} + shift_{k+1} + 1 <= shift_k + shift_{k+1}.

Case (b): suf_k + shift_k > |p|. By definition of the Turbo_shift, we have suf_{k+1} + shift_k + shift_{k+1} >= m. Then cost_k <= m <= 2·shift_k + shift_{k+1}.

We can consider that at stage k + 1 Case (b) occurs, because this gives the higher bound on cost_k (this is true if shift_k >= 2; the case shift_k = 1 can be treated directly). If stage k + 1 is of type (i), then cost_{k+1} = 1, and then cost_k + cost_{k+1} <= 2·shift_k + shift_{k+1}, an even better bound than expected. If at stage k + 1 we have suf_{k+1} <= shift_{k+1}, then we get what was expected: cost_k + cost_{k+1} <= 2·shift_k + 2·shift_{k+1}.

Fig. 11. Costs of stages k and k + 1 correspond to the shadowed areas plus mismatches. If shift_k is small, then shift_k + shift_{k+1} is big enough to amortize the costs partially.

The last situation to consider is when, at stage k + 1, we have suf_{k+1} > shift_{k+1}. This means, as previously mentioned, that a BM_shift is applied at stage k + 1. Thus the above analysis also applies at stage k + 1, and, since only Case (a) can occur then, we get cost_{k+1} <= shift_{k+1} + shift_{k+2}. We finally get cost_k + cost_{k+1} <= 2·shift_k + 2·shift_{k+1} + shift_{k+2}. The last argument proves the first step of an induction: if all stages k to k + j are such that suf_k > shift_k, ..., suf_{k+j} > shift_{k+j}, then cost_k + ... + cost_{k+j} <= 2·shift_k + ... + 2·shift_{k+j} + shift_{k+j+1}. Let k' be the first stage after stage k such that suf_{k'} <= shift_{k'}. The integer k' exists because the contrary would produce an infinite sequence of shifts with decreasing lengths. We then get cost_k + ... + cost_{k'} <= 2·shift_k + ... + 2·shift_{k'}, which shows that Sum(cost_k) <= 2·Sum(shift_k), as expected. []

REMARK (On the Additional Application of Occurrence Shifts in the Turbo_BM Algorithm). In the Turbo_BM algorithm we deal only with match shifts of the BM algorithm. If the alphabet is binary, then occurrence shifts in the BM algorithm are useless. Generally, for small alphabets the occurrence heuristics have little effect.
For bigger alphabets, we can include the occurrence shifts in the algorithm. The version of Turbo_BM including occurrence shifts is given below. In case an

Fig. 12. If occ_shift > match_shift or if Turbo_shift > match_shift, then there is a symbol b in the pattern such that a differs from b. We then have an additional Turbo_shift of size add_shift = match = |x| - 1.

occurrence shift is possible, the length of the shift is made greater than match. This is similar to the case where a Turbo_shift applies. The proof of correctness is also similar, and is explained in Figure 12. The match shift of Boyer-Moore is a period of the segment match of the text (Figure 12). At the same time we have two distinct symbols a, b whose distance is the period of the segment. Hence we know that the shift is at least match + 1. This shows that Theorem 6 is still valid for the Turbo_BM algorithm with occurrence shifts.

Algorithm Turbo_BM with occurrence shifts /* reversed-suffix string matching */
  i := 0; memory := 0;
  while i <= n - m do {
    j := m;
    while j > 0 and x in SUF(p) do
      if memory <> 0 and j = m - shift then j := j - memory /* jump */
      else j := j - 1;
    if x = p then report match at position i;
    Turbo_shift := memory - match; /* match = |x| - 1 */
    shift := max(BM_shift[j], BM_occ_shift[j], Turbo_shift);
    if shift > BM_shift[j] then
      { i := i + max(shift, match + 1); memory := 0; }
    else
      { i := i + shift; memory := min(m - shift, match); }
  }
end.

THEOREM 7. The Turbo_BM algorithm with occurrence shifts makes at most 2·n comparisons.
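The occurrence shift BM_occ_shift is the classical bad-character heuristic of Boyer-Moore. A minimal Python sketch of its precomputation (0-based indices; the function name is ours, not the paper's):

```python
def occurrence_shift_table(p):
    """table[c] = distance from the rightmost occurrence of c in p[0..m-2]
    to the last pattern position; characters absent from the pattern get m."""
    m = len(p)
    table = {}
    for k, c in enumerate(p[:-1]):   # the last position itself is excluded
        table[c] = m - 1 - k         # later (more to the right) k wins
    return table

# On a mismatch of text character c against pattern position j, the
# occurrence shift is max(1, table.get(c, m) - (m - 1 - j)).
print(occurrence_shift_table("gcagagag"))
```

On the classic example pattern "gcagagag" this gives a -> 1, c -> 6, g -> 2, and 8 for every other character.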

Table 1. Experiments on a binary alphabet.

6. Experimental Results. In this section we present experiments done on the Turbo_RF algorithm versus the BM algorithm. They show the good behavior of the Turbo_RF algorithm, which seems faster than the fastest practical algorithm. The experiments count the number of inspections of characters of the text. We have implemented both algorithms in the C language. We use the binary alphabet {0, 1}. The text, randomly built on this alphabet, has length 15,000 (49.59% of 0 and 50.41% of 1). We search for all the patterns of length from 2 to 7. For longer patterns we search for 100 different patterns randomly chosen. The values given in Table 1 are the average numbers of inspections, c, over all symbols of the text and all considered patterns. Table 1 is translated into Figures 13 and 14.

Fig. 13. Experiments for short patterns (filled squares, the BM algorithm; open squares, the Turbo_RF algorithm).
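The counting setup can be reproduced in a few lines. The sketch below is a stand-in, not the authors' C implementation: it uses the simple Horspool variant of BM for the scan, but it mirrors the experimental protocol (a random binary text of length 15,000) and reports the average number of inspections c per text symbol.

```python
import random

def horspool_count(text, pat):
    """Right-to-left scan with the bad-character shift of Horspool (a BM
    variant); returns (match positions, number of symbol inspections)."""
    m, n = len(pat), len(text)
    shift = {c: m - 1 - k for k, c in enumerate(pat[:-1])}
    matches, inspections, i = [], 0, 0
    while i <= n - m:
        j = m - 1
        while j >= 0:
            inspections += 1                 # one text-symbol inspection
            if text[i + j] != pat[j]:
                break
            j -= 1
        if j < 0:
            matches.append(i)
        i += shift.get(text[i + m - 1], m)   # shift on the rightmost symbol
    return matches, inspections

random.seed(42)                              # fixed seed for reproducibility
text = "".join(random.choice("01") for _ in range(15000))
matches, c = horspool_count(text, "0110101")
print(len(matches), c / len(text))           # c/n = inspections per symbol
```

Plugging in an RF- or Turbo_RF-style scanner instead of the inner loop would reproduce the comparison reported in Table 1.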

Fig. 14. Experiments for long patterns (filled squares, the BM algorithm; open squares, the Turbo_RF algorithm).

These results show that, for a binary alphabet, the Turbo_RF algorithm is always better than the BM algorithm. It even achieves a score of less than 1/10 for patterns longer than 80 characters. These experimental results confirm the theoretical properties of the Turbo_RF algorithm. The Remark in Section 5 and the Appendix both show the flexibility of the RF and Turbo_BM algorithms. Many changes and improvements are possible; probably the most interesting is the possibility of an extension to multipattern matching. We believe that the RF algorithm is a practical algorithm. It is usually extremely fast. The constant coefficient is small. A reasonable number of computer experiments performed so far support this claim.

Appendix. Computation of Periods in the Turbo_RF Algorithm. We show how the period of the prefix u of the pattern can be computed dynamically in the Turbo_RF algorithm. This means that the table of periods for all prefixes of the pattern need not be precomputed nor stored throughout the algorithm. The general situation is as follows. We have to recognize a prefix u of the text and we are scanning the portion v of the text which consists of m - |u| characters to the right of u. Let u = (u1 u2)^r u1 with |u1 u2| = per(u).

Case 1: A mismatch appears when scanning for u2 u1 v. In this situation the period of the new prefix can be computed from the difference of the two last positions where the pattern can start (see Figure 15).

Case 2: No mismatch appears when scanning for u2 u1 v.

Case 2.1: |v| <= per(u). In this situation the period of the new prefix is equal to per(u), because this new prefix is also a prefix of u greater than per(u) (see Figure 16).

Case 2.2: |v| > per(u).
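As background for the appendix: the table of periods of all prefixes that the dynamic computation avoids storing could be obtained offline from the classical border (failure-function) table, since per(u) = |u| - border(u). A sketch of this standard KMP-style precomputation (our code, not the paper's):

```python
def prefix_periods(p):
    """Return per(p[:1]), per(p[:2]), ..., per(p) via the border table.
    b[i] is the length of the longest proper border of the prefix p[:i]."""
    m = len(p)
    b = [0] * (m + 1)
    k = 0
    for i in range(1, m):
        while k > 0 and p[i] != p[k]:
            k = b[k]                  # fall back to a shorter border
        if p[i] == p[k]:
            k += 1
        b[i + 1] = k
    # per(u) = |u| - (longest proper border of u)
    return [i - b[i] for i in range(1, m + 1)]

print(prefix_periods("abaab"))  # -> [1, 2, 2, 3, 3]
```

This O(m)-time table is exactly what the case analysis below renders unnecessary during the search.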
Let L be the length of the path between the start state and the last final state encountered when scanning for u2 u1 v in the automaton.

Case 2.2.1: L > |v|. In this case L must be equal to |v| + |u1|, because L > |v| means that the beginning of the pattern is in u2 u1, and by the minimality of

Fig. 15. A mismatch occurs while scanning u2 u1 v.

Fig. 16. |v| <= per(u).

Fig. 17. L > |v|.

Fig. 18. L <= |v|.


### CSC 310: Information Theory

CSC 310: Information Theory University of Toronto, Fall 2011 Instructor: Radford M. Neal Week 2 What s Needed for a Theory of (Lossless) Data Compression? A context for the problem. What are we trying

### The minimum number of monochromatic 4-term progressions

The minimum number of monochromatic 4-term progressions Rutgers University May 28, 2009 Monochromatic 3-term progressions Monochromatic 4-term progressions 1 Introduction Monochromatic 3-term progressions

### Some Polynomial Theorems. John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom.

Some Polynomial Theorems by John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom.com This paper contains a collection of 31 theorems, lemmas,

Lecture 10 Union-Find The union-nd data structure is motivated by Kruskal's minimum spanning tree algorithm (Algorithm 2.6), in which we needed two operations on disjoint sets of vertices: determine whether

### 1 if 1 x 0 1 if 0 x 1

Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

### Towards Optimal Firewall Rule Ordering Utilizing Directed Acyclical Graphs

Towards Optimal Firewall Rule Ordering Utilizing Directed Acyclical Graphs Ashish Tapdiya and Errin W. Fulp Department of Computer Science Wake Forest University Winston Salem, NC, USA nsg.cs.wfu.edu Email:

### Graphs without proper subgraphs of minimum degree 3 and short cycles

Graphs without proper subgraphs of minimum degree 3 and short cycles Lothar Narins, Alexey Pokrovskiy, Tibor Szabó Department of Mathematics, Freie Universität, Berlin, Germany. August 22, 2014 Abstract

### Bounded-width QBF is PSPACE-complete

Bounded-width QBF is PSPACE-complete Albert Atserias 1 and Sergi Oliva 2 1 Universitat Politècnica de Catalunya Barcelona, Spain atserias@lsi.upc.edu 2 Universitat Politècnica de Catalunya Barcelona, Spain

### root node level: internal node edge leaf node CS@VT Data Structures & Algorithms 2000-2009 McQuain

inary Trees 1 A binary tree is either empty, or it consists of a node called the root together with two binary trees called the left subtree and the right subtree of the root, which are disjoint from each

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### Theory of Computation Prof. Kamala Krithivasan Department of Computer Science and Engineering Indian Institute of Technology, Madras

Theory of Computation Prof. Kamala Krithivasan Department of Computer Science and Engineering Indian Institute of Technology, Madras Lecture No. # 31 Recursive Sets, Recursively Innumerable Sets, Encoding

### AN ALGORITHM FOR DETERMINING WHETHER A GIVEN BINARY MATROID IS GRAPHIC

AN ALGORITHM FOR DETERMINING WHETHER A GIVEN BINARY MATROID IS GRAPHIC W. T. TUTTE. Introduction. In a recent series of papers [l-4] on graphs and matroids I used definitions equivalent to the following.

### Computing Primitively-Rooted Squares and Runs in Partial Words

Computing Primitively-Rooted Squares and Runs in Partial Words F. Blanchet-Sadri 1, Jordan Nikkel 2, J.D. Quigley 3, and Xufan Zhang 4 1 Department of Computer Science, University of North Carolina, P.O.

### It is not immediately obvious that this should even give an integer. Since 1 < 1 5

Math 163 - Introductory Seminar Lehigh University Spring 8 Notes on Fibonacci numbers, binomial coefficients and mathematical induction These are mostly notes from a previous class and thus include some

### Symbol Tables. Introduction

Symbol Tables Introduction A compiler needs to collect and use information about the names appearing in the source program. This information is entered into a data structure called a symbol table. The

### A simple algorithm with no simple verication

A simple algorithm with no simple verication Laszlo Csirmaz Central European University Abstract The correctness of a simple sorting algorithm is resented, which algorithm is \evidently wrong" at the rst

### Lecture 4: BK inequality 27th August and 6th September, 2007

CSL866: Percolation and Random Graphs IIT Delhi Amitabha Bagchi Scribe: Arindam Pal Lecture 4: BK inequality 27th August and 6th September, 2007 4. Preliminaries The FKG inequality allows us to lower bound

### Binary Heap Algorithms

CS Data Structures and Algorithms Lecture Slides Wednesday, April 5, 2009 Glenn G. Chappell Department of Computer Science University of Alaska Fairbanks CHAPPELLG@member.ams.org 2005 2009 Glenn G. Chappell