AutoTest: A Tool for Automatic Test Case Generation in Spreadsheets
Robin Abraham and Martin Erwig
School of EECS, Oregon State University
[abraharo,erwig]@eecs.oregonstate.edu

Abstract

In this paper we present a system that helps users test their spreadsheets using automatically generated test cases. The system generates the test cases by backward propagation and solution of constraints on cell values. These constraints are obtained from the formula of the cell that is being tested when we try to execute all feasible DU associations within the formula. AutoTest generates test cases that execute all feasible DU pairs. If infeasible DU associations are present in the spreadsheet, the system is capable of detecting and reporting all of them to the user. We also present a comparative evaluation of our approach against the Help Me Test mechanism in Forms/3 and show that our approach is faster and produces test suites that give better DU coverage.

1 Introduction

Studies have shown that there is a high incidence of errors in spreadsheets [0], up to 90% in some cases [0]. These errors often lead to companies and institutions losing millions of dollars [3, 4, 4]. A recent study has also shown that spreadsheets are among the most widely used programming systems []. To carry out testing in commercially available spreadsheet systems like Microsoft Excel, users are forced to proceed in an ad hoc manner because tool support for testing is not available. In particular, the lack of tools leaves users without information about how much of their spreadsheet has been tested. In this situation, users come away with a very high level of confidence about the correctness of their spreadsheets even when, in reality, their spreadsheets contain many non-trivial errors [8]. This situation is highly problematic because users have spreadsheets with potentially many errors in them, but they are not aware of it.
Given the dilemma that testing does not guarantee the correctness of a program, how does a practitioner go about testing a program? Researchers have come up with test adequacy criteria that allow the tester to decide when to stop testing. Different test adequacy criteria provide different levels of confidence about the absence of faults in the program being tested. One often employed criterion is DU adequacy, which requires that each possible path from any definition to all its uses be covered by a test case. The relative effectiveness of this and other criteria at fault detection has been compared in [6, 6]. (This work is partially supported by the National Science Foundation under grant ITR and by the EUSES Consortium. The WYSIWYT methodology, explained later, has been implemented for the Forms/3 spreadsheet system and is currently being ported to Excel.)

In addition to monitoring the coverage of the test cases, a user is faced with the problem of inventing new test cases, which is generally tedious and prone to errors. In addition, it might not always be immediately clear to the user whether a new test case really improves the coverage. This is where an automatic tool for generating test cases comes into play: the user only has to inspect suggested test cases and approve or reject them. Generation and monitoring of coverage is reliably and automatically handled by the system. Regarding DU coverage, we can observe that in the case of a single spreadsheet cell, different paths that require different test cases can principally result only from IF expressions in that cell's formula. In general, through nested IF expressions, each cell gives rise to a tree of subexpressions that need different test cases to be executed. The method that underlies our spreadsheet testing tool AutoTest is based on representing expressions as trees in which internal nodes carry conditions of IF expressions, and leaves of the tree carry arbitrary expressions.
The edges of the tree are labeled T or F, leading to the expressions of the then and else branches. From such an expression tree we can generate, in several steps, constraints that are solved to yield test cases covering all expressions in the tree. In the next section, we describe related work. In Section 3 we describe the scenario of an end user working with a spreadsheet, faced with the problem of testing it. In Section 4, we describe formally what it means to test a spreadsheet and what constitutes spreadsheet test cases. The notion of DU coverage as a test adequacy criterion is presented in Section 5, and in Section 6 we describe how AutoTest generates DU-adequate test cases. A comparative evaluation of AutoTest against the Help Me Test (HMT) [3] test case generation system of Forms/3 [7] is described in Section 7. We present conclusions and plans for future work in Section 8.

2 Related Work

In earlier work we have developed the systems described in [, ] that allow end users to create specifications of their spreadsheets and then use the specifications to generate spreadsheets that are provably free from reference, range, and type errors. The system described in [3] enables users to extract the specifications (also called templates) from their spreadsheets so they can adopt and work within the safety of these specification-based approaches. The systems described in [5, 6] allow the user to carry out consistency checking of spreadsheet formulas on the basis of user annotations, or on the basis of automatically inferred headers [], and flag the inconsistent formulas as potential faults. Consistency checking can also be carried out using assertions on the range of values allowed in spreadsheet cells [8]. Approaches like the ones described usually require additional effort from the user. For example, in order to be able to use the specification-based approach to the generation of safe spreadsheets [], the user has to learn the specification language [4]. Sometimes these systems have only limited expressiveness. On the other hand, static analysis techniques cannot find all faults. The systems described above that do consistency checking of spreadsheets do so without any information about the specifications from which the spreadsheet was created. As a result of this shortcoming, we could have spreadsheets that pass the consistency check and are still not correct with respect to the specifications. As an alternative to static analysis and program generation techniques, testing has been used as a means of identifying faults in spreadsheets and thus improving the correctness of programs by removing the faults. Much effort in the area of testing has focused on automation because of the high costs involved in testing. Effort invested in automating testing pays off in the long run when the user needs to test programs after modifications. This aspect makes a strong case in favor of systematically building test suites, based on some coverage criterion, that can be run in as little time as possible, resulting in thorough testing of the program.
The What You See Is What You Test (WYSIWYT) methodology for testing spreadsheets [] allows users to test their spreadsheets to achieve DU adequacy. WYSIWYT has been developed for the Forms/3 spreadsheet language [7] and gives the user feedback on the overall level of testedness of the spreadsheet by means of a progress bar. Help Me Test (HMT) is a component of the Forms/3 engine that does automatic test case generation to help users minimize the cost of testing their spreadsheets [3]. Automatic test case generation has also been studied for general-purpose programming languages [9, 5]. The WYSIWYT methodology has been evaluated within the Forms/3 environment and found to be quite helpful in detecting faults in end-user spreadsheets [9]. Detecting faults is only the first step in correcting a spreadsheet. Fixing incorrect formulas is generally required to remove faults. The spreadsheet debugger described in [] exploits end users' understanding of their problem domain and their expectations about values computed by the spreadsheet. The system allows users to mark cells with incorrect output and specify their expected output. The system then generates a list of change suggestions that would result in the expected output being computed in the marked cell. The user can simply pick from the list of automatically generated change suggestions, thereby minimizing the number of formula edits they have to perform manually.

3 A Scenario

Nancy is the office manager of a small-sized firm and has developed the spreadsheet shown in Figure 1 to keep track of the office supplies. The amount in the budget cell B is the budget allowed for the purchase of office supplies. Rows 4, 5, and 6 store information about the different items that need to be purchased. B4 has the number of pens that need to be ordered, C4 has the cost per pen, and the formula in D4 computes the product of the numbers in B4 and C4 to calculate the proposed expenditure on the purchase of pens.
Similarly, rows 5 and 6 keep track of the proposed expenses for paper clips and paper, respectively. The formula in B8 checks that none of the numbers in B4, B5, or B6 is less than zero. If one or more of the numbers are less than 0, the cell output is 1 to flag the error. Otherwise, the cell output is 0. Cell D7 contains the formula IF(B8=1, -1, D4+D5+D6), which computes the total cost across the three items if the error flag in B8 is set to 0. If the error flag in B8 is set to 1, the formula results in -1. The formula in B9 checks if the total proposed expenditure is within the maximum allowed budget for office supplies.

Figure 1. Office supplies spreadsheet

After creating the spreadsheet, Nancy goes through the formula cells, one at a time, to ensure that the formulas look correct to the best of her knowledge. (Code inspection of spreadsheet formulas done by individuals working alone has been shown to detect 63% of errors, and group code inspection has up to an 83% success rate at detecting errors [7].) She then uses historical data from the previous month as input to verify that the spreadsheet output matches the actual expenses incurred. Once this verification is done, Nancy is confident about the correctness of her spreadsheet and starts using it for planning the office expenses. (The office budget spreadsheet shown in Figure 1 was among the spreadsheets used in the evaluation described in Section 7.) Overconfidence in the correctness of her spreadsheet might even keep her from doing the cursory testing the next time she modifies the spreadsheet.

From a software engineering perspective, the single set of test inputs Nancy used would not qualify as adequate testing of the spreadsheet. Even if she uses historical data from a few more months, she might not necessarily gain coverage, since the inputs might only cause the execution of the same parts of the spreadsheet program. Given the nonexistent support for testing in Microsoft Excel, Nancy would have to come up with test cases on her own, without knowing whether the new tests actually result in more thorough testing of her spreadsheet. Moreover, with no feedback on meeting any test adequacy criterion, she would have no idea when she can consider her spreadsheet well tested.

Using the AutoTest system, Nancy can simply right-click on the cell whose formula she wants to test and pick the option Test formula from the popup menu. Assuming Nancy asks AutoTest to test the formula in cell B9, the system generates a set of candidate test cases for the formula in the cell and presents it to Nancy as shown in Figure 2. A candidate test case is defined as the set of inputs generated by the system, together with the corresponding output computed by the formula that is to be tested.

Figure 2. Automatically generated test cases for B9

AutoTest allows the user to do any one of the following three things to a candidate test case.

1. Users can validate generated test cases, thereby indicating that the computed output matches the expected output for the formula given the generated input values. Once a user validates a test case, it is moved to the test suite, and it is displayed on the interface in green-colored font.

2. Users can flag generated test cases to indicate that the computed output value is incorrect given the generated inputs. This action implies the formula is faulty since it is computing the wrong result. A flagged test case is displayed in red-colored font on the interface, and the cell with the formula that is being tested is also shaded red. The user can inspect the formula within the cell and make changes to correct it, since testing detected a failure. Once the formula has been modified, the user can revisit the corrected test case to ensure the computed output matches the expected output for the cell, and then validate the test case.

3. Users can ignore generated test cases if they are unable to decide whether the computed output is right or wrong. They can come back to them at any later point during the course of testing.

For every candidate test case that Nancy approves, the updated progress bar shows how well tested the spreadsheet program is. Internally, the system uses DU adequacy (described in Section 5) to compute the level of testedness. AutoTest saves Nancy the effort of coming up with test cases by automatically generating test cases aimed at achieving 100% DU adequacy. The automatic generation of effective test suites lowers the cost of testing by reducing and directing the effort invested by the user. Moreover, the progress bar is an accurate indicator of the testedness of the spreadsheet and lets the user know when the spreadsheet has been thoroughly tested.

4 Spreadsheet Programs and Test Cases

A spreadsheet is a partial function S : A → F mapping cell addresses to formulas (and values). An element (a, f) ∈ S is called a cell. Cell addresses are taken from the set A = ℕ × ℕ, and formulas (f ∈ F) are either plain values v ∈ V, references to other cells (given by addresses a ∈ A), or operations (ψ) applied to one or more argument formulas.

f ∈ F ::= v | a | ψ(f, …, f)

Operations include binary operations, aggregations, and, in particular, a branching construct IF(f, f, f).
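This abstract syntax can be transcribed directly into code. The following sketch is our own illustration (the class names and the row/column address encoding are assumptions, not part of AutoTest):

```python
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass(frozen=True)
class Val:                  # plain value v in V
    v: Any

@dataclass(frozen=True)
class Ref:                  # reference to another cell, address a in A = N x N
    addr: Tuple[int, int]   # (row, column)

@dataclass(frozen=True)
class Op:                   # operation psi applied to argument formulas
    name: str
    args: Tuple[Any, ...]

# Example: D7's formula IF(B8 = 1, -1, D4 + D5 + D6), with illustrative
# (row, column) addresses for B8, D4, D5, and D6.
B8, D4, D5, D6 = Ref((8, 2)), Ref((4, 4)), Ref((5, 4)), Ref((6, 4))
d7 = Op("IF", (Op("=", (B8, Val(1))),
               Val(-1),
               Op("+", (D4, Op("+", (D5, D6))))))
```

The branching construct IF is just an operation with three arguments; the tree transformations of Section 5 treat it specially.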
The function σ : F → 2^A that computes for a formula the addresses of the cells it references is defined as follows:

σ(v) = ∅
σ(a) = {a}
σ(ψ(f1, …, fk)) = σ(f1) ∪ … ∪ σ(fk)

A set of addresses s ⊆ A is called a shape. We call σ(f) the shape of f. The function σ can be naturally extended to work on cells and cell addresses by σ(a, f) = σ(f) and σ(a) = σ(S(a)), that is, for a given spreadsheet S, σ(a) gives the shape of the formula stored in cell a. Related is the function σ_S : F → 2^A that transitively chases references to determine all the input cells for a formula. The definition of σ_S is identical to that of σ, except for the following case:

σ_S(a) = {a} if S(a) ∈ V, and σ_S(a) = σ_S(S(a)) otherwise

Like σ, σ_S can be extended to work on cells and addresses.
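Both functions can be transcribed almost literally. In this sketch (our own, not AutoTest's code), formulas are encoded as tuples and addresses as strings for readability:

```python
# Formulas encoded as ("val", v) | ("ref", a) | ("op", name, f1, ..., fk).

def sigma(f):
    """sigma(v) = {}, sigma(a) = {a}, sigma(psi(f1..fk)) = union of the sigma(fi)."""
    if f[0] == "val":
        return set()
    if f[0] == "ref":
        return {f[1]}
    return set().union(*(sigma(g) for g in f[2:]))

def sigma_S(S, a):
    """Transitively chase references; a value cell is its own input cell."""
    if S[a][0] == "val":
        return {a}
    return set().union(*(sigma_S(S, b) for b in sigma(S[a])))

# A fragment of the office-supplies spreadsheet (values are illustrative).
S = {
    "B4": ("val", 10), "C4": ("val", 2),
    "D4": ("op", "*", ("ref", "B4"), ("ref", "C4")),
    "D7": ("op", "+", ("ref", "D4"), ("val", 5)),
}
```

Here sigma(S["D4"]) yields {"B4", "C4"}, and sigma_S(S, "D7") chases through D4 to the same two input cells.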
The cells addressed by σ_S(c) are also called c's input cells. To apply the view of programs and their inputs to spreadsheets, we observe that each spreadsheet contains a program together with the corresponding input. More precisely, the program part of a spreadsheet S is given by all of its cells that contain (non-trivial) formulas, that is, P_S = {(a, f) ∈ S | σ(f) ≠ ∅}. This definition ignores locally evaluable formulas, such as a sum of two constants, and does not regard them as part of the spreadsheet program, because they always evaluate to the same result and can be effectively replaced by a constant. Correspondingly, the input of a spreadsheet S is given by all of its cells containing values (and locally evaluable formulas), that is, I_S = {(a, f) ∈ S | σ(f) = ∅}. Note that with these two definitions we have S = P_S ∪ I_S and P_S ∩ I_S = ∅. Without loss of generality we can assume from now on that all input cells are of the form (a, v). Based on these definitions we can now say more precisely what test cases are in the context of spreadsheets. A test case for a cell (a, f) is a pair (I, v) consisting of values for all the input cells transitively referenced by f, given by I, and the expected output for f, given by v ∈ V. Since the input values are tied to addresses, the input part of a test case is itself essentially a spreadsheet, that is, I : A → V. However, not any I will do: we require that the domain of I matches f's shape, that is, dom(I) = σ_S(f). In other words, the input values are given by cells whose addresses are exactly the input cells contributing to f. Running a formula f on a test case means to evaluate f in the context of I. The evaluation of a formula f in the context of a spreadsheet (that is, cell definitions) S is denoted by [[f]]_S. Now we can define that a formula f passes a test t = (I, v) if [[f]]_I = v. Otherwise, f fails the test t. Likewise, we say that a cell (a, f) passes (fails) t if f passes (fails) t.
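The pass/fail check can be sketched as a small interpreter. The tuple encoding of formulas, the operator table, and the direct lookup of references in I are our simplifying assumptions (in particular, references here point straight at input cells, so dom(I) covers every reference):

```python
OPS = {
    "+": lambda x, y: x + y,
    "*": lambda x, y: x * y,
    "=": lambda x, y: x == y,
    ">": lambda x, y: x > y,
}

def evaluate(f, I):
    """[[f]]_I: evaluate formula f in the context of the input spreadsheet I."""
    if f[0] == "val":
        return f[1]
    if f[0] == "ref":
        return I[f[1]]                       # inputs are (a, v) cells
    if f[1] == "IF":                         # IF(c, f1, f2): evaluate one branch
        c, f1, f2 = f[2:]
        return evaluate(f1 if evaluate(c, I) else f2, I)
    return OPS[f[1]](*(evaluate(g, I) for g in f[2:]))

def passes(f, t):
    I, v = t
    return evaluate(f, I) == v               # f passes t = (I, v) iff [[f]]_I = v

# Illustrative formula: IF(B4 > 0, B4 * C4, 0)
f = ("op", "IF", ("op", ">", ("ref", "B4"), ("val", 0)),
     ("op", "*", ("ref", "B4"), ("ref", "C4")),
     ("val", 0))
```

With this formula, the test case ({B4: 3, C4: 2}, 6) passes, while ({B4: 3, C4: 2}, 7) fails.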
5 Definition-Use Coverage

The idea behind the DU coverage criterion is to test, for each definition of a variable (or cell in the case of spreadsheets), all of its uses. In other words, test all DU pairs. In a spreadsheet every cell defines a value. In fact, cells with conditionals generally give rise to two or more definitions, contained in the different branches. Likewise, one cell may contain different uses of a cell definition in different branches of conditionals. Therefore, definitions and uses cannot simply be represented by cell addresses. Instead, we generally need paths to subformulas to identify definitions and uses.

5.1 Expression and Constraint Trees

To formalize the notions of definitions and uses we employ an abstract tree representation of formulas that stores conditions of conditionals in internal nodes and conditional-free formulas in leaves. We can construct such a representation through two simple transformations of formulas. First, we lift all conditionals out of subformulas (that are not conditionals) so that the formula has the form of a nested conditional. This transformation can be achieved by repeatedly applying the following semantics-preserving rewrite rule to conditionals that are subformulas:

ψ(…, IF(c, f1, f2), …) → IF(c, ψ(…, f1, …), ψ(…, f2, …))

Note that the rewrite rule is only applied when ψ ≠ IF. In a second step, we transform a lifted formula into its corresponding expression tree (see also Figure 3(a)) using the function T, which creates for each conditional an internal node labeled with the condition and two subtrees for the two branches. The edges to the branches are labeled T and F to indicate which subtree corresponds to the then and else branch of the conditional:

T(IF(c, f1, f2)) = internal node c, with T edge to T(f1) and F edge to T(f2)
T(f) = f, otherwise

The second case leaves all non-conditional formulas unchanged.
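Both transformations can be sketched as follows. The tuple-encoded formulas are our own illustration, and for simplicity we assume conditions contain no conditionals themselves:

```python
def lift(f):
    """Repeatedly apply psi(..., IF(c,f1,f2), ...) -> IF(c, psi(..,f1,..), psi(..,f2,..)),
    only when psi itself is not IF, yielding a nested conditional."""
    if f[0] != "op":
        return f
    name, args = f[1], [lift(g) for g in f[2:]]
    if name != "IF":
        for i, g in enumerate(args):
            if g[0] == "op" and g[1] == "IF":    # pull this conditional outward
                c, f1, f2 = g[2], g[3], g[4]
                left  = ("op", name, *args[:i], f1, *args[i + 1:])
                right = ("op", name, *args[:i], f2, *args[i + 1:])
                return ("op", "IF", c, lift(left), lift(right))
    return ("op", name, *args)

def T(f):
    """Expression tree: internal nodes ("node", c, then, else), leaves ("leaf", f)."""
    if f[0] == "op" and f[1] == "IF":
        c, f1, f2 = f[2:]
        return ("node", c, T(f1), T(f2))
    return ("leaf", f)

# A1 + IF(B1 > 0, 1, 2)  lifts to  IF(B1 > 0, A1 + 1, A1 + 2)
f = ("op", "+", ("ref", "A1"),
     ("op", "IF", ("op", ">", ("ref", "B1"), ("val", 0)), ("val", 1), ("val", 2)))
tree = T(lift(f))
```

The resulting tree has the condition B1 > 0 at its root and the two conditional-free formulas A1 + 1 and A1 + 2 in its leaves.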
Each condition c stored in an internal node of an expression tree can be transformed into two constraints γ^T and γ^F that guarantee that c evaluates to true or false, respectively. These constraints will replace the edge labels T and F in the expression tree. Constraints have the following form:

γ ::= f ω v | γ ∧ γ | γ ∨ γ
ω ::= < | ≤ | = | ≠ | ≥ | >

For example, a condition B3 > 4 will be transformed into the two constraints B3 > 4 and B3 ≤ 4, which will replace the labels T and F, respectively, in the expression tree. A traversal of the whole expression tree that transforms conditions in internal nodes into constraints attached to the outgoing edges produces a factored constraint tree, shown in Figure 3(b). The constraints along each path from the root to a condition c in an internal node or an expression e in a leaf characterize the conditions under which the original formula would evaluate c and e, respectively. In a final traversal (in an implementation, both traversals can be combined into one) we can collect all the constraints along each path and attach the resulting conditions to the leaf expressions, which results in a constraint tree, shown in Figure 3(c). For a condition or expression to be executed, the constraint attached to it has to be satisfied. For example, if a leaf expression e lies below two conditions via their T edges, both constraints γ1^T and γ2^T have to be satisfied for e to be executed, which is why such a leaf is annotated with the conjunction γ1^T ∧ γ2^T in Figure 3(c).

5.2 DU Pairs

A DU pair is given by a definition and a use, which are both essentially represented by paths. To give a precise definition, we observe that while only expressions in leaves can be definitions, conditions in internal nodes as well as leaf expressions can be uses. Moreover, since a (sub)formula
Figure 3. Stages of test-case generation: (a) expression tree, (b) factored constraint tree, (c) constraint tree

defining a cell may refer to other cells defined by conditionals, a single path is generally not sufficient to describe a definition. Instead, a definition is given by a set of paths. These observations lead to the following definitions. Let C(a) be the constraint tree obtained from the expression tree T(S(a)) as described above. We define the uses of a as the set U_S(a), which contains the nodes of all trees C(a′) for which a ∈ σ(S(a′)). Correspondingly, the immediate definitions of a are given by the leaves of C(a). We refer to this set as D⁰_S(a). To obtain the complete set of definitions for a we have to combine each expression e in D⁰_S(a) with all definitions for any cell referenced by e, which leads to the following inductive definition for D_S(a), the set of definitions of a. D_S(a) is initially defined to be {{γ:e} | γ:e ∈ D⁰_S(a)}. Then we repeatedly replace a set of paths P = {γ1:e1, …, γk:ek} ∈ D_S(a) that references a cell a′ (that is, a′ ∈ σ(ei) for some i) by the sets P ∪ {γ:e}, one for each γ:e ∈ D⁰_S(a′), until no such a′ exists anymore. Now the set of all DU pairs for address a is given by {(a, d, u) | d ∈ D_S(a), u ∈ U_S(a)}. For each DU pair we can try to generate a test by solving the constraints stored in the paths (as described in the next section). Whenever the constraint solving fails, a test cannot be generated and an infeasible DU pair has been identified. A test suite that consists of a test for every feasible DU pair is said to be DU-pair adequate.

6 Generating DU-Adequate Test Cases

The generation of a test case for a DU pair (a, {γ1:e1, …, γk:ek}, γk+1:ek+1) requires solving the constraint γ1 ∧ … ∧ γk ∧ γk+1. In a first step, we group the constraints by the addresses involved so that we obtain a constraint of the form γ^a1 ∧ γ^a2 ∧ … ∧ γ^an, where each γ^ai is of the form γ^ai_1 ∨ γ^ai_2 ∨ … ∨ γ^ai_ki, and each γ^ai_j is a conjunction of value comparisons of the form ai ω v. That is, for each address ai we obtain an alternative of constraints, each of which determines through a conjunction of value comparisons possible input values for the cell at address ai. Note that by construction each γ^ai_j contains at most one address, namely ai. The attempt at solving each constraint γ^ai can have one out of two possible outcomes.

1. The constraint solver might succeed, in which case the solution is, for each address, a range of values that satisfy the constraints. For each address, any value from the range can be used as a test input.

2. The constraint solver might fail. This situation arises when for at least one ai none of the alternative constraints γ^ai_j is solvable. In this case it is not possible to generate any test case that would be able to execute the path. Therefore, failure of the constraint solving process indicates that the particular path for a definition or use cannot be exercised.

If all constraints have been successfully solved, a test case can be created by taking values from the computed ranges for each address and by evaluating the formula to be tested (of which ek+1 is a subformula) using these values (see Section 4). A test case has the following form:

({(a1, v1), …, (an, vn)}, v)

If the constraint solver fails while trying to solve the constraints for a DU pair, that DU association cannot be exercised given the constraints. In other words, unsolvable constraints on input data cells allow us to automatically detect infeasible DU pairs in the spreadsheet program. In general, it might not be possible to execute all of the DU associations in spreadsheets. The problem of identifying infeasible DU pairs in programs written in general-purpose programming languages is undecidable [5]. Detection of infeasible DU pairs is easier in the case of spreadsheet languages like Excel since they do not have loop constructs or recursion.
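The constraint collection and a toy solver can be sketched together. This is a deliberate simplification of what the paper describes: conditions are single integer comparisons of the form (cell, ω, value), the solver ignores ≠, and inter-cell comparisons such as D7 > B are out of scope:

```python
NEG = {"<": ">=", "<=": ">", "=": "!=", "!=": "=", ">=": "<", ">": "<="}

def constraint_leaves(tree, path=()):
    """Walk ("node", cond, then, else) / ("leaf", e) trees; along the F edge
    the condition is negated.  Yields (constraints-along-path, leaf-expr)."""
    if tree[0] == "leaf":
        yield list(path), tree[1]
        return
    _, (cell, w, v), then_t, else_t = tree
    yield from constraint_leaves(then_t, path + ((cell, w, v),))
    yield from constraint_leaves(else_t, path + ((cell, NEG[w], v),))

def solve(constraints):
    """Intersect integer intervals per cell; return a witness assignment,
    or None if some interval is empty (an infeasible path)."""
    bounds = {}
    for cell, w, v in constraints:
        lo, hi = bounds.get(cell, (-10**9, 10**9))   # finite stand-ins for +-inf
        if w == "<":    hi = min(hi, v - 1)
        elif w == "<=": hi = min(hi, v)
        elif w == ">":  lo = max(lo, v + 1)
        elif w == ">=": lo = max(lo, v)
        elif w == "=":  lo, hi = max(lo, v), min(hi, v)
        bounds[cell] = (lo, hi)                      # "!=" omitted for brevity
    if any(lo > hi for lo, hi in bounds.values()):
        return None
    return {cell: lo for cell, (lo, _) in bounds.items()}

# Constraint tree for IF(B3 > 4, e1, IF(B5 = 0, e2, e3))
tree = ("node", ("B3", ">", 4),
        ("leaf", "e1"),
        ("node", ("B5", "=", 0), ("leaf", "e2"), ("leaf", "e3")))
paths = list(constraint_leaves(tree))
```

Solving the constraints of the first path yields a witness such as B3 = 5, while a contradictory conjunction like B3 > 4 ∧ B3 < 3 returns None, flagging the corresponding DU pair as infeasible.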
To illustrate how our algorithm works, we revisit the scenario described in Section 3 and explain how AutoTest generates test cases for the spreadsheet shown in Figure 1. In
particular, we show how test cases are generated for the formula in B9. To test the formula in B9 we need to test all definitions of D7 and B and their uses in the formula in B9:

IF(D7 = -1, "Error", IF(D7 > B, "Over Budget", "Budget OK"))

The formulas that affect the definitions of D7 are:

D7 = IF(B8 = 1, -1, D4 + D5 + D6)
B8 = IF(OR(B4 < 0, B5 < 0, B6 < 0), 1, 0)

The constraint tree for B9 shows the uses of D7 and B in B9: the root condition D7 = -1 has the leaf "Error" on its T edge and, on its F edge, the condition D7 > B with the leaves "Over Budget" and "Budget OK". Only the constraint γ1^F is needed in the following. The use of D7 in the condition D7 = -1 is always executed, so we do not need to generate any constraints for it. However, to reach the use of D7 and B in the condition D7 > B we need to satisfy the constraint γ1^F. (Since B is an input cell, satisfying the constraint γ1^F fully tests all DU associations of B in B9.) The constraint tree for the formula in B8 has two leaves, 1 and 0, representing two definitions for which we have the constraints:

γ3^T ≡ B4 < 0 ∨ B5 < 0 ∨ B6 < 0
γ3^F ≡ B4 ≥ 0 ∧ B5 ≥ 0 ∧ B6 ≥ 0

Since B4, B5, and B6 are input cells, the definitions in B8 are determined by the two constraints inferred for the formula in the cell alone. Cell D7 also has two definitions, since the expression tree of the formula in the cell has two leaves. In this case we have the following constraints for the definitions:

γ4^T ≡ B8 = 1
γ4^F ≡ B8 ≠ 1

Since the formulas in D4, D5, and D6 do not contain conditionals, no constraints are generated for their definitions. Combining the definitions of B8 with those of D7, we obtain the following four definitions for D7:

{γ4^T : -1, γ3^T : 1}, {γ4^T : -1, γ3^F : 0},
{γ4^F : D4+D5+D6, γ3^T : 1}, {γ4^F : D4+D5+D6, γ3^F : 0}

The four definitions combined with the two uses in B9 give rise to 8 DU pairs. As mentioned earlier, all definitions of D7 already hit the use in the condition D7 = -1.
Therefore, for generating test cases for the four DU pairs resulting from all the definitions of D7 and this use, we can solve the sets of constraints shown above. The sets {γ4^T : -1, γ3^F : 0} and {γ4^F : D4+D5+D6, γ3^T : 1} cannot be satisfied: satisfying γ3^T leads to 1 in B8, which, in turn, results in γ4^T being satisfied, while satisfying γ3^F leads to 0 in B8, which results in γ4^F being satisfied. We are therefore left with {γ4^T : -1, γ3^T : 1} and {γ4^F : D4+D5+D6, γ3^F : 0}. The first set of constraints can be satisfied by setting the value in B4 to a negative number. (It can also be satisfied by setting the value in B5 or B6 to a negative number; since these three test cases exercise the same DU pair, the system only generates the first one.) The second set of constraints is already satisfied by the values in the spreadsheet. To test the use of D7 in the condition D7 > B, we combine the definitions of D7 with this use to get the following sets of constraints:

{γ4^T : -1, γ3^T : 1, γ1^F : D7 > B},
{γ4^T : -1, γ3^F : 0, γ1^F : D7 > B},
{γ4^F : D4+D5+D6, γ3^T : 1, γ1^F : D7 > B},
{γ4^F : D4+D5+D6, γ3^F : 0, γ1^F : D7 > B}

Two of these sets of constraints cannot be solved, for the same reason discussed above. Moreover, satisfying γ4^T : -1 and γ3^T : 1 leads to the output -1 in D7, which will not satisfy γ1^F : D7 > B. Therefore, the only set of constraints that can be solved is {γ4^F : D4+D5+D6, γ3^F : 0, γ1^F : D7 > B}, and this is already satisfied by the current values in the spreadsheet. Note that, formally, the actual test case is the set of address-value pairs that satisfy the constraints for a DU pair. Only the generated values are shown in the AutoTest interface; the values for the other input cells that affect the formula output and are already in the spreadsheet are implicitly part of the test case and not shown in the interface.

7 Evaluation

For AutoTest to be useful as a tool for testing spreadsheets, it has to be both effective and efficient. Effectiveness is judged with respect to a test-adequacy criterion, DU adequacy in this case. A more effective tool in this respect is one that is capable of generating test cases that exercise more of the feasible DU pairs. Efficiency is measured in terms of the time taken by the system to generate the test cases that meet the adequacy criterion. This factor is especially important in the case of spreadsheet systems with their support for immediate visual feedback. Since we are comparing AutoTest against HMT, we follow the evaluation of HMT described in [3] and take into consideration two dependent variables:

1. Ultimate effectiveness, defined as the percentage of the total number of feasible DU associations that are exercised by the generated test cases.
2. Response time for test generation.

7.1 Effectiveness and Efficiency

Since HMT uses randomization, depending on the technique used, the measures of the dependent variables may change from one run to another. Therefore, the ultimate effectiveness score for HMT is averaged over 35 runs, and the median response time over 35 runs is presented in [3]. Our system, on the other hand, always produces the same output given the same starting spreadsheet configuration. We reproduce the results from [3] in Tables 1 and 2 and compare them with the numbers obtained by running AutoTest on the same spreadsheets. (The first spreadsheet, Budget, is the one we used in the scenario in Section 3.) AutoTest is also efficient in the sense that it only generates one test case per feasible DU pair for the formula that is being tested. Therefore it generates the minimum number of test cases needed to execute all feasible DU pairs. Such optimal test suites save the user time and effort during generation (since the user has to approve a candidate test case before it can be added to the suite of test cases for the formula), test runs, and maintenance of test suites. The ultimate effectiveness scores reported in Table 1 and the response times reported in Table 2 for AutoTest are based on the optimal test suites that are generated by the system. The sizes of the test suites generated by HMT for achieving the levels of coverage shown in Table 1 are not available.

7.2 Discussion

As can be seen from the ultimate effectiveness scores shown in Table 1, AutoTest generates test cases that cause the execution of all feasible DU associations for the spreadsheets used in the evaluation. HMT running the Chaining algorithm has comparable ultimate effectiveness scores. The response times for test generation for each of the spreadsheets used in the evaluation are shown in Table 2. Note that the figures show the time taken to generate a set of test cases that come as close as possible to a 100% ultimate effectiveness score. From the numbers in Table 2, we see that AutoTest far outperforms the algorithms used by HMT for the generation of test cases. We can conclude, therefore, that our AutoTest system, whose approach to generating test cases is based on the derivation, propagation, and solving of constraints, is efficient and accurate. Moreover, AutoTest can also detect infeasible DU associations automatically.

8 Conclusions and Future Work

We have presented a system, AutoTest, that supports users of Microsoft Excel in the systematic testing of their spreadsheets. AutoTest automatically generates a minimal set of tests for each formula cell that guarantees DU-adequate test coverage. The system runs efficiently and also produces, as a by-product, information about infeasible DU associations in the spreadsheet. The test generation approach, which is based on constraint generation, propagation, and solving, is conceptually simple, which is important for at least two reasons. First, it allows its reuse in other systems. For example, we believe that it would be straightforward to integrate this new technique into the WYSIWYT tool. Second, it facilitates extensions to be investigated in the future. For example, we plan to extend AutoTest to allow whole regions to be tested by the test cases for a single region-representative formula. We can base this extension effectively on the region inference mechanisms reported in [3].

Acknowledgments

We thank Marc Fisher for providing us access to the spreadsheets and data from the studies described in [3].

References

[1] R. Abraham and M. Erwig. Header and Unit Inference for Spreadsheets Through Spatial Analyses. In IEEE Int. Symp. on Visual Languages and Human-Centric Computing, pages 65 7, 2004.
[2] R. Abraham and M. Erwig. Goal-Directed Debugging of Spreadsheets. In IEEE Int. Symp. on Visual Languages and Human-Centric Computing, pages 37–44, 2005.
[3] R. Abraham and M. Erwig. Inferring Templates from Spreadsheets. In 28th IEEE Int. Conf. on Software Engineering, pages 8 9, 2006.
[4] R. Abraham, M. Erwig, S. Kollmansberger, and E. Seifert. Visual Specifications of Correct Spreadsheets. In IEEE Int. Symp. on Visual Languages and Human-Centric Computing, pages 89–96, 2005.
[5] Y. Ahmad, T. Antoniu, S. Goldwater, and S. Krishnamurthi. A Type System for Statically Detecting Spreadsheet Errors. In 18th IEEE Int. Conf. on Automated Software Engineering, pages 74–83, 2003.
[6] T. Antoniu, P. A. Steckler, S. Krishnamurthi, E. Neuwirth, and M. Felleisen. Validating the Unit Correctness of Spreadsheet Programs. In 26th IEEE Int. Conf. on Software Engineering, 2004.
[7] M. M. Burnett, J. Atwood, R. Djang, H. Gottfried, J. Reichwein, and S. Yang. Forms/3: A First-Order Visual Language to Explore the Boundaries of the Spreadsheet Paradigm. Journal of Functional Programming, 11(2):155–206, March 2001.
[8] M. M. Burnett, C. Cook, J. Summet, G. Rothermel, and C. Wallace. End-User Software Engineering with Assertions. In 25th IEEE Int. Conf. on Software Engineering, pages 93 03, 2003.
[9] R. A. DeMillo and A. J. Offutt. Constraint-Based Automatic Test Data Generation. IEEE Transactions on Software Engineering, 17(9):900–910, 1991.
[10] S. Ditlea. Spreadsheets Can be Hazardous to Your Health. Personal Computing, 60–69, 1987.
[11] G. Engels and M. Erwig. ClassSheets: Automatic Generation of Spreadsheet Applications from Object-Oriented Specifications. In 20th IEEE/ACM Int. Conf. on Automated Software Engineering, pages 4 33, 2005.
Spreadsheet | HMT Random (without ranges) | HMT Random (with ranges) | HMT Chaining (without ranges) | HMT Chaining (with ranges) | AutoTest
Budget | 99.6% | 100.0% | 100.0% | 100% | 100%
Digits | 59.4% | 97.9% | 100.0% | 100% | 100%
FitMachine | 50.5% | 50.5% | 97.9% | 97.9% | 100%
Grades | 67.% | 99.8% | 99.7% | 99.9% | 100%
MBTI | 5.6% | 100.0% | 99.9% | 99.6% | 100%
MicroGen | 7.4% | 99.% | 100.0% | 100% | 100%
NetPay | 40.0% | 100.0% | 100.0% | 100% | 100%
NewClock | 57.% | 100.0% | 99.0% | 99.4% | 100%
RandomJury | 78.8% | 83.% | 94.3% | 9.7% | 100%
Solution | 57.7% | 78.8% | 100.0% | 100% | 100%

Table 1. Ultimate effectiveness scores of the different techniques per spreadsheet.

Spreadsheet | HMT Random (without ranges) | HMT Random (with ranges) | HMT Chaining (without ranges) | HMT Chaining (with ranges) | AutoTest
Budget | – | – | – | – | < 0.01
Digits | – | – | – | – | < 0.01
FitMachine | – | – | – | – | < 0.01
Grades | – | – | – | – | –
MBTI | – | – | – | – | –
MicroGen | – | – | – | – | < 0.01
NetPay | – | – | – | – | < 0.01
NewClock | – | – | – | – | < 0.01
RandomJury | – | – | – | – | –
Solution | – | – | – | – | < 0.01

Table 2. Response times (in seconds) of the different techniques per spreadsheet.

[12] M. Erwig, R. Abraham, I. Cooperstein, and S. Kollmansberger. Gencel: A Program Generator for Correct Spreadsheets. Journal of Functional Programming, 16(3):293–325, May 2006.
[13] M. Fisher, G. Rothermel, D. Brown, M. Cao, C. Cook, and B. Burnett. Integrating Automated Test Generation into the WYSIWYT Spreadsheet Testing Methodology. ACM Trans. on Software Engineering and Methodology, 2006. To appear.
[14] K. Godfrey. Computing Error at Fidelity's Magellan Fund. In Forum on Risks to the Public in Computers and Related Systems, January 1995.
[15] A. J. Offutt. An Integrated Automatic Test Data Generation System. Journal of Systems Integration, 1(3):391–409, 1991.
[16] A. J. Offutt, J. Pan, K. Tewary, and T. Zhang. An Experimental Evaluation of Data Flow and Mutation Testing. Software Practice and Experience, 26(2):165–176, 1996.
[17] R. R. Panko. Applying Code Inspection to Spreadsheet Testing. Journal of Management Information Systems, 16(2):159–176, 1999.
[18] R. R. Panko. Spreadsheet Errors: What We Know. What We Think We Can Do. In Symp. of the European Spreadsheet Risks Interest Group (EuSpRIG), 2000.
[19] S. Prabhakararao, C. 
Cook, J. Ruthruff, E. Creswick, M. Main, M. Durham, and M. Burnett. Strategies and Behaviors of End-User Programmers with Interactive Fault Localization. In IEEE Int. Symp. on Human-Centric Computing Languages and Environments, pages 203–210, 2003.
[20] K. Rajalingham, D. Chadwick, B. Knight, and D. Edwards. Quality Control in Spreadsheets: A Software Engineering-Based Approach to Spreadsheet Development. In 33rd Hawaii Int. Conf. on System Sciences, 2000.
[21] G. Rothermel, M. M. Burnett, L. Li, C. DuPuis, and A. Sheretov. A Methodology for Testing Spreadsheets. ACM Transactions on Software Engineering and Methodology, 10(1):110–147, 2001.
[22] C. Scaffidi, M. Shaw, and B. Myers. Estimating the Numbers of End Users and End User Programmers. In IEEE Symp. on Visual Languages and Human-Centric Computing, pages 207–214, 2005.
[23] A. Scott. Shurgard Stock Dives After Auditor Quits Over Company's Accounting. The Seattle Times, November 2003.
[24] R. E. Smith. University of Toledo loses $2.4M in projected revenue. Toledo Blade, May 2004.
[25] E. J. Weyuker. More Experience with Data Flow Testing. IEEE Trans. on Software Engineering, 19(9):912–919, 1993.
[26] E. J. Weyuker, S. N. Weiss, and D. Hamlet. Comparison of Program Testing Strategies. In Fourth Symposium on Software Testing, Analysis and Verification, pages 54–64, 1991.
