Comparison of Unit-Level Automated Test Generation Tools


Shuang Wang and Jeff Offutt
Software Engineering, George Mason University, Fairfax, VA 22030, USA

Abstract

Data from projects worldwide show that many software projects fail and that most are completed late or over budget. Unit testing is a simple but effective technique to improve software in terms of quality, flexibility, and time-to-market. A key idea of unit testing is that each piece of code needs its own tests, and that the best person to design those tests is the developer who wrote the software. However, generating tests for each unit by hand is very expensive, possibly prohibitively so. Automatic test data generation is essential to support unit testing, and as unit testing receives more attention, developers are using automated unit test data generation tools more often. However, developers have very little information about which tools are effective. This experiment compared three well-known, publicly accessible unit test data generation tools: JCrasher, TestGen4J, and JUB. We applied them to Java classes and evaluated them based on their mutation scores. As a comparison, we created two additional sets of tests for each class: one containing random values, and one containing values that satisfy edge coverage. Results showed that the automatic test data generation tools produced tests with almost the same mutation scores as the random tests.

1 Introduction

An important goal of unit testing is to verify that each unit of software works properly. Unit testing allows many problems to be found early in software development. A comprehensive unit test suite that runs together with daily builds is essential to a successful software project [20]. As the computing field uses more agile processes, relies more on test driven development, and has higher reliability requirements for software, unit testing will continue to increase in importance.

However, some developers still do not do much unit testing. One possible reason is that they do not think the time and effort will be worthwhile [2]. Another possible reason is that unit tests must be maintained, and maintenance for unit tests is often not budgeted. A third possibility is that developers may not know how to design and implement high quality unit tests; they are certainly not taught this crucial knowledge in most undergraduate computer science programs.

Using automated unit test tools instead of manual testing can help with all three problems. Automated unit test tools can reduce the time and effort needed to design and implement unit tests, they can make it easier to maintain tests as the program changes, and they can encapsulate knowledge of how to design and implement high quality tests, so that developers do not need to know as much. But an important question developers must answer is: which tool should I use?

This empirical study looks at the most technically challenging part of unit testing: test data generation. We selected tools to evaluate empirically based on three factors: (1) the tool must automatically generate test values with little or no input from the tester, (2) the tool must test Java classes, and (3) the tool must be free and readily available (for example, through the web). We selected three well-known, publicly accessible automated tools (Section 2.1). The first is JCrasher [3], a random testing tool that tries to make the class under test crash. The second is TestGen4J [11], whose primary focus is boundary value testing of the arguments passed to methods. The third is JUB (JUnit test case Builder) [19], a framework based on the Builder pattern [7].
We used these tools to automatically generate tests for a collection of Java classes (Section 2.3). As a control, our second step was to manually generate two additional sets of tests: a set of purely random tests for each class, as a minimal-effort comparison, and a set of tests that satisfies edge coverage on the control flow graph. Third, we seeded faults using the mutation analysis tool mujava [9, 10] (Section 2.4). MuJava is an automated class mutation system that generates mutants for Java classes and evaluates test sets by counting the number of mutants they kill.

Finally, we ran the tests under mujava (Section 2.4) and compared their mutation scores (the percentage of mutants killed). Results are given in Section 3 and discussed in Section 4. Related work is presented in Section 5, and Section 6 concludes the paper.

2 Experimental Design

This section describes the design of the experiment. First, each unit test data generation tool is described, then the process used to manually generate additional tests is presented. Next, the Java classes used in the study are discussed and the mujava mutation testing tool is presented. The process used in conducting the experiment is then presented, followed by possible threats to validity.

2.1 Subject Unit Testing Tools

Table 1 summarizes the three tools examined in this study. Each tool is described in detail below.

Table 1. Automated Unit Testing Tools

    Name        Version        Inputs       Interface
    JCrasher    (2004)         Source File  Eclipse Plug-in
    TestGen4J   alpha (2005)   Jar File     Command Line (Linux)
    JUB         (2002)         Source File  Eclipse Plug-in

JCrasher [3] is an automatic robustness testing tool for Java classes. JCrasher examines the type information of the methods in a Java class and constructs code fragments that create instances of different types in order to test the behavior of the public methods with random data. JCrasher explicitly attempts to detect bugs by causing the class under test to crash, that is, to throw an undeclared runtime exception. Although limited by the randomness of the input values, this approach has the advantage of being completely automatic: no inputs are required from the developer.

TestGen4J [11] automatically generates JUnit test cases from Java class files. Its primary focus is to perform boundary value testing of the arguments passed to methods. It uses rules, written in a user-configurable XML file, that define boundary conditions for the data types. The test code is separated from the test data with the help of JTestCase.

JUB (JUnit test case Builder) [19] is a JUnit test case generator framework accompanied by a number of IDE-specific extensions. These extensions (tools, plug-ins, etc.) are invoked from within the IDE and must store the generated test case code inside the source code repository administered by the IDE.

2.2 Additional Test Sets

As a control comparison, we generated two additional sets of tests for each class by hand, with some limited tool support. Testing with random values is widely considered the weakest-effort testing strategy, and it seems natural to expect a unit test data generation tool to at least do better than random value generation. We wrote a special-purpose tool that generated random tests in two steps: for each test, the tool arbitrarily selected a method from the class under test (the number of methods in each class is given in Table 2), then randomly generated a value for each parameter of that method. The tool did not parse the classes; the methods and parameters were hard-coded into tables in the tool. We created the same number of random tests for each subject class as the tool from Section 2.1 that created the most tests for that class. For all the subject classes, JCrasher generated the most tests, so the study used as many random tests as JCrasher tests.
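To make this two-step procedure concrete, here is a minimal sketch of such a random test generator. The original tool is not published, so the class name, method table, and suite size below are hypothetical; only the two steps themselves (pick a method at random, then generate a random value for each parameter) come from the description above.

    import java.util.Random;

    // Hypothetical sketch of the special-purpose random test generator.
    public class RandomTestSketch {
        private static final Random RNG = new Random();

        // Hard-coded method table for one subject class; the real tool did not
        // parse the classes either, and these names and arities are made up.
        private static final String[] METHODS = { "push", "pop", "top" };
        private static final int[] ARITY = { 1, 0, 0 };

        public static void main(String[] args) {
            int numTests = 50;  // matched to the size of the largest tool-generated suite
            for (int t = 0; t < numTests; t++) {
                int m = RNG.nextInt(METHODS.length);      // step 1: select a method arbitrarily
                StringBuilder call = new StringBuilder(METHODS[m]).append('(');
                for (int p = 0; p < ARITY[m]; p++) {      // step 2: a random value per parameter
                    call.append(p > 0 ? ", " : "").append(RNG.nextInt());
                }
                System.out.println("test" + t + ": " + call + ")");
            }
        }
    }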
As a second control, we used a formal test criterion. Formal test criteria are widely promoted by researchers and educators, but are only spottily used in industry [8]. We chose one of the weakest and most basic criteria: edge coverage on the control flow graphs. We created control flow graphs by hand for each method in each class, then designed inputs to cover each edge in the graphs.

2.3 Java Classes Tested

Table 2 lists the Java classes used in this experiment. BoundedStack is a small, fixed-size implementation of a stack from the Eclat tool's website [14]. Inventory is taken from the MuClipse [18] project, the Eclipse plug-in version of mujava. Node is a mutable set of Strings that is a small part of a publish/subscribe system; it was a sample solution from a graduate class at George Mason University. Queue is a mutable, bounded FIFO data structure of fixed size. Recipe is also taken from the MuClipse project; it is a JavaBean class that represents a real-world recipe. Twelve is another sample solution; it tries to combine three given integers with arithmetic operators to compute exactly twelve. VendingMachine is from Ammann and Offutt's book [1]; it models a simple vending machine for chocolate candy.

Table 2. Subject Classes Used

    Name             LOC   Methods
    BoundedStack           11
    Inventory
    Node             77    9
    Queue            59    6
    Recipe
    TrashAndTakeOut  26    2
    Twelve           94    1
    VendingMachine   52    6
    Total
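To illustrate the edge coverage control on a class of this kind, consider the hypothetical bounded-queue method below; the paper does not reproduce the subject classes' source, so this code is ours. The if statement creates two outgoing edges in the method's control flow graph, and the two calls in main cover one edge each, which is all that edge coverage demands.

    // Hypothetical method in the spirit of the Queue subject class.
    public class BoundedQueue {
        private final Object[] elems;
        private int size;

        public BoundedQueue(int capacity) {
            elems = new Object[capacity];
        }

        public boolean enqueue(Object o) {
            if (size == elems.length) {  // branch node: two outgoing CFG edges
                return false;            // edge taken when the queue is full
            }
            elems[size++] = o;           // edge taken when there is room
            return true;
        }

        public static void main(String[] args) {
            BoundedQueue q = new BoundedQueue(1);
            System.out.println(q.enqueue("a"));  // covers the "room" edge: prints true
            System.out.println(q.enqueue("b"));  // covers the "full" edge: prints false
        }
    }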

2.4 MuJava

Our primary measurement of the test sets in this experiment is their ability to find faults. MuJava is used to seed faults (mutants) into the classes and to evaluate how many mutants each test set kills.

MuJava [9, 10] is a mutation system for Java classes. It automatically generates mutants for both traditional mutation testing and class-level mutation testing, and it can test individual classes as well as packages of multiple classes. MuJava creates mutants according to 24 operators, including object-oriented operators; the method-level (traditional) mutants are based on the selective operator set of Offutt et al. [13]. Tests for the class under test are supplied by the user as sequences of method calls, encapsulated in methods in separate classes. After creating mutants, mujava allows the tester to enter and run tests, and it evaluates the mutation coverage of the tests. Mutants are created and executed automatically.

2.5 Experimental Conduct

Figure 1 illustrates the experimental process. The Java classes are represented by the leftmost box, P. Each of the three automated tools (JCrasher, TestGen4J, JUB) was used to create a set of tests (Test Set JC, ...). Then we manually created the random tests, followed by the edge coverage tests. MuJava was designed and developed before JUnit and other widely used test harness tools, so it has its own syntactic requirements: each mujava test must be a public method that returns a String encapsulating the result of the test. Thus many of the tests from the three tools had to be modified to run with mujava; this modification is illustrated in Figure 2. Then mujava was used to generate mutants for each subject class and to run all five test sets against the mutants. This resulted in five mutation scores for each subject class (40 mutation scores in all). Because mujava separates the scores for the traditional mutation operators from those for the class mutation operators, we kept these scores separate as well.
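To make the mutation process concrete, the sketch below shows a traditional mutant and a mujava-style test that kills it. The class, the values, and the test are hypothetical; AORB (replacing one binary arithmetic operator with another) is one of mujava's traditional operators, listed later in Table 8, and the String-returning test method follows the mujava convention described above.

    // Hypothetical class under test.
    public class Account {
        public int balance(int deposits, int withdrawals) {
            return deposits - withdrawals;  // original statement
            // AORB mutant: return deposits + withdrawals;
        }
    }

    // A mujava-style test: a public method that returns its result as a String.
    class AccountTest {
        public String test1() {
            // On (5, 3) the original returns 2 but the AORB mutant returns 8.
            // The outputs differ, so this test kills the mutant.
            return String.valueOf(new Account().balance(5, 3));
        }
    }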
2.6 Threats to Validity

As with any study that involves specific programs or classes, there is no guarantee that the classes used are representative of the general population. As yet, nobody has developed a general theory for how to choose representative classes for empirical studies, or for how many classes are needed. This issue has a negative impact on our ability to apply statistical analysis and reduces the external validity of many software engineering empirical studies, including this one.

Another question is whether the three tools used are representative. We searched for free, publicly available tools that generate tests, and were somewhat surprised at how few we found. We had initially hoped to include an advanced tool, Agitar Test Runner, but it is no longer available to academic users.

Another possible problem with internal validity lies in the manual steps that had to be applied. Tests from the three tools had to be translated to mujava tests; these changes affected only the structure of the test methods, not the values, so it is unlikely that the translations affected the results. The random and edge coverage tests were generated by hand with some tool support; they were generated without knowledge of the mutants, so as to avoid any bias.

Figure 1. Experiment implementation. The subject classes (P) are fed to JCrasher, TestGen4J, JUB, the manual random generator, and the manual edge coverage generator, producing Test Sets JC, TG, JUB, Ran, and EC. MuJava generates mutants from P, each test set is run against the mutants, and a mutation score is computed for each test set.

    // Before: a test as generated by the tool
    public void test242() throws Throwable {
        try {
            int i1 = 1;
            int i2 = 1;
            int i3 = -1;
            Twelve.twelve(i1, i2, i3);
        } catch (Exception e) {
            dispatchException(e);
        }
    }

    // After: the same test converted to the mujava format
    public String test242() throws Throwable {
        try {
            int i1 = 1;
            int i2 = 1;
            int i3 = -1;
            return Twelve.twelve(i1, i2, i3);
        } catch (Exception e) {
            return e.toString();
        }
    }

Figure 2. Conversion to MuJava Tests

3 Results

The number of mutants for each class is shown in Table 3.

Table 3. Classes and Mutants

    Classes          Traditional  Class  Total
    BoundedStack
    Inventory
    Node             18
    Queue
    Recipe
    TrashAndTakeOut
    Twelve
    VendingMachine
    Total                         97

MuJava generates traditional (method-level) and class mutants separately: the traditional operators focus on individual statements, while the class operators focus on connections between classes, especially in inheritance hierarchies. Node looks anomalous because of its small number of mutants: it has only 18 traditional mutants over 77 lines of code. Its statements are distributed over nine, mostly very small, methods. Most of the methods are very short (four have only one statement), they use no arithmetic, shift, or logical operators, they have no assignments, and they contain only a few decision statements. Thus Node has few locations where mujava's mutation operators can be applied.

Results from running the five sets of tests under mujava are shown in Tables 4 through 7. Table 4 shows the number of tests in each test set and the mutation scores on the traditional mutants, in terms of the percentage of mutants killed. The Total row gives the sum of the tests for the subject classes and the mutation score across all subject classes. As can be seen, the random tests did better than the TestGen and JUB tests, but not as well as the JCrasher tests. The edge coverage tests, however, killed 24% more mutants than the strongest tool, with fewer tests.

Table 5 shows the same data for the class-level mutants. There is more diversity in the scores from the three tools, and JCrasher killed 12% more class-level mutants than the random tests did. However, the edge coverage tests are still far stronger, killing 21% more mutants than the strongest-performing tool (JCrasher).

Table 6 combines the scores from Tables 4 and 5 for both traditional and class-level mutants. Again, the JUB tests are the weakest; JCrasher, TestGen, and the random tests are fairly close; and the edge coverage tests kill far more mutants, 24% more than the JCrasher tests.

Table 4. Traditional Mutants Killed (number of tests / % of mutants killed)

    Classes          JCrasher   TestGen    JUB        Edge Coverage  Random
    BoundedStack
    Inventory
    Node
    Queue
    Recipe
    TrashAndTakeOut
    Twelve                      1 / 0.0%   1 / 0.0%
    VendingMachine              5 / 9.1%   6 / 9.1%
    Total

Table 5. Class Mutants Killed (number of tests / % of mutants killed)

    Classes          JCrasher   TestGen    JUB        Edge Coverage  Random
    BoundedStack
    Inventory
    Node
    Queue
    Recipe
    TrashAndTakeOut  13 / N/A   2 / N/A    2 / N/A    5 / N/A        5 / N/A
    Twelve           27 / N/A   1 / N/A    1 / N/A    12 / N/A       27 / N/A
    VendingMachine              5 / 0.0%   6 / 0.0%
    Total

(The scores for TrashAndTakeOut and Twelve are N/A because no class-level mutants were generated for those classes.)

Table 6. Total Mutants Killed (number of tests / % of mutants killed)

    Classes          JCrasher   TestGen    JUB        Edge Coverage  Random
    BoundedStack
    Inventory
    Node
    Queue
    Recipe
    TrashAndTakeOut
    Twelve                      1 / 0.0%   1 / 0.0%
    VendingMachine              5 / 8.3%   6 / 8.3%
    Total
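For reference, the two measures reported in these tables and in Table 7 can be written as follows. This is our notation for definitions the paper gives only in prose: the mutation score is the percentage of mutants a test set kills, and efficiency is the number of mutants killed per test.

    \[
    \text{score}(T) = \frac{|\text{killed}(T)|}{|\text{mutants}|} \times 100\%,
    \qquad
    \text{efficiency}(T) = \frac{|\text{killed}(T)|}{|T|}
    \]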

Table 7 summarizes the data across all subject classes for each set of tests. Because the number of tests diverged widely, we asked the question: how efficient is each test generation method? To approximate efficiency, we computed the number of mutants killed per test. Not surprisingly, the edge coverage tests were at the high end, with a score of 6.5. TestGen generated the fewest tests and came out as the second most efficient. JCrasher and the random tests were the least efficient: they generated a lot of tests without much obvious benefit, which adds a burden on the developers who must evaluate the results of each test.

Table 7. Summary Data

    Tool           Tests  % Killed (Traditional)  % Killed (Class)  % Killed (Total)  Killed / Tests
    JCrasher                                      46.4%             42.5%             3.1
    TestGen                                       37.1%             28.8%             5.2
    JUB                                           25.6%             24.4%             4.1
    Edge Coverage                                 67.0%             66.3%             6.5
    Random                                        34.0%             36.0%

Figure 3 illustrates the total percentage of mutants killed by each test set as a bar chart; the difference between the edge coverage tests and the others is remarkable.

Figure 3. Total percent of mutants killed by each test set: JCrasher 43%, TestGen 28%, JUB 24%, Edge Coverage 66%, Random 36%.

Tables 8 and 9 give a more detailed look at the scores for each mutation operator. The mutation operators are described on the mujava website [10]. No mutants were generated for five traditional operators (AODS, SOR, LOR, LOD, or ASRS) or for most of the class operators (IHI, IHD, IOP, IOR, ISI, ISD, IPC, PNC, PMD, PPD, PCI, PCC, PCD, PRV, OMR, OMD, OAN, EOA, or EOC), so they are not shown. The edge coverage tests had the highest scores for all mutation operators. None of the test sets did particularly well on the arithmetic operator mutants (the operators whose names start with the letter A).

Table 8. Mutation Scores per Traditional Mutation Operator

    Operator  Mutants  JCrasher  TestGen  JUB   Edge Coverage  Random
    AORB      56       32%       21%      30%   66%            29%
    AORS      11       46%       27%      36%   55%            27%
    AOIU      66       46%       32%      17%   79%            36%
    AOIS                         24%      22%   53%            22%
    AODU      1        100%      100%     100%  100%           100%
    ROR                          25%      17%   79%            57%
    COR       12       33%       25%      25%   58%            33%
    COD       6        33%       33%      17%   50%            33%
    COI       4        75%       75%      50%   75%            75%
    LOI                          48%      44%   80%            48%
    Total                        28%      24%   66%            36%

Table 9. Mutation Scores per Class Mutation Operator

    Operator  Mutants  JCrasher  TestGen  JUB   Edge Coverage  Random
    IOD       6        50%       50%      50%   50%            50%
    JTI       20       95%       50%      25%   100%           55%
    JTD       6        100%      100%     0%    100%           50%
    JSI       13       0%        0%       0%    23%            0%
    JSD       4        0%        0%       0%    50%            0%
    JID       2        0%        0%       0%    0%             0%
    JDC       6        83%       66%      83%   83%            67%
    EAM       28       0%        0%       0%    57%            0%
    EMM       12                 100%     100%  100%           100%
    Total     97       46%       36%      26%   69%            34%

4 Analysis and Discussion

We have anecdotal evidence from in-class exercises that, when hand-crafting tests to kill mutants, it is trivially easy to kill between 40% and 50% of the mutants. This anecdote is confirmed by these data, where random values achieved an average mutation score of 36%. It was quite distressing to find that the three tools did little better than random testing: JCrasher was slightly better (6.5% higher overall), TestGen was worse (a 7.2% lower mutation score), and JUB was even worse (an 11.6% lower mutation score).

An interesting observation from Table 6 is that the scores for VendingMachine are much lower for all sets of tests except edge coverage; the other four mutation scores are below 10%. The reason is probably the relative complexity of VendingMachine. It has several multi-clause predicates that determine most of its behavior:

    (coin != 10 && coin != 25 && coin != 100)
    (credit >= 90)
    (credit < 90 || stock.size() <= 0)
    (stock.size() >= MAX)

MuJava creates dozens of mutants on these predicates, and the mostly random values created by the three tools in this study have a small chance of killing those mutants. (A concrete illustration appears at the end of this section.)

Another interesting observation from Table 6 is that the scores for BoundedStack were the second lowest for all the test sets except edge coverage, for which it was the lowest. A distinguishing feature of that class is that only two of its eleven methods have parameters. The three testing tools depend largely on the method signature, so fewer parameters may mean weaker tests.

Another finding is that JCrasher achieved the highest mutation score among the three tools. We examined the tests generated by JCrasher and concluded that this is because JCrasher uses invalid values in its attempts to crash the class, as shown in Figure 4. JUB only generates tests that use 0 for integers and null for objects, and TestGen4J generates ordinary inputs, such as blanks and empty strings. JCrasher, of course, also created many more tests than the other two tools.

    public void test18() throws Throwable {
        try {
            String s1 = "!@$$%^&*()_+{} [] ;:/.,<>? -=";
            Node n2 = new Node();
            n2.disallow(s1);
        } catch (Exception e) {
            dispatchException(e);
        }
    }

Figure 4. JCrasher Test

The measure of efficiency in Table 7 is a bit biased with mutation, because many mutants are very easy to kill. It is quite common for the first few tests to kill a lot of mutants and for subsequent tests to kill fewer, leading to a sort of diminishing returns.
We separated the data for traditional and class mutants in Tables 4 and 5. JCrasher's tests had a slightly higher mutation score on the class mutants, while TestGen's, JUB's, and the random tests' scores were slightly lower; there was little difference in the mutation scores for the edge coverage tests. However, we are not able to draw any general conclusions from those data.

We also looked for a correlation between the number of tests for each class and the mutation score. With the JCrasher tests and the random tests, the largest number of tests (for Recipe in both cases) led to the highest mutation scores. However, the other three test sets showed no such correlation, and we see no correlation with the smallest numbers of tests. In fact, edge coverage produced its fewest tests for class Twelve (12), yet had its highest mutation score there (83.6%). Again, we can draw no general conclusions from these data.
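As a concrete illustration of the VendingMachine observation in this section, the sketch below shows one conditional operator replacement (COR) mutant of the first multi-clause predicate and the inputs needed to kill it. The enclosing class is hypothetical; only the predicate itself is quoted from the paper.

    // Hypothetical wrapper around VendingMachine's first multi-clause predicate.
    public class CoinCheck {
        public static boolean badCoin(int coin) {
            return coin != 10 && coin != 25 && coin != 100;  // original predicate
            // COR mutant: return coin != 10 || coin != 25 || coin != 100;
            // The mutant is true for every coin value, so it disagrees with the
            // original only when coin is exactly 10, 25, or 100. Randomly chosen
            // integers almost never hit those values, so the mutant survives.
        }

        public static void main(String[] args) {
            System.out.println(badCoin(7));   // true: original and mutant agree here
            System.out.println(badCoin(25));  // false, but the mutant returns true -- this input kills it
        }
    }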

5 Related Work

This research project was partially inspired by a paper by d'Amorim et al., which presented an empirical comparison of automated generation and classification techniques for object-oriented unit testing [4]. Their study compared pairs of test-generation techniques (based on random generation or symbolic execution) and test-classification techniques (based on uncaught exceptions or operational models). Specifically, they compared two tools that implement automated test generation: Eclat [15], which uses random generation, and their own tool, Symclat, which uses symbolic generation. The tools also provide test classification based on an operational model and an uncaught-exception model. The results showed that the two tools are complementary in revealing faults.

In a similar study of static analysis tools, Rutar et al. [17] compared five static analysis tools on five open source projects. Their results showed that none of the five tools strictly subsumes any of the others in fault-finding capability, so they proposed a meta-tool to combine and correlate the abilities of the five tools. Wagner et al. [21] presented a case study that applied three static fault-finding tools, as well as code review and manual testing, to several industrial projects. Their study showed that the static tools predominantly found faults different from those found by manual testing, but a subset of the faults found by reviews. They proposed a combination of these three types of techniques.

An early research tool that implemented automated unit testing was Godzilla, part of the Mothra mutation suite of tools [5]. Godzilla used symbolic evaluation to automatically generate tests to kill mutants, and a later version incorporated dynamic symbolic evaluation and a dynamic domain reduction procedure to generate tests [12]. A more recent tool that uses very similar techniques is the Daikon invariant detector [6]. It augments the kind of symbolic evaluation that Godzilla used with program invariants, an innovation that makes test generation more efficient and scalable. Daikon analyzes the values that a program computes while running and reports properties that held over the observed executions. Eclat, a model-driven random testing tool, uses Daikon to dynamically infer an operational model consisting of a set of likely program invariants [15]. Eclat requires the classes to test plus an example of their use, such as a program that uses the classes or a small initial test suite. Because Eclat's results may depend on the initial seeds, it was not directly comparable with the other tools in this study.

Another tool based on Daikon is Jov [22], which implements an operational violation approach for unit test generation and selection, a black-box approach that does not require specifications. The approach dynamically generates operational abstractions from executions of the existing unit test suite, and these operational abstractions guide test generation tools to generate tests that violate them. The approach then selects the tests that violate operational abstractions for inspection.

These tests exercise new behavior that had not yet been exercised by the existing tests. Jov integrates the use of Daikon and Parasoft Jtest [16], a commercial Java testing tool. Agitar Test Runner is a commercial test tool that was partially based on Daikon and Godzilla, but it was unfortunately not available for this study.

6 Conclusions

This paper compared three free, publicly accessible unit test tools on the basis of their fault-finding ability. Faults were seeded into Java classes with an automated mutation tool, and the tools' tests were compared with hand-generated random tests and edge coverage tests. Our finding is that these tools generate tests that are very poor at detecting faults, which can be viewed as a depressing comment on the state of practice.

As users' expectations for reliable software continue to grow, and as agile processes and test driven development continue to gain acceptance throughout the industry, unit testing is becoming increasingly important. Unfortunately, software developers have few choices among high quality test data generation tools. Whereas criteria-based testing has dominated the research community for more than two decades, industrial test data generators seldom, if ever, try to generate tests that satisfy test criteria. Tools that evaluate coverage are available, but they do not solve the hardest problem: generating the test values. This study has led us to conclude that it is past time for criteria-based test data generation to migrate into tools that developers can use with minimal knowledge of software testing theory.

These tools were compared against only one test criterion, edge coverage on control flow graphs, which is widely known in the research community to be one of the simplest, cheapest, and least effective criteria. Our anecdotal experience with manually killing mutants indicates that scores of around 40% are trivial to achieve and that 70% is fairly easy to reach with a small amount of hand analysis of the class's structure. This observation is supported by this study, in which random values reached the 40% level and the edge coverage tests reached the 70% level. However, mutation scores of 80% to 90% are often quite hard to reach with hand-generated tests. This should be possible with more stringent criteria such as prime paths, all-uses, or logic-based coverage.

We have also observed that software developers have few educational opportunities to learn unit testing skills. Despite the fact that testing consumes more than half of the software industry's resources, we are not aware of any university in the USA that requires undergraduate computer science students to take a software testing course. Very few universities do more than teach a lecture or two on testing in a general software engineering survey, and the material that is presented is typically 20 years old. This study has led us to conclude that it is past time for universities to teach more software testing to undergraduate computer science and software engineering students.

References

[1] Paul Ammann and Jeff Offutt. Introduction to Software Testing. Cambridge University Press, Cambridge, UK.
[2] AutomatedQA. TestComplete. Online, last access December.
[3] Christoph Csallner and Yannis Smaragdakis. JCrasher: An automatic robustness tester for Java. Software: Practice and Experience, 34.
[4] Marcelo d'Amorim, Carlos Pacheco, Tao Xie, Darko Marinov, and Michael D. Ernst. An empirical comparison of automated generation and classification techniques for object-oriented unit testing. In Proceedings of the 21st International Conference on Automated Software Engineering (ASE 2006), pages 59-68, Tokyo, Japan, September 2006. ACM/IEEE Computer Society Press.
[5] Richard A. DeMillo and Jeff Offutt. Constraint-based automatic test data generation. IEEE Transactions on Software Engineering, 17(9), September.
[6] Michael D. Ernst, Jake Cockrell, William G. Griswold, and David Notkin. Dynamically discovering likely program invariants to support program evolution. IEEE Transactions on Software Engineering, 27(2):99-123, February.
[7] Erich Gamma, Richard Helm, Ralph Johnson, and John M. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.
[8] Mats Grindal, Jeff Offutt, and Jonas Mellin. On the testing maturity of software producing organizations. In Testing: Academia & Industry Conference - Practice And Research Techniques (TAIC PART 2006), Windsor, UK, August 2006. IEEE Computer Society Press.
[9] Yu-Seung Ma, Jeff Offutt, and Yong-Rae Kwon. MuJava: An automated class mutation system. Software Testing, Verification, and Reliability, 15(2):97-133, June.
[10] Yu-Seung Ma, Jeff Offutt, and Yong-Rae Kwon. muJava home page. Online, offutt/mujava/, last access December.
[11] Manish Maratmu. TestGen4J. Online, last access December.
[12] Jeff Offutt, Zhenyi Jin, and Jie Pan. The dynamic domain reduction approach to test data generation. Software: Practice and Experience, 29(2), January.
[13] Jeff Offutt, Ammei Lee, Gregg Rothermel, Roland Untch, and Christian Zapf. An experimental determination of sufficient mutation operators. ACM Transactions on Software Engineering and Methodology, 5(2):99-118, April.
[14] Carlos Pacheco and Michael D. Ernst. Eclat tutorial. Online, last access December.
[15] Carlos Pacheco and Michael D. Ernst. Eclat: Automatic generation and classification of test inputs. In 19th European Conference on Object-Oriented Programming (ECOOP 2005), Glasgow, Scotland, July 2005.
[16] Parasoft. Jtest. Online, last access December.
[17] Nick Rutar, Christian B. Almazan, and Jeffrey S. Foster. A comparison of bug finding tools for Java. In Proceedings of the 15th International Symposium on Software Reliability Engineering, Saint-Malo, Bretagne, France, November. IEEE Computer Society Press.
[18] Ben Smith and Laurie Williams. Killing mutants with MuClipse. Online, last access December.

[19] Mark Tyborowski. JUB (JUnit test case Builder). Online, last access December.
[20] Sami Vaaraniemi. The benefits of automated unit testing. Online, last access December.
[21] S. Wagner, J. Jurjens, C. Koller, and P. Trischberger. Comparing bug finding tools with reviews and tests. In 17th IFIP TC6/WG 6.1 International Conference on Testing of Communicating Systems, pages 40-55, May.
[22] Tao Xie and David Notkin. Tool-assisted unit-test generation and selection based on operational abstractions. Automated Software Engineering Journal, 13(3), July.


Regression Testing Based on Comparing Fault Detection by multi criteria before prioritization and after prioritization Regression Testing Based on Comparing Fault Detection by multi criteria before prioritization and after prioritization KanwalpreetKaur #, Satwinder Singh * #Research Scholar, Dept of Computer Science and

More information

Design by Contract beyond class modelling

Design by Contract beyond class modelling Design by Contract beyond class modelling Introduction Design by Contract (DbC) or Programming by Contract is an approach to designing software. It says that designers should define precise and verifiable

More information

Comparing the Effectiveness of Penetration Testing and Static Code Analysis

Comparing the Effectiveness of Penetration Testing and Static Code Analysis Comparing the Effectiveness of Penetration Testing and Static Code Analysis Detection of SQL Injection Vulnerabilities in Web Services PRDC 2009 Nuno Antunes, nmsa@dei.uc.pt, mvieira@dei.uc.pt University

More information

Chapter 1 - Web Server Management and Cluster Topology

Chapter 1 - Web Server Management and Cluster Topology Objectives At the end of this chapter, participants will be able to understand: Web server management options provided by Network Deployment Clustered Application Servers Cluster creation and management

More information

Structural Design Patterns Used in Data Structures Implementation

Structural Design Patterns Used in Data Structures Implementation Structural Design Patterns Used in Data Structures Implementation Niculescu Virginia Department of Computer Science Babeş-Bolyai University, Cluj-Napoca email address: vniculescu@cs.ubbcluj.ro November,

More information

PERFORMANCE COMPARISON OF COMMON OBJECT REQUEST BROKER ARCHITECTURE(CORBA) VS JAVA MESSAGING SERVICE(JMS) BY TEAM SCALABLE

PERFORMANCE COMPARISON OF COMMON OBJECT REQUEST BROKER ARCHITECTURE(CORBA) VS JAVA MESSAGING SERVICE(JMS) BY TEAM SCALABLE PERFORMANCE COMPARISON OF COMMON OBJECT REQUEST BROKER ARCHITECTURE(CORBA) VS JAVA MESSAGING SERVICE(JMS) BY TEAM SCALABLE TIGRAN HAKOBYAN SUJAL PATEL VANDANA MURALI INTRODUCTION Common Object Request

More information

Introduction to Parallel Programming and MapReduce

Introduction to Parallel Programming and MapReduce Introduction to Parallel Programming and MapReduce Audience and Pre-Requisites This tutorial covers the basics of parallel programming and the MapReduce programming model. The pre-requisites are significant

More information

Automatic Test Case Generation and Test Suite Reduction for Closed-Loop Controller Software

Automatic Test Case Generation and Test Suite Reduction for Closed-Loop Controller Software Automatic Test Case Generation and Test Suite Reduction for Closed-Loop Controller Software Christian Murphy, Zoher Zoomkawalla, Koichiro Narita Dept. of Computer and Information Science University of

More information

Hypothesis testing. c 2014, Jeffrey S. Simonoff 1

Hypothesis testing. c 2014, Jeffrey S. Simonoff 1 Hypothesis testing So far, we ve talked about inference from the point of estimation. We ve tried to answer questions like What is a good estimate for a typical value? or How much variability is there

More information

Statically Checking API Protocol Conformance with Mined Multi-Object Specifications Companion Report

Statically Checking API Protocol Conformance with Mined Multi-Object Specifications Companion Report Statically Checking API Protocol Conformance with Mined Multi-Object Specifications Companion Report Michael Pradel 1, Ciera Jaspan 2, Jonathan Aldrich 2, and Thomas R. Gross 1 1 Department of Computer

More information

Optimal Binary Search Trees Meet Object Oriented Programming

Optimal Binary Search Trees Meet Object Oriented Programming Optimal Binary Search Trees Meet Object Oriented Programming Stuart Hansen and Lester I. McCann Computer Science Department University of Wisconsin Parkside Kenosha, WI 53141 {hansen,mccann}@cs.uwp.edu

More information

Data Generation Techniques for Automated Software Robustness Testing *

Data Generation Techniques for Automated Software Robustness Testing * Data Generation Techniques for Automated Software Robustness Testing * Matthew Schmid & Frank Hill Reliable Software Technologies Corporation 21515 Ridgetop Circle #250, Sterling, VA 20166 phone: (703)

More information

Test Design: Functional Testing

Test Design: Functional Testing Test Design: Functional Testing Daniel Sundmark How should test cases be designed? Outline Functional Testing Basics Systematic Functional Test Design Input-Space Based Techniques Equivalence Partitioning

More information

VHDL Test Bench Tutorial

VHDL Test Bench Tutorial University of Pennsylvania Department of Electrical and Systems Engineering ESE171 - Digital Design Laboratory VHDL Test Bench Tutorial Purpose The goal of this tutorial is to demonstrate how to automate

More information

JUnit Automated Software Testing Framework. Jeff Offutt. SWE 437 George Mason University 2008. Thanks in part to Aynur Abdurazik. What is JUnit?

JUnit Automated Software Testing Framework. Jeff Offutt. SWE 437 George Mason University 2008. Thanks in part to Aynur Abdurazik. What is JUnit? JUnit Automated Software Testing Framework Jeff Offutt SWE 437 George Mason University 2008 Thanks in part to Aynur Abdurazik What is JUnit? Open source Java testing framework used to write and run repeatable

More information

Comparing Methods to Identify Defect Reports in a Change Management Database

Comparing Methods to Identify Defect Reports in a Change Management Database Comparing Methods to Identify Defect Reports in a Change Management Database Elaine J. Weyuker, Thomas J. Ostrand AT&T Labs - Research 180 Park Avenue Florham Park, NJ 07932 (weyuker,ostrand)@research.att.com

More information

Software Engineering. Software Testing. Based on Software Engineering, 7 th Edition by Ian Sommerville

Software Engineering. Software Testing. Based on Software Engineering, 7 th Edition by Ian Sommerville Software Engineering Software Testing Based on Software Engineering, 7 th Edition by Ian Sommerville Objectives To discuss the distinctions between validation testing and defect t testing To describe the

More information

Software Testing & Analysis (F22ST3): Static Analysis Techniques 2. Andrew Ireland

Software Testing & Analysis (F22ST3): Static Analysis Techniques 2. Andrew Ireland Software Testing & Analysis (F22ST3) Static Analysis Techniques Andrew Ireland School of Mathematical and Computer Science Heriot-Watt University Edinburgh Software Testing & Analysis (F22ST3): Static

More information

Summit Public Schools Summit, New Jersey Grade Level / Content Area: Mathematics Length of Course: 1 Academic Year Curriculum: AP Computer Science A

Summit Public Schools Summit, New Jersey Grade Level / Content Area: Mathematics Length of Course: 1 Academic Year Curriculum: AP Computer Science A Summit Public Schools Summit, New Jersey Grade Level / Content Area: Mathematics Length of Course: 1 Academic Year Curriculum: AP Computer Science A Developed By Brian Weinfeld Course Description: AP Computer

More information

Software Metrics. Lord Kelvin, a physicist. George Miller, a psychologist

Software Metrics. Lord Kelvin, a physicist. George Miller, a psychologist Software Metrics 1. Lord Kelvin, a physicist 2. George Miller, a psychologist Software Metrics Product vs. process Most metrics are indirect: No way to measure property directly or Final product does not

More information

A Comparison of Programming Languages for Graphical User Interface Programming

A Comparison of Programming Languages for Graphical User Interface Programming University of Tennessee, Knoxville Trace: Tennessee Research and Creative Exchange University of Tennessee Honors Thesis Projects University of Tennessee Honors Program 4-2002 A Comparison of Programming

More information

Random vs. Structure-Based Testing of Answer-Set Programs: An Experimental Comparison

Random vs. Structure-Based Testing of Answer-Set Programs: An Experimental Comparison Random vs. Structure-Based Testing of Answer-Set Programs: An Experimental Comparison Tomi Janhunen 1, Ilkka Niemelä 1, Johannes Oetsch 2, Jörg Pührer 2, and Hans Tompits 2 1 Aalto University, Department

More information