17th Dutch Testing Day
Proceedings of the 17th Dutch Testing Day
Testing Evolvability
Editors: Mariëlle Stoelinga and Mark Timmer
November 29, 2011, University of Twente, Enschede
Programme Committee
Axel Belinfante (Universiteit Twente)
Henk van Dam (Collis)
Hans-Gerhard Gross (Technische Universiteit Delft)
Bart-Jan de Leuw (Logica)
Iris Pinkster (Professional Testing BV)
Maurice Siteur (Capgemini)
Mariëlle Stoelinga (Universiteit Twente)
Jan Tretmans (ESI en Radboud Universiteit Nijmegen)
Tim Willemse (Technische Universiteit Eindhoven)

Steering Committee
Henk van Dam (Collis)
Jan Tretmans (ESI en Radboud Universiteit Nijmegen)

Organisation
Mariëlle Stoelinga (Formal Methods & Tools, Universiteit Twente)
Axel Belinfante (Formal Methods & Tools, Universiteit Twente)
Mark Timmer (Formal Methods & Tools, Universiteit Twente)

Publication details
Proceedings of the 17th Dutch Testing Day
Edited by Mariëlle Stoelinga and Mark Timmer
Published: February 2012, University of Twente, Enschede, The Netherlands
CTIT Workshop Proceedings Series WP12-01
Centre for Telematics and Information Technology
University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands
ISSN
Preface

These are the post-proceedings of the 17th Dutch Testing Day, held on the 29th of November 2011 at the recently completely renovated campus of the University of Twente. These post-proceedings cover a selection of the material presented during the Dutch Testing Day.

For me personally, the synergy between academic and industrial testing activities is one of the most attractive aspects of the Dutch Testing Day: we have participants, speakers, and financial contributions from both worlds, fostering cross-fertilization and building bridges between them. It is widely known that industrial-academic partnership is a key driver of innovation and of staying at the competitive edge.

Therefore, I would like to thank everybody who made the 17th Dutch Testing Day a success. First of all, many thanks go to all participants: without participants, no Testing Day. Also, I would like to thank everybody who submitted an abstract, giving the Programme Committee a hard job in selecting the 7 best abstracts for presentation. Moreover, we are very much indebted to our sponsors: CIMSOLUTIONS, SQUERIST, AESTIS KMG, the Institute for Software Quality, ASML, Valori, the Centre for Telematics and Information Technology, NWO, Improve Quality Services, Better Be, Professional Testing, Refis and Collis. Their financial contributions allow the participants to attend the Dutch Testing Day free of charge and show the importance of the topic. I am grateful to Rector Magnificus Ed Brinksma for opening the Testing Day, and to invited speaker Kim Larsen for giving the keynote speech. Finally, I would very much like to thank the other members of the organizing team of the 17th Dutch Testing Day: Axel Belinfante, Joke Lammerink and Mark Timmer for their fantastic support.

Mariëlle Stoelinga
Formal Methods & Tools
University of Twente
Table of Contents

TOPAAS-model for software reliability analysis
  Ed Brandt (Refis)

Constructing Formal Models through Automata Learning
  Fides Aarts, Faranak Heidarian, and Frits Vaandrager (Institute for Computing and Information Sciences, Radboud University)

Managing the co-evolution of software artifacts
  Joost Gabriels, Danny Holten, Martijn Klabbers, Wiljan van Ravensteijn, Alexander Serebrenik (Laboratory for Quality Software, Eindhoven University of Technology; SynerScope B.V.; Model-Driven Software Engineering, Eindhoven University of Technology)

Risk Based Testing. A piece of cake or not?
  Jeanne Hofmans (Improve Quality Services)

Testing Highly Distributed Service-oriented Systems using Virtual Environments
  Faris Nizamic, Rix Groenboom, Alexander Lazovik (Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen; Parasoft Netherlands B.V.)

State-driven Testing: An Innovative Approach to Successful UI Test Automation
  Jan de Coster (Micro Focus)
TOPAAS-model for software reliability analysis

Ed Brandt
Refis

Rijkswaterstaat is introducing probabilistic management for storm surge barriers and other objects. A risk analysis is at the centre of this approach, and software failure is an important component in it. Quantifying software failure using existing methods, however, turned out not to be feasible. To address this problem, a group of known experts in the field of software reliability engineering from industry and academia has developed a factor-driven model called TOPAAS. This article explains the necessity of software failure quantification, the use of expert opinion, the basics of TOPAAS as a property-driven model, and the work still to be done.

1. Introduction

Rijkswaterstaat is the Dutch Directorate for Infrastructure and Water Management. It is a public body responsible for building and maintaining the country's road and waterway infrastructure, including bridges, locks, weirs and movable storm surge barriers. A couple of years ago this public body introduced probabilistic management. A risk assessment is at the centre of this approach: it gives direction to test intervals, maximal time to repair, and modifications to the object. Ideally, Rijkswaterstaat would provide different scenarios with different cost estimates, each resulting in a different quality level, so that the government can make a political choice.

Software plays an increasingly important role in these objects and systems. So, to be able to quantify the risk of the entire system, quantification of software reliability is a necessity. Whereas reliability assessment and quantification are fairly common for hardware, they are not for software. Over the years several attempts have been made to introduce methods for this. The next section discusses some of them.

2. Existing methods for reliability analysis

As early as the 1970s, statistical models were described to predict the occurrence of failures in software. These Reliability Growth Models are based on the principle that the reliability of software grows as defects are found and fixed. This growth is not random but depends on properties of the system at hand. To determine an accurate failure probability, a statistically significant number of defects is required. With safety-critical, highly reliable systems such as storm surge barriers, that is a problem: when modules of those systems show many defects, they are discarded altogether. The Monte Carlo approach of random testing and comparing the results with the predicted outcome is not practically feasible either; it would take excessive testing effort in complex systems such as storm surge barriers. And then there are methods that help to achieve reliable software, like formal methods or the use of Safety Integrity Levels (IEC 61508). These methods, however, cannot be used to guarantee or quantify the reliability of existing software afterwards.
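To make the reliability-growth principle concrete, here is a minimal sketch of the classic Jelinski-Moranda growth model in Python; the model choice and all numbers are purely illustrative and are not part of TOPAAS:

```python
# Jelinski-Moranda reliability growth model (illustrative sketch).
# Assumption: the program starts with N faults, each contributing an
# equal hazard rate phi; fixing a fault permanently removes its share.

def failure_intensity(n_initial_faults: int, phi: float, faults_fixed: int) -> float:
    """Failure intensity after a number of defects have been removed."""
    remaining = max(n_initial_faults - faults_fixed, 0)
    return phi * remaining

def expected_time_to_next_failure(n_initial_faults: int, phi: float,
                                  faults_fixed: int) -> float:
    lam = failure_intensity(n_initial_faults, phi, faults_fixed)
    return float("inf") if lam == 0 else 1.0 / lam

# Example: 50 initial faults, per-fault hazard 0.001 per hour.
for fixed in (0, 25, 45, 49):
    print(fixed, expected_time_to_next_failure(50, 0.001, fixed))
```

Fitting the parameters of such a model to observed failure data is what requires the statistically significant number of defects mentioned above, and that is exactly what highly reliable systems do not produce.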
The overall conclusion for Rijkswaterstaat must be that there are no adequate or feasible methods available. To address this problem, a group of known experts in the field of software reliability engineering conceived a new method called TOPAAS.

3. TOPAAS-model

TOPAAS is short for Task Oriented Probability of Abnormalities Analysis for Software. It is a static approach to reliability analysis: it does not involve actual usage of the system at hand, as in reliability growth modelling, but instead evaluates the circumstances that influence reliability and quantifies these factors.

Software failure in this approach is defined as the absence (for too long) of desired task execution, or the incorrect task execution, by a software module with respect to the mission of the overall system. A task execution is an individual action implemented in software that is observable on the outside (on a technical level) and for which it can be decided whether it is performed correctly or incorrectly with respect to the overall mission of the system. A software module is then defined as a piece of software that is represented by a specific group of lines of source code (or its graphical equivalent) with the following properties:
- A clear distinction can be made with respect to other pieces of code, and the module provides clearly separated functionality that is required by the system;
- It exhibits observable behaviour with specific qualities (like timeliness, reliability, etc.);
- It isn't useful (in the light of the failure analysis on system level) or possible to make a further decomposition.

Applying TOPAAS starts with decomposing the system into software modules by performing a Fault Tree Analysis (FTA). The probability of software failure of individual modules is then determined by investigating 15 factors divided into 5 dimensions. Answering the questions for every factor results in a failure probability per module in the FTA. From a mathematical point of view, the factor-driven model provides n factors F_i to determine the failure probability P:

P = P_B × F_1 × F_2 × ... × F_n

where
- P_B is the base failure rate (1 as a conservative default value, implying the assumption that software will fail if we cannot apply a reliability analysis method);
- F_i is the impact of a specific factor, based on a piece of knowledge.

TOPAAS contains the following 5 dimensions and, in total, 15 factors:
- Development process: Safety Integrity Level; inspections; design modifications; maturity of the organisation; knowledge and experience; cooperation
- Product properties: complexity; size; transparency of the architecture; certified compiler
- Requirements: traceability
- Testing: techniques and coverage
- Operational use: multiprocessor; field data available; monitoring
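As a sketch (not the official TOPAAS tooling), this computation takes only a few lines of Python. The factor exponents below are hypothetical assessments, except the -½ for formal test techniques with low coverage, which appears in the example table below:

```python
# Sketch of the TOPAAS factor combination (illustrative, not official tooling).
# Each assessed factor yields an exponent e_i, so its impact is F_i = 10**e_i.
# With base failure rate P_B (conservatively 1): P = P_B * F_1 * ... * F_n.

BASE_FAILURE_RATE = 1.0  # conservative default: assume failure without analysis

def failure_probability(exponents, p_b=BASE_FAILURE_RATE):
    """Combine per-factor exponents into a module failure probability."""
    return p_b * 10 ** sum(exponents)

# Hypothetical assessment of one module in the fault tree:
assessed = {
    "test techniques and coverage": -1 / 2,  # formal techniques, low coverage
    "complexity": -1 / 3,                    # made-up exponent
    "inspections": -1 / 3,                   # made-up exponent
}
print(failure_probability(assessed.values()))  # 10**(-7/6), roughly 0.068
```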
The combination of these factors expresses the combined expert opinions that determine the probability of failure. By using these underlying factors, simulating the experts, the accuracy of the estimates and the transparency of the process are hugely increased. An example of factor 12, "Test techniques and coverage", is shown below; each answer yields an exponent, and the applicable column depends on whether the system was developed targeting a Safety Integrity Level of 3 or 4 ("n/a" marks answers that are not applicable in that case).

Factor 12: Test techniques and coverage                        Normal   SIL3/SIL4
1. Unknown                                                      0        n/a
2. No documented test execution                                 0        n/a
3. Documented test execution, no techniques, unknown coverage   -⅓       n/a
4. Formal test techniques, low coverage                         -½       ⅔
5. Formal test techniques, medium coverage                      -⅔       ½
6. Formal test techniques, high coverage                        -1       0
7. Formal test techniques, high documented coverage             -1⅓      -⅓

Note that TOPAAS makes a distinction between systems developed targeting a Safety Integrity Level of 3 or 4 and others. When a system is not developed targeting a SIL and formal test techniques were used but with a low coverage, the failure probability will be decreased by a factor 10^(-½) (answer 4). But when SIL 3 or 4 was targeted, the failure probability in that case will increase by a factor of 10^(⅔)!

Depending on the outcome of the underlying factors, the TOPAAS dimensions may influence the base failure rate as shown in a diagram relating the variance in the failure probability factor to the five dimensions: development process, product, requirements traceability, testing, and execution/production environment.

To be done

Version 2 of the theory behind the model has been published within Rijkswaterstaat and its suppliers and is already applied in several projects. It is available, but as yet only in Dutch. Publication in international magazines is being prepared. The model has been tested on several reference projects and compared to other methods of reliability analysis, including expert opinion. It now needs further backup and calibration with statistical data and, based on that, further review and referencing. Manuals and tools should be provided when broadening the access and usage. Support for a user forum and model maintenance should also be organized.
A particular point of investigation is related to the crucial point of software module decomposition. Failure probabilities of modules in an FTA may interfere with or influence each other. The correlation of modules in this respect needs further investigation, resulting in guidelines for TOPAAS users.

TOPAAS authors and reviewers

Authors of the TOPAAS model are: Alessandro Di Bucchianico (TU/e), Jaap van Ekris (DNV), Jan-Friso Groote (TU/e), Wouter Geurts (Logica), Gerben Heslinga (Intermedion), Gea Kolk (Movares) and Ed Brandt (Refis). Reviewers of the TOPAAS model are: Sipke van Manen (Bouwdienst RWS), Harry van der Graaf (Bouwdienst RWS), Peter van Gestel (Delta Pi) and Piet de Groot (NRG).

About the author

Ed Brandt works in information technology. He has specialized in software testing since 1996 and in 2003 founded Refis, specialized in reliability engineering and software metrics. He is one of the authors of the TOPAAS method. Please contact the author for more information.
Constructing Formal Models through Automata Learning

Fides Aarts, Faranak Heidarian, and Frits Vaandrager
Institute for Computing and Information Sciences, Radboud University Nijmegen
P.O. Box 9010, 6500 GL Nijmegen, the Netherlands

Model-based system development is becoming an increasingly important driving force in the software and hardware industry. The construction of models typically requires specialized expertise, is time consuming and involves significant manual effort, implying that in practice models are often not available, or become outdated as the system evolves. In practice, 80% of software development involves legacy code, for which only poor documentation is available. Manual construction of models of legacy components is typically very labor intensive and often not cost effective. The solution is to infer models automatically through observations and tests, that is, through black-box reverse engineering.

The problem of building a state machine model of a system by providing inputs to it and observing the resulting outputs, often referred to as black-box system identification, is both fundamental and of clear practical interest. A major challenge is to let computers perform this task in a rigorous manner for systems with large numbers of states. The problem of learning state machines (automata) has been studied for decades, and many techniques for constructing models from observation of component behavior have been proposed. The most efficient such techniques use the setup of active learning, where a model of a system is learned by actively performing experiments on that system. Tools that are able to infer state machine models automatically by systematically pushing buttons and recording outputs have numerous applications in different domains. They support understanding and analyzing legacy software, regression testing of software components, protocol conformance testing based on reference implementations, reverse engineering of proprietary/classified protocols, and inference of botnet protocols. LearnLib [7], the winner of the 2010 Zulu competition on regular inference, is currently able to learn state machines with at most in the order of states.

During the last few years important developments have taken place on the borderline of verification, model-based testing and automata learning. By combining ideas from these three areas it will become possible to learn models of realistic software components with state spaces that are many orders of magnitude larger than what state-of-the-art tools can currently handle. Clearly, abstraction is the key to scaling existing automata learning methods to realistic applications. The idea of an intermediate component that takes care of abstraction is very natural and is used, implicitly or explicitly, in many case studies on automata learning. Aarts, Jonsson and Uijen [3] formalized the concept of such an intermediate abstraction component. Inspired by ideas from predicate abstraction, they defined the notion of a mapper A, which is placed in between the SUT M and the learner, and transforms the interface of the SUT by an abstraction that maps (in a history-dependent manner) the large set of actions of the SUT into a small set of abstract actions. By combining the abstract machine H learned in this way with information about the mapper A, they can effectively learn a (symbolically represented) state machine that is equivalent to M. Roughly speaking, the learner is responsible for learning the global control modes in which the system can be, and the transitions between those modes, whereas the mapper records some relevant state variables (typically computed from the data parameters of previous input and output actions) and takes care of the data part of the SUT. A major challenge will be the development of algorithms for the automatic construction of
mappers: the availability of such algorithms will boost the applicability of automata learning technology.

In [2], we provided a solid theoretical foundation for a generalization of the abstraction framework of [3]. The theory of [2] is (a) based on interface and I/O automata instead of the more restricted Mealy machines (in which each input induces exactly one output), (b) supports the concept of a learning purpose, which allows us to restrict the learning process to relevant interaction patterns only, and (c) supports a richer class of abstractions. In [1], we presented our prototype tool Tomte, named after the creature that shrank Nils Holgersson into a gnome and (after numerous adventures) changed him back to his normal size again. Tomte is able to automatically construct mappers for a restricted class of scalarset automata, in which one can test for equality of data parameters, but no operations on data are allowed. The notion of a scalarset data type originates from model checking, where it has been used by Ip & Dill for symmetry reduction [6].

We use the technique of counterexample-guided abstraction refinement: initially, the algorithm starts with a very coarse abstraction A, which is subsequently refined if it turns out that the abstraction of M induced by A is not behavior-deterministic. Non-determinism arises naturally when we apply abstraction: it may occur that the behavior of a SUT is fully deterministic, but that due to the mapper (which, for instance, abstracts from the precise value of certain input parameters), the system appears to behave non-deterministically from the perspective of the learner. We used LearnLib as our basic learning tool, and therefore the abstraction of the SUT may not exhibit any non-determinism: if it does, then LearnLib crashes and we have to refine the abstraction. This is exactly what was done repeatedly during the manual construction of the abstraction mappings in the case studies of [3]. We formalized this procedure and described the construction of the mapper in terms of a counterexample-guided abstraction refinement (CEGAR) procedure, similar to the approach developed by Clarke et al. [4] in the context of model checking.

Using Tomte, we have succeeded in learning, fully automatically, models of several realistic software components, including the biometric passport, the SIP protocol and the various components of the Alternating Bit Protocol. The Tomte tool and all the models that we used in our experiments are available online. Table 1 gives an overview of the systems we learned, with the number of input refinement steps, total learning and testing queries, number of states of the learned abstract model, and time needed for learning and testing (in seconds). For example, the learned SIP model is an extended finite state machine with 29 states, 3741 transitions, and 17 state variables of various types (booleans, enumerated types, (long) integers, character strings, ...). This corresponds to a state machine with an astronomical number of states and transitions, thus far fully out of reach of automata learning techniques. We have checked that all models inferred are bisimilar to their SUT. For this purpose we combined the learned model with the abstraction and used the CADP tool set for equivalence checking.
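For illustration only, a minimal Python sketch of the mapper idea for the scalarset case; the interfaces are invented (in particular, sut.step is a hypothetical method), and this is not the Tomte implementation:

```python
# Illustrative mapper between learner and SUT (scalarset style: parameter
# values may only be compared for equality, never operated on).

class Mapper:
    def __init__(self):
        self.memory = []  # concrete parameter values seen in this query

    def concretize(self, abstract_symbol):
        """Turn an abstract input from the learner into a concrete value."""
        if abstract_symbol == "FRESH":
            value = max(self.memory, default=0) + 1  # any unused value
        else:  # ("EQUAL_TO", i): reuse the i-th remembered value
            value = self.memory[abstract_symbol[1]]
        self.memory.append(value)
        return value

    def abstract(self, concrete_output):
        """Map a concrete SUT output back onto a small abstract alphabet."""
        if concrete_output in self.memory:
            return ("EQUAL_TO", self.memory.index(concrete_output))
        return "OTHER"

def abstract_query(sut, word):
    """Run one abstract query: concretize inputs, abstract outputs."""
    mapper, outputs = Mapper(), []
    for symbol in word:
        outputs.append(mapper.abstract(sut.step(mapper.concretize(symbol))))
    return tuple(outputs)

# If the same abstract word ever produces two different abstract output
# sequences, the abstraction is too coarse: a CEGAR step refines the
# mapper and the learner is restarted, as described above.
```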
System under test                       Input        Learning/        States   Learning/
                                        refinements  testing queries           testing time
Alternating Bit Protocol - Sender       1            193/...          ...      .../104.9s
Alternating Bit Protocol - Receiver     2            145/...          ...      .../134.5s
Alternating Bit Protocol - Channel      0            31/...           ...      .../107.5s
Biometric Passport                      ...          .../...          ...      .../94.5s
Session Initiation Protocol             ...          .../...          ...      .../35.9s
Login                                   3            639/...          ...      .../56.8s
Farmer-Wolf-Goat-Cabbage Puzzle         4            699/...          ...      .../121.8s
Palindrome/Repdigit Checker             ...          .../...          ...      .../256.4s

Table 1. Learning statistics

Currently, Tomte can learn SUTs that may only remember the last and first occurrence of a parameter. We expect that it will be relatively easy to dispose of this restriction. We also expect that our CEGAR-based approach can be further extended to systems that may apply simple or
known operations on data, using technology for the automatic detection of likely invariants, such as Daikon [5]. Even though the class of systems to which our approach currently applies is limited, the fact that we are able to learn models of systems with data fully automatically is a major step towards a practically useful technology for automatic learning of models of software components.

References
1. F. Aarts, F. Heidarian, P. Olsen, and F.W. Vaandrager. Automata learning through counterexample-guided abstraction refinement, October. Available through URL publications/papers/fvaan/cegar11/.
2. F. Aarts, F. Heidarian, and F.W. Vaandrager. A theory of abstractions for learning I/O automata, October. Available through URL.
3. F. Aarts, B. Jonsson, and J. Uijen. Generating models of infinite-state communication protocols using regular inference with abstraction. In A. Petrenko, J.C. Maldonado, and A. Simao, editors, 22nd IFIP International Conference on Testing Software and Systems, Natal, Brazil, November 8-10, 2010, Proceedings, volume 6435 of Lecture Notes in Computer Science. Springer, 2010.
4. E.M. Clarke, O. Grumberg, S. Jha, Y. Lu, and H. Veith. Counterexample-guided abstraction refinement for symbolic model checking. J. ACM, 50(5), 2003.
5. M.D. Ernst, J.H. Perkins, P.J. Guo, S. McCamant, C. Pacheco, M.S. Tschantz, and C. Xiao. The Daikon system for dynamic detection of likely invariants. Science of Computer Programming, 69(1-3):35-45, 2007.
6. C.N. Ip and D.L. Dill. Better verification through symmetry. Formal Methods in System Design, 9(1/2):41-75, 1996.
7. H. Raffelt, B. Steffen, T. Berg, and T. Margaria. LearnLib: a framework for extrapolating behavioral models. STTT, 11(5), 2009.

About the authors

Fides Aarts is a PhD student at the Radboud University in Nijmegen. After finishing her Master's thesis in Uppsala on inference and abstraction of communication protocols under the supervision of Bengt Jonsson, she continued working on learning automata. Currently, she is involved in automatically generating abstractions of automata to extend inference to systems with large parameter domains.

Faranak Heidarian finished her PhD at Radboud University Nijmegen. She worked in the MBSD group, and her research was about abstraction refinement. She also carried out research on the formal analysis of synchronization protocols for wireless sensor networks. She has a bachelor's degree in software engineering and a Master's degree in Computer Science from Sharif University of Technology, Tehran, Iran.

Frits Vaandrager has a strong interest in the development and application of theory, (formal) methods and tools for the specification and analysis of computer-based systems. In particular, he is interested in real-time embedded systems, distributed algorithms and protocols. Together with Lynch, Segala, and Kaynar he developed the (timed, probabilistic and hybrid) input/output automata formalisms, which are basic mathematical frameworks to support the description and analysis of computing systems. He has been and is involved in a large number of projects in which formal verification and model checking technology is applied to tackle practical problems from industrial partners. Within the OCTOPUS project with Océ he is currently involved in the construction of the DSEIR toolset for model-driven design-space exploration for embedded systems. Recently, he has also become very interested in automata learning.
15 Managing the co-evolution of software artifacts J.M.A.M. Gabriels a, D.H.R. Holten b, M.D. Klabbers a, W.J.P. van Ravensteijn b, A. Serebrenik c a Laboratory for Quality Software, Eindhoven University of Technology b SynerScope B.V. c Model-Driven Software Engineering, Eindhoven University of Technology. Software development projects are virtually always carried out under pressure. Planning and budgets are tight, room for errors is non-existent and the pressure to deliver is high. Natural questions for (test) managers arise, such as: When have we tested enough? and How many tests do we have to redo for this new version?'. The naive answer would be: when we have convinced ourselves through testing that all requirements are satisfied. Unfortunately, attaining maximal confidence with minimal effort is not easy. In order to convince ourselves that the system does what it is supposed to do, tests are needed. Requirements, design and code change during the development of software. As a consequence, tests need to change as well. In the end we want to ensure that all requirements and risks are adequately addressed with tests. For this, tests at different levels of abstraction and for different software artifacts are required and need to be managed. Traceability matrices are often used to relate user requirements, design, code, and tests. Traceability allows to link elements from different software artifacts, like requirements, design components and code components, to each other and to test cases. As a result, traceability can be used to analyze for example how well software artifacts are covered by test cases. Because a requirement leads to design components and eventually to code, tests are needed at each stage. Traceability can tell us how well test cases cover different software artifact elements. This information can be used to uncover mistakes in software artifacts at an early stage and actively manage the development and test efforts. Unfortunately, traceability information is often spread out over multiple artifacts and describes only the current situation. TraceVis, a visual analytics tool based on the master thesis of Van Ravensteijn 1 combines the traceability information of multiple software artifacts in an interactive way. The tool was recently applied to fraud detection in financial transactions 2 and software model transformations 3, Figure 1 shows the traceability between four, vertically placed, hierarchical software artifacts: acceptance test plan (ATP), user requirements document (URD), software requirements document (SRD), and architectural design document (ADD). Hierarchy within each document is given by its division into chapters, sections and subsections. Each line between hierarchies represents a link between elements of two artifacts, e.g., user requirement being tested by an acceptance test or architectural component implementing a software requirement. The hierarchies can be collapsed and extended and risk levels and priorities can be 1 W.J.P. van Ravensteijn, Visual traceability across dynamic ordered hierarchies, M.Sc. thesis, 2011, Eindhoven University of Technology 2 SynerScope on-line demos: and, specifically, SynerScope for Fraud 3 M.F. van Amstel, A. Serebrenik, & M.G.J. van den Brand, Visualizing traceability in model transformation compositions, 2011, Workshop on Composition and Evolution of Model Transformations, London: Department of Informatics, King's College London. 15
visualized by giving the elements different colors. The edge bundling technique bundles similar relations in the middle, clearly showing deviations. Furthermore, TraceVis provides a way of assessing the evolution of traceability between artifacts through a timeline (lower part of Figure 1). The timeline shows events such as the addition, modification or removal of individual elements, e.g., user requirements or tests, as well as the addition or removal of traceability links between the elements.

Figure 1: TraceVis tool with traceability information from a student capstone project at the Eindhoven University of Technology. The edge bundling technique makes it easy to spot deviations.

Already at first glance, we can see points of attention in Figure 1: a gap in requirement coverage (1) and a gap in the timeline (2). The gap labeled (1) shows some medium and low priority user requirements not covered by acceptance tests. The gap labeled (2), located in the timeline, shows that test cases were added very late in the project. The gap itself relates to implementation activities in which there were no changes to the shown software artifacts.

By interactively inspecting the traceability information, we can discover points of interest. Figure 1 reveals, for example, that a selected user requirement (URC8) is tested by one acceptance test, corresponds to one software requirement and is implemented in one architectural component. While this does not seem problematic, further inspection of the evolution of user requirements closely related to URC8 tells an entirely different story. While grouped together in the URD, the corresponding software requirements are spread all over the SRD, and their implementation involves five out of thirteen architectural components.

Figure 1 also shows outliers: grey lines running off-center between the URD and the SRD (3), and between the SRD and the ADD. The core part of the SRD consists of two parts: Requirements and Rights table. The Requirements part consists of Functional requirements, further divided into groups (with 189 requirements in total), and Non-functional requirements (1 requirement). The Rights table part contains only one element, i.e., the rights table. This means that individual functional requirements are nested at depth four, the non-functional requirements at depth three and the rights table at depth two. Therefore the rights table is an outlier in the organization of the SRD.
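The kind of coverage query that TraceVis visualizes can be sketched in a few lines of Python; the links below are invented examples, not data from the capstone project:

```python
# Sketch: finding coverage gaps in a traceability matrix (invented data).
# Links relate elements of one artifact to elements of another, e.g.
# user requirements (URD) to the acceptance tests (ATP) that cover them.

links = {
    "URC1": ["AT1", "AT2"],
    "URC2": ["AT3"],
    "URC3": [],            # a gap: no acceptance test covers URC3
    "URC8": ["AT9"],
}

uncovered = sorted(req for req, tests in links.items() if not tests)
print("Requirements without acceptance tests:", uncovered)

# Run per artifact pair (URD->SRD, SRD->ADD, ...), the same query yields
# exactly the kind of gap that stands out visually as (1) in Figure 1.
```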
The evolutionary traceability information allows us to see how well tests cover artifacts and whether risks are sufficiently tackled. It gives insight into the balance between tests, priorities, and risks, and can support decision making in assigning test effort. Furthermore, it can help in determining which tests need to be redone when a certain component or requirement changes. The insight into the co-evolution of software artifacts and associated tests makes it possible to actively manage test effort from an early stage on.

[1] W.J.P. van Ravensteijn, Visual traceability across dynamic ordered hierarchies, M.Sc. thesis, 2011, Eindhoven University of Technology.
[2] SynerScope on-line demos and, specifically, SynerScope for Fraud.
[3] M.F. van Amstel, A. Serebrenik, and M.G.J. van den Brand, Visualizing traceability in model transformation compositions, 2011, Workshop on Composition and Evolution of Model Transformations, London: Department of Informatics, King's College London.

About the authors

Joost Gabriels ([email protected]) received his M.Sc. in Computer Science from the Eindhoven University of Technology (TU/e). Currently, he is a researcher and consultant at the Laboratory for Quality Software (LaQuSo), TU/e. His interests include software architecture and design, specification languages, program verification and the quality of software in general.

In his Ph.D., defended in 2009 at TU/e, Dr. Danny Holten ([email protected]) developed techniques for the visualization of graphs and trees. He received the best paper award at IEEE INFOVIS 2006 and was nominated for the best paper award at ACM CHI. He furthermore received the TU/e Doctoral Project Award for the best PhD dissertation. From 2009 to 2011, he worked as a postdoctoral researcher. Since April 2011, Danny Holten is CTO at SynerScope B.V., a Dutch visualization-research-inspired TU/e spin-off company that leverages his PhD research for "Big Data" analysis.

Martijn Klabbers ([email protected]) received an M.Sc. in Computer Science from the Delft University of Technology, The Netherlands. At the Eindhoven University of Technology, he is a senior consultant at LaQuSo (Laboratory for Quality Software). His research interests include certification, decision support systems, and user requirements.

In 2011, Wiljan van Ravensteijn ([email protected]) received his Master's degree in computer science (with honors) from the Dept. of Mathematics & Computer Science at TU/e. His Master's thesis was titled "Visual traceability across Dynamic Ordered Hierarchies". Since August 2011, Wiljan van Ravensteijn is a Software Developer at SynerScope B.V.

Dr. Alexander Serebrenik ([email protected]) is an assistant professor of Model-Driven Software Engineering at TU/e. He obtained his Ph.D. in Computer Science from Katholieke Universiteit Leuven, Belgium (2003) and an M.Sc. in Computer Science from the Hebrew University, Jerusalem, Israel. Dr. Serebrenik's areas of expertise include software evolution, maintainability and reverse engineering, program analysis and transformation, process modeling and verification.
Risk Based Testing. A piece of cake or not?

Jeanne Hofmans
Improve Quality Services

Risk-based testing has been a popular approach to testing for a while. Based on the fact that it is impossible to test everything, the idea of focusing test effort on the most risky areas has been embraced by both testers and management. But even today many companies and projects are struggling with their risk-based test approach. Especially as systems under test become more complex, the need to focus on the risky areas increases, and with it the need for a flexible and practical approach to identifying and managing risks. This article discusses popular risk methods and their pitfalls, and introduces a practical approach to risk-based testing.

1. Product Risk Analysis

An important task in risk-based testing is the product risk analysis (PRA). Product risk analysis is a method to identify and analyze risks. A risk is a factor that could have a negative consequence in the future and is usually expressed as impact and likelihood (ISTQB). In testing one should focus on risks with high impact and high likelihood. Risks with an extremely high impact should receive more focus than risks with low impact. This does not mean that no effort is spent on risks with a lower impact; it only means that less effort is spent.

2. Common PRA-methods

Several methods for product risk analysis exist. Popular methods are PRISMA, PRIMA, the risk method of TMAP Next and the Failure Mode, Effect & Criticality Analysis (FMECA). Methods like PRIMA and the risk method of TMAP Next divide the system under test into risk areas in which certain quality attributes are important. In the TMAP Next risk analysis, several other tables are constructed to determine the risk of smaller system components. This is done to clarify the risk concerned with certain quality attributes, for both systems and components. The risk factor determines the test effort per component. The PRISMA method does not prescribe how to categorize risks; it is a method to identify risk quadrants, as shown in Figure 1. Often it is used as a specification-based method in which each (part of the) specification represents a risk item. Each risk item is to be assigned a risk quadrant.

Figure 1: Four risk quadrants (probability versus impact) as used in the PRISMA method
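A hedged sketch of how risk items can be placed in the four quadrants; the scoring scale and threshold are invented, as PRISMA itself does not prescribe them:

```python
# Sketch: assigning risk items to PRISMA-style quadrants (invented scale).

def quadrant(likelihood: float, impact: float, cut: float = 0.5) -> str:
    """Place a risk item (scores normalized to 0..1) in one of four quadrants."""
    if impact >= cut and likelihood >= cut:
        return "high impact, high likelihood: test first and most thoroughly"
    if impact >= cut:
        return "high impact, low likelihood"
    if likelihood >= cut:
        return "low impact, high likelihood"
    return "low impact, low likelihood: least test effort"

# Each (part of a) specification becomes a risk item with two scores:
risk_items = {"login": (0.8, 0.9), "report layout": (0.6, 0.2)}
for item, (likelihood, impact) in risk_items.items():
    print(item, "->", quadrant(likelihood, impact))
```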
In a Failure Mode, Effect and Criticality Analysis a complete fault tree is made to determine failure modes that could have a negative effect on a predefined top event. This is preferably done in the design stage, in which the design can still be altered based on the results of the FMECA.

3. Pitfalls using common PRA-methods

Fully applying an FMECA is very time-consuming; this is primarily done for high-risk systems like aircraft. A solution is to use an FMECA only for a risky part of the system.

TMAP Next and PRIMA give stakeholders and managers good insight into the high-risk areas of a system. However, the resulting PRA is often not used by test engineers when deciding what to test and what not to test. Perhaps the PRA does not match the available specifications, because the PRA is based on subsystems whilst the specification is function-based. Or perhaps the test engineer does not understand all the tables that are delivered in the TMAP Next risk analysis method. It then becomes hard for a test engineer to test based on risks. In the best case the information provided is used by the test manager as a checklist to see if nothing is missing in the current test set. In the worst case the PRA is not used at all.

Using PRISMA in a specification-based manner overcomes the problem of the PRA not being a practical source for test engineers. If chosen wisely, the risk items in the PRA represent (sections of) the specifications. As testers use the specifications as a starting point for testing, it helps when the PRA simply states which parts represent the highest risk. The pitfall is that PRISMA is then only used in such a specification-based manner: if test cases are only based on specifications, it is likely that not all risks are covered.

4. A practical approach for Product Risk Analysis

A practical rule of thumb is to base the product risk analysis on three types of product risk:
- Specification;
- Integral;
- Regression.

These three types of product risk are very much in line with the way testers (should) test. First of all the specification is tested, where more attention is paid to the parts identified as most risky. Then special attention is paid to risks that exist but are not clear in the specification: the integral (or implicit) risks. Finally, attention must be paid to the approach for regression testing.

For a system test, the specification and thus the PRA can be based on functions or use cases. For an interface test, the PRA and specification can be based on interfaces (per Interface Requirement Specification or for each interface in an Interface Requirement Specification). For a maintenance test, the specification and thus the PRA can consist of change reports. For a regression test, the set of existing test cases can be input to the PRA.

This rule of thumb, identifying three types of product risk, can very well be used in combination with PRISMA. In this case a PRA is made for each test level. Some test levels can share a PRA if they share the same specifications or are highly related and traceable. For identifying integral or implicit risks, brainstorm sessions, stakeholder interviews or FMECAs can be used. An FMECA can be done even for a complete system, but can also very well be applied to some function or subsystem. PRIMA and the first part of the risk method of TMAP Next can be useful to give an overall, more static overview of the risky areas of the system. The PRA per test
level will be used more intensively, as it represents a tester's day-to-day business: testing according to the specifications, testing the relevant risks, and regression testing.

5. Product Risk Process

The most important thing to be said about the process of risk-based testing is that it all depends on how well risks are kept track of. Most of all this requires a lot of discipline. However, when defining product risks in terms of three types (specification, integral and regression), this can very easily be translated to the test strategy of a test level. Within a test level, first the specification-based risks are covered, then (or in parallel) the integral risks, and finally regression tests take place. The discipline lies in constantly keeping track of the status of existing risks and identifying new risks. It is easier to keep track of the status of risks when defining three types of product risk; at least, if an accompanying test strategy is used. The reward is that stakeholders and management become more involved. Involvement grows as they better understand what testers actually do and how testers can be helped to let the project deliver a system that supports the stakeholders and users in their needs.

6. Conclusion

The basic concept of risk-based testing is still embraced by many testers, managers and stakeholders. If used properly, it should direct testing in an efficient manner. Unfortunately, very often risk-based testing is not a piece of cake. In this article I discussed common methods for product risk analysis (PRA), and possible reasons why projects struggle. In the worst case a PRA becomes shelfware. In general it is smart to use a combination of risk analysis techniques, like an FMECA, a brainstorm, PRISMA, etc. In this way risks are determined from different viewpoints. These viewpoints are also represented in the three types of risk identified in this article: specification, integral and regression. The strength of identifying three types of product risk is that it is both understandable for a stakeholder and usable for a tester.

About the author

Jeanne Hofmans studied Software Technology at Utrecht University and is now a test consultant at Improve Quality Services. She has participated in many projects in both the financial and the embedded domain, where risk-based testing was more or less successful. The suggested solution was implemented by Jeanne at a technical test center and presented at the Testnet Spring Event (Nieuwe helden) and at a Supervision session at Rabobank International. Currently, Jeanne works as a test manager for the Sluiskiltunnel and as an auditor for several other projects. Jeanne Hofmans can be contacted at: [email protected]
Testing Highly Distributed Service-oriented Systems using Virtual Environments

F. Nizamic (1), R. Groenboom (2), A. Lazovik (1)
(1) Johann Bernoulli Institute for Mathematics and Computer Science, Faculty of Mathematics and Natural Sciences, University of Groningen
(2) Parasoft Netherlands B.V.

Modern application landscapes are becoming more and more interconnected. The introduction of SOA and ESB technology has increased the dependency on other systems, both internal and external. This has severe consequences for testing SOA applications, their availability, and the constraints on different environments.

1. Testing challenges of service-oriented systems

On March 11, 2009, twenty-five residents of a little municipality in the west of the Netherlands unexpectedly received a certificate stating that they were married or had registered a newborn. What happened? To test the central information system of the country, data in the central people registry had been modified. While the data was supposed to be only a test for the system, it ended up updating the real population database and initiating the processes for informing the involved citizens. In fact, the change propagated even further: from the central population registry, it also affected the information systems of the tax office and of the retirement administration.

This anecdote illustrates the challenges faced in testing service-oriented systems. From the perspective of a tester, most of the challenges arise from the need to prepare and execute tests in the shortest possible amount of time and without dependency on other processes. In order to work independently of other processes, dependencies such as the availability of the testing environment, or those caused by sharing resources with others, must be removed. Moreover, testers must start preparing test scripts before the code that they will test is written. One of the reasons for this is that many software companies work according to an Agile methodology, which means that the iterations in which developers and testers need to deliver additional value to the software are much shorter. If testers could start preparing test scripts from day one of the sprint, their chances of preparing tests on time would significantly increase. In addition, testers usually have difficulty producing all desired states of the external services that are part of their chain tests. In order to cover all code paths and to ensure that the system is fully functional, the tester needs the ability to mock up some of the services or, in other words, to simulate the desired behavior of external services. Furthermore, external services may be available just for a limited time, or not available at all.

From the perspective of a company, besides software quality, the major concern is additional cost. Such costs can be caused by acquiring additional equipment or human resources; even higher costs lie in unnecessary maintenance work or in project delays. These additional costs need to be reduced or, if possible, totally eliminated.

In short, all the above-mentioned issues point to the same goal: to increase the speed of the testing process and thereby reduce costs, by removing dependencies and gaining control over the testing environments. Most of the above-mentioned issues could be avoided if a simulation environment, behaving in the very same way as the real system, is used.
Nowadays, generating this kind of simulation environment is very feasible, as will be further discussed in this paper.
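To preview what such a simulation environment can look like in code, here is a minimal, hypothetical sketch of a virtual asset: a stub that returns canned responses for known requests and keeps simple state for stateful methods (all names are invented, and this is not the Parasoft Virtualize API):

```python
# Sketch of a virtual asset standing in for an external service (invented).

class VirtualAsset:
    """Simulates an external service: canned responses plus simple state."""

    def __init__(self, canned_responses):
        self.canned = canned_responses  # request -> response mapping
        self.accounts = {}              # state, for stateful methods

    def handle(self, request):
        return self.canned.get(request, "<fault: unknown request>")

    def create_account(self, name):
        # Stateful method: there is no meaningful input-to-output mapping,
        # so the stub must keep state of its own (see section 2 below).
        account_id = len(self.accounts) + 1
        self.accounts[name] = account_id
        return account_id

# A simulation environment is then one virtual asset per external system,
# populated once with test data:
doctor = VirtualAsset({"adviceRequest(citizen=42)": "advice: wheelchair"})
print(doctor.handle("adviceRequest(citizen=42)"))
print(doctor.create_account("municipality-test"))
```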
2. The solution is in virtualization

The facts are that software is continuously evolving and that in service-oriented systems the functionality is spread across the network. However, testing of software still implicitly assumes full control over a system. The reality is that the service under test depends on services provided by external parties (which are not owned by the party performing the testing) in order to provide its own service. In this work we propose that simulation environments be semi-automatically generated, thus gaining control over external services and their data. Why?

As a part of integration testing, different versions of interdependent services need to be tested together as one system, in order to confirm that all functional aspects of this particular combination of services are covered. To prove this, the data stored in each system needs to be appropriate, so that all important test cases can be executed. If the external systems are not owned by us, which is usually the case, tests can be performed only with the data that is available. Alternatively, virtual assets that mimic the behavior of external services can be created. A set of such virtual assets, representing all external systems, would form a simulation environment. It is possible to generate this kind of environment using currently available tools. (Figure: a testing tool drives the service under test, while virtual assets (VA) stand in for external services 1, 2 and 3.) Stubs or mock services are not something new, and nowadays they can easily be generated just from a WSDL, a log file, or in some other way.

Once the virtual assets are generated and deployed, the behavior of the external service methods needs to be modeled. To make it even easier, only the behavior of those methods that are needed to cover the defined test cases needs to be modeled; thus a careful choice of the test cases saves time and resources. The behavior of a service can be modeled simply by defining expected outputs for certain inputs. However, this is not that simple for services that preserve a state. Examples of these kinds of service methods are ones that do not have input parameters but need to return an output (for example, a createaccount method). For this kind of method, it is difficult to map a set of inputs to a set of outputs.

Besides the generation of the simulation environment, the whole ecosystem should be populated with corresponding test data. This procedure should be carried out only once. After we have prepared environments that are populated with data describing their behavior, real tests can be executed on the system under test. During execution, the simulated external services return responses that are exactly the same as those that would be returned by the real external services. Thus the creation of simulated systems, which can be easily maintained without the need for intervention by environment administrators, solves the problems of availability, dependency, and maintenance.

3. WMO case study

In order to demonstrate how this procedure could work in practice, we present a simulation environment of a real business process for the implementation of the Wet Maatschappelijke Ondersteuning Law (WMO). The WMO is a Dutch law for supporting people that have a chronic disease
or disability, so that these people can independently live in their homes and actively take part in everyday life despite their physical limitations. The support that is provided by the WMO typically includes transportation, a wheelchair, or a home modification (e.g. removing doorposts for people using wheelchairs). The responsibility for a WMO request lies with the Dutch municipalities; they handle the complete business process. The process accesses external parties, e.g., insurance companies and doctors for medical advice. (Figure: request/response flows between the Citizen, the Municipality, the Medical doctor, the Insurance company and the Care supplier.)

In the WMO ecosystem, all of the issues mentioned in section 1 can be seen. External services such as the Medical doctor are not available all the time, and especially not for test data, which would pollute the real database or send notifications to real doctors. Furthermore, when testing a Municipality service, a system engineer would need to deploy the latest version of the external services' code on test environments. Firstly, we may not have access to code that is external to our organization. Secondly, we would depend on some other team (i.e., an Ops team) for deployment. And thirdly, the testing environment might not be available. All of these issues can be avoided by simulating all external services and modeling their desired behavior.

For this paper, we have used the Parasoft Virtualize tool to create virtual assets representing all three external services (Medical doctor, Insurance company, Care supplier). The Citizen was simulated by Parasoft SOAtest, which we used to send the requests to the Municipality service. We have thus demonstrated that it is possible to create all needed components by using currently available tools. We have also modeled the virtual assets' behavior to cover basic scenarios and recorded the desired requests and responses. The next step is to find a way to semi-automatically generate all those virtual assets, populate them with data and keep them up to date. As this was in a way a toy example, we are now approaching companies for appropriate systems that could be used as the real case study. We are aware that we have just scratched the tip of the iceberg. Our goal is to have Agile support for a TestOps team, which would enable simple switching between staged environments by merging testing and infrastructure knowledge.

About the authors

Faris Nizamic is a PhD student at the University of Groningen (RuG). His research interests cover the areas of software testing as well as service-oriented, distributed and cloud computing. He holds an MSc in Computing Science from the University of Sarajevo, Bosnia and Herzegovina. For the last two years, he has worked as a Senior Quality Assurance Engineer at NAVTEQ with the main focus on testing of web services and automation of testing processes. [email protected]

Rix Groenboom is a consultant for Parasoft in the area of testing and virtualization of modern SOA, SaaS and cloud-based architectures. He has written a large number of technical articles and presented at many IT industry conferences in Europe and the USA. His core area of expertise is the specification, design and validation of software applications. He holds an MSc and a PhD in Computing Science from the University of Groningen and published a thesis focusing on the formalization of domain knowledge. [email protected]
Dr. Alexander Lazovik is an assistant professor in the Distributed Systems group at the University of Groningen, after being an ERCIM fellow at CWI (NL) and INRIA (F), and an intern at IBM TJ Watson. He obtained a PhD in computer science from the University of Trento (I). His research interests are in the areas of service-oriented and distributed computing. He is also actively interested in pervasive and ubiquitous computing, with an emphasis on the dynamic coordination of context-aware embedded devices. [email protected]
State-driven Testing: An Innovative Approach to Successful UI Test Automation

Jan de Coster
Micro Focus

In an IT world of constant change, shorter release cycles, iterative development processes, and increasing complexity of enterprise applications, testing is a business-critical step in the Software Development Lifecycle (SDLC). Automation of testing processes is a must-have for organizations that plan to remain competitive, as these changes will continue to accelerate and further influence software development.

Building a successful test automation practice is a challenging task for many development organizations. Mature development organizations realize that simple UI test automation techniques like record/playback do not provide the ROI required for modern development projects. Keyword-driven Testing (KDT) is a widely accepted test automation technique that many mature development organizations rely on to overcome the disadvantages of simple record/playback test automation. However, beyond the advantages that KDT frameworks deliver, there are major disadvantages in manageability and complexity inherent to KDT. Many applications require that thousands of automation keywords be developed to make use of KDT. Navigating and constructing test cases based on these keywords can be cumbersome and impractical. Acceptance Testing Frameworks (ATF) such as FitNesse use a similar approach, structuring test cases (acceptance tests) with keywords that are implemented using coded fixtures. These frameworks do not support users in navigating all available actions (keywords). As with KDT frameworks, ATFs do not support developers in structuring fixtures (the implementation of the actions) so that they can be easily reused and maintained.

State-driven Testing (SDT) addresses the maintenance and complexity issues of KDT by providing a UI state-transition model. By defining the state transitions of the user interface, the set of allowable UI actions (keywords) at any given point in a test case is reduced from thousands of allowable actions down to a manageable list of tens of allowable actions. SDT uses a domain-specific language (DSL) to define the test automation framework, which provides a highly maintainable and simple approach to structuring a test framework. (A minimal sketch of this idea follows the author information below.)

About the author

Jan de Coster is a testing guru with over 15 years of IT experience. He currently holds the position of Subject Matter Expert in Quality Solutions at Micro Focus. In this role, Jan provides domain expertise and thought leadership around Software Quality Assurance for Micro Focus' education organization, as well as delivering consultancy and project services in the field throughout Europe. Jan started his career in software development and project management. However, it soon became clear that he had a passion for the qualitative aspect of software delivery, and for the last 11 years he has specialized in integrating processes and tooling to help organizations increase the quality of their software. More recently, Jan has been more closely exploring quality in relation to Agile Test Management and Agile Test Automation, areas he feels are of significant importance to the future success of software delivery. For additional information please visit the Micro Focus website.
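As announced above, a minimal sketch of the state-driven idea in Python; all states, keywords and transitions are invented for illustration and do not come from any particular SDT product:

```python
# Sketch: a UI state-transition model restricting the allowable actions
# (keywords) per state. States and keywords are invented examples.

TRANSITIONS = {
    "LoginScreen": {"enter_credentials": "LoginScreen",
                    "press_login": "Dashboard"},
    "Dashboard":   {"open_orders": "OrderList",
                    "logout": "LoginScreen"},
    "OrderList":   {"select_order": "OrderDetail",
                    "back": "Dashboard"},
    "OrderDetail": {"back": "OrderList"},
}

def allowed_actions(state):
    """Only a handful of keywords are valid here, not the full catalogue."""
    return sorted(TRANSITIONS[state])

def run(test_case, start="LoginScreen"):
    """Execute a test case, rejecting actions the current state disallows."""
    state = start
    for action in test_case:
        if action not in TRANSITIONS[state]:
            raise ValueError(f"{action!r} is not allowed in state {state!r}")
        state = TRANSITIONS[state][action]
    return state

print(allowed_actions("Dashboard"))
print(run(["enter_credentials", "press_login", "open_orders", "select_order"]))
```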
IPv6 Preparation and Deployment in Datacenter Infrastructure A Practical Approach
Paper IPv6 Preparation and Deployment in Datacenter Infrastructure A Practical Approach Marco van der Pal Generic Services Network Infrastructure Services, Capgemini Netherlands B.V., Utrecht, The Netherlands
State of Medical Device Development. 2014 State of Medical Device Development seapine.com 1
State of Medical Device Development 2014 2014 State of Medical Device Development seapine.com 1 Executive Summary The demand for smarter, safer, more connected medical devices has introduced new complexities
Next-Generation Performance Testing with Service Virtualization and Application Performance Management
Next-Generation Performance Testing with Service Virtualization and Application Performance Management By Akshay Rao, Principal Consultant, CA Technologies Summary Current approaches for predicting with
Towards Collaborative Requirements Engineering Tool for ERP product customization
Towards Collaborative Requirements Engineering Tool for ERP product customization Boban Celebic, Ruth Breu, Michael Felderer, Florian Häser Institute of Computer Science, University of Innsbruck 6020 Innsbruck,
PASTA Abstract. Process for Attack S imulation & Threat Assessment Abstract. VerSprite, LLC Copyright 2013
2013 PASTA Abstract Process for Attack S imulation & Threat Assessment Abstract VerSprite, LLC Copyright 2013 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Foundations for Systems Development
Foundations for Systems Development ASSIGNMENT 1 Read this assignment introduction. Then, read Chapter 1, The Systems Development Environment, on pages 2 25 in your textbook. What Is Systems Analysis and
Learning and Academic Analytics in the Realize it System
Learning and Academic Analytics in the Realize it System Colm Howlin CCKF Limited Dublin, Ireland [email protected] Danny Lynch CCKF Limited Dublin, Ireland [email protected] Abstract: Analytics
Keywords document, agile documentation, documentation, Techno functional expert, Team Collaboration, document selection;
Volume 4, Issue 4, April 2014 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com A Document Driven
Benefits of Test Automation for Agile Testing
Benefits of Test Automation for Agile Testing Manu GV 1, Namratha M 2, Pradeep 3 1 Technical Lead-Testing Calsoft Labs, Bangalore, India 2 Assistant Professor, BMSCE, Bangalore, India 3 Software Engineer,
SOFTWARE ENGINEERING INTERVIEW QUESTIONS
SOFTWARE ENGINEERING INTERVIEW QUESTIONS http://www.tutorialspoint.com/software_engineering/software_engineering_interview_questions.htm Copyright tutorialspoint.com Dear readers, these Software Engineering
Basic Trends of Modern Software Development
DITF LDI Lietišķo datorsistēmu programmatūras profesora grupa e-business Solutions Basic Trends of Modern Software Development 2 3 Software Engineering FAQ What is software engineering? An engineering
Chapter 9 Software Evolution
Chapter 9 Software Evolution Summary 1 Topics covered Evolution processes Change processes for software systems Program evolution dynamics Understanding software evolution Software maintenance Making changes
A Comparison of SOA Methodologies Analysis & Design Phases
202 A Comparison of SOA Methodologies Analysis & Design Phases Sandra SVANIDZAITĖ Institute of Mathematics and Informatics, Vilnius University Abstract. Service oriented computing is a new software engineering
Chapter 8 Software Testing
Chapter 8 Software Testing Summary 1 Topics covered Development testing Test-driven development Release testing User testing 2 Program testing Testing is intended to show that a program does what it is
The SPES Methodology Modeling- and Analysis Techniques
The SPES Methodology Modeling- and Analysis Techniques Dr. Wolfgang Böhm Technische Universität München [email protected] Agenda SPES_XT Project Overview Some Basic Notions The SPES Methodology SPES_XT
Project Management Process
Project Management Process Description... 1 STAGE/STEP/TASK SUMMARY LIST... 2 Project Initiation 2 Project Control 4 Project Closure 5 Project Initiation... 7 Step 01: Project Kick Off 10 Step 02: Project
Software Engineering Question Bank
Software Engineering Question Bank 1) What is Software Development Life Cycle? (SDLC) System Development Life Cycle (SDLC) is the overall process of developing information systems through a multi-step
New Generation of Software Development
New Generation of Software Development Terry Hon University of British Columbia 201-2366 Main Mall Vancouver B.C. V6T 1Z4 [email protected] ABSTRACT In this paper, I present a picture of what software development
Software Development for Medical Devices
Overcoming the Challenges of Compliance, Quality and Cost An MKS White Paper Introduction Software is fast becoming the differentiator for manufacturers of medical devices. The rewards available from software
Testing, What is it Good For? Absolutely Everything!
Testing, What is it Good For? Absolutely Everything! An overview of software testing and why it s an essential step in building a good product Beth Schechner Elementool The content of this ebook is provided
Improving Interoperability in Mechatronic Product Developement. Dr. Alain Biahmou, Dr. Arnulf Fröhlich, Dr. Josip Stjepandic
International Conference on Product Lifecycle Management 1 Improving Interoperability in Mechatronic Product Developement Dr. Alain Biahmou, Dr. Arnulf Fröhlich, Dr. Josip Stjepandic PROSTEP AG Dolivostr.
NIST Cloud Computing Program Activities
NIST Cloud Computing Program Overview The NIST Cloud Computing Program includes Strategic and Tactical efforts which were initiated in parallel, and are integrated as shown below: NIST Cloud Computing
Digital Marketplace Services Service Definition
Digital Marketplace Services Service Definition Arrk Limited Manchester Science Park Pencroft Way Manchester M15 6JJ Tel: +44 161 227 9900 Fax: +44 016 227 9966 www.arrkgroup.com Registered In England
MSc & PhD TWENTE GRADUATE SCHOOL STRUCTURED PHD PROGRAMMES
MSc & PhD TWENTE GRADUATE SCHOOL STRUCTURED PHD PROGRAMMES TWENTE GRADUATE SCHOOL STRUCTURED PHD PROGRAMMES Twente Graduate School coordinates high-quality educational programmes based on topics closely
3TU Master of Science in Systems and Control. An essential engineering qualification for future decades
3TU Master of Science in Systems and Control An essential engineering qualification for future decades The challenge of Systems and Control As the complexity and importance of our many industrial structures
Formal Methods for Preserving Privacy for Big Data Extraction Software
Formal Methods for Preserving Privacy for Big Data Extraction Software M. Brian Blake and Iman Saleh Abstract University of Miami, Coral Gables, FL Given the inexpensive nature and increasing availability
Automated Model Based Testing for an Web Applications
Automated Model Based Testing for an Web Applications Agasarpa Mounica, Lokanadham Naidu Vadlamudi Abstract- As the development of web applications plays a major role in our day-to-day life. Modeling the
Doctor of Philosophy in Computer Science
Doctor of Philosophy in Computer Science Background/Rationale The program aims to develop computer scientists who are armed with methods, tools and techniques from both theoretical and systems aspects
Requirements Management
REQUIREMENTS By Harold Halbleib Requirements Management Identify, Specify, Track and Control Requirements Using a Standard Process About the author... Harold Halbleib has a degree in Electrical Engineering
SCADE System 17.0. Technical Data Sheet. System Requirements Analysis. Technical Data Sheet SCADE System 17.0 1
SCADE System 17.0 SCADE System is the product line of the ANSYS Embedded software family of products and solutions that empowers users with a systems design environment for use on systems with high dependability
What mathematical optimization can, and cannot, do for biologists. Steven Kelk Department of Knowledge Engineering (DKE) Maastricht University, NL
What mathematical optimization can, and cannot, do for biologists Steven Kelk Department of Knowledge Engineering (DKE) Maastricht University, NL Introduction There is no shortage of literature about the
Introducing Formal Methods. Software Engineering and Formal Methods
Introducing Formal Methods Formal Methods for Software Specification and Analysis: An Overview 1 Software Engineering and Formal Methods Every Software engineering methodology is based on a recommended
BUSINESS RULES AND GAP ANALYSIS
Leading the Evolution WHITE PAPER BUSINESS RULES AND GAP ANALYSIS Discovery and management of business rules avoids business disruptions WHITE PAPER BUSINESS RULES AND GAP ANALYSIS Business Situation More
Knowledge Base Data Warehouse Methodology
Knowledge Base Data Warehouse Methodology Knowledge Base's data warehousing services can help the client with all phases of understanding, designing, implementing, and maintaining a data warehouse. This
Software Process for QA
Software Process for QA Basic approaches & alternatives CIS 610, W98 / M Young 1/7/98 1 This introduction and overview is intended to provide some basic background on software process (sometimes called
VDM vs. Programming Language Extensions or their Integration
VDM vs. Programming Language Extensions or their Integration Alexander A. Koptelov and Alexander K. Petrenko Institute for System Programming of Russian Academy of Sciences (ISPRAS), B. Communisticheskaya,
C. Wohlin and B. Regnell, "Achieving Industrial Relevance in Software Engineering Education", Proceedings Conference on Software Engineering
C. Wohlin and B. Regnell, "Achieving Industrial Relevance in Software Engineering Education", Proceedings Conference on Software Engineering Education & Training, pp. 16-25, New Orleans, Lousiana, USA,
Reliable Test Effort Estimation
Reliable Test Effort Estimation It's a kind of magic Ruud Teunissen Polteq Test Services BV The Netherlands Objectives for this keynote Make your estimate more reliable Discover the principle of estimation
Intelligent Log Analyzer. André Restivo <[email protected]>
Intelligent Log Analyzer André Restivo 9th January 2003 Abstract Server Administrators often have to analyze server logs to find if something is wrong with their machines.
Increasing Development Knowledge with EPFC
The Eclipse Process Framework Composer Increasing Development Knowledge with EPFC Are all your developers on the same page? Are they all using the best practices and the same best practices for agile,
How service-oriented architecture (SOA) impacts your IT infrastructure
IBM Global Technology Services January 2008 How service-oriented architecture (SOA) impacts your IT infrastructure Satisfying the demands of dynamic business processes Page No.2 Contents 2 Introduction
Introduction to Testing Webservices
Introduction to Testing Webservices Author: Vinod R Patil Abstract Internet revolutionized the way information/data is made available to general public or business partners. Web services complement this
Bidirectional Tracing of Requirements in Embedded Software Development
Bidirectional Tracing of Requirements in Embedded Software Development Barbara Draxler Fachbereich Informatik Universität Salzburg Abstract Nowadays, the increased complexity of embedded systems applications
Introduction to Automated Testing
Introduction to Automated Testing What is Software testing? Examination of a software unit, several integrated software units or an entire software package by running it. execution based on test cases
Healthcare Measurement Analysis Using Data mining Techniques
www.ijecs.in International Journal Of Engineering And Computer Science ISSN:2319-7242 Volume 03 Issue 07 July, 2014 Page No. 7058-7064 Healthcare Measurement Analysis Using Data mining Techniques 1 Dr.A.Shaik
Agile Master Data Management A Better Approach than Trial and Error
Agile Master Data Management A Better Approach than Trial and Error A whitepaper by First San Francisco Partners First San Francisco Partners Whitepaper Executive Summary Market leading corporations are
Framing Requirements for Predictive Analytic Projects with Decision Modeling
Research Brief Framing Requirements for Predictive Analytic Projects with Decision Modeling August 2015 Written by: James Taylor Key Takeaways 1. Organizations are struggling to create a scalable, sustainable
How To Test A Web Based Application Automatically
A General Framework for Testing Web-Based Applications Saeed Abrishami, Mohsen Kahani Computer Engineering Department, Ferdowsi University of Mashhad [email protected] r, [email protected] Abstract Software
Skatteudvalget 2014-15 (2. samling) SAU Alm.del Bilag 48 Offentligt. Programme, Project & Service Management Analysis
Skatteudvalget 2014-15 (2. samling) SAU Alm.del Bilag 48 Offentligt Programme, Project & Service Management Analysis Table of Content 1 Executive Summary... 3 1.1 Scope of Work... 3 1.2 Methodology for
Testing of safety-critical software some principles
1(60) Testing of safety-critical software some principles Emerging Trends in Software Testing: autumn 2012 Matti Vuori, Tampere University of Technology 27.11.2012 Contents 1/4 Topics of this lecture 6
Metrics in Software Test Planning and Test Design Processes
Master Thesis Software Engineering Thesis no: MSE-2007:02 January 2007 Metrics in Software Test Planning and Test Design Processes Wasif Afzal School of Engineering Blekinge Institute of Technology Box
Software-based medical devices from defibrillators
C O V E R F E A T U R E Coping with Defective Software in Medical Devices Steven R. Rakitin Software Quality Consulting Inc. Embedding defective software in medical devices increases safety risks. Given
3SL. Requirements Definition and Management Using Cradle
3SL Requirements Definition and Management Using Cradle November 2014 1 1 Introduction This white paper describes Requirements Definition and Management activities for system/product development and modification
Bringing Value to the Organization with Performance Testing
Bringing Value to the Organization with Performance Testing Michael Lawler NueVista Group 1 Today s Agenda Explore the benefits of a properly performed performance test Understand the basic elements of
Instructional Design Framework CSE: Unit 1 Lesson 1
Instructional Design Framework Stage 1 Stage 2 Stage 3 If the desired end result is for learners to then you need evidence of the learners ability to then the learning events need to. Stage 1 Desired Results
How To Develop A Telelogic Harmony/Esw Project
White paper October 2008 The Telelogic Harmony/ESW process for realtime and embedded development. Bruce Powel Douglass, IBM Page 2 Contents 3 Overview 4 Telelogic Harmony/ESW core principles 6 Harmony/ESW
Systems Engineering with RUP: Process Adoption in the Aerospace/ Defense Industry
March 2004 Rational Systems Engineering with RUP: Process Adoption in the Aerospace/ Defense Industry Why companies do it, how they do it, and what they get for their effort By Dave Brown, Karla Ducharme,
IBM Software IBM Business Process Management Suite. Increase business agility with the IBM Business Process Management Suite
IBM Software IBM Business Process Management Suite Increase business agility with the IBM Business Process Management Suite 2 Increase business agility with the IBM Business Process Management Suite We
How To Test Automatically
Automated Model-Based Testing of Embedded Real-Time Systems Jan Peleska [email protected] University of Bremen Bieleschweig Workshop 7 2006-05-05 Outline Technologie-Zentrum Informatik Objectives Basic concepts
Ten steps to better requirements management.
White paper June 2009 Ten steps to better requirements management. Dominic Tavassoli, IBM Actionable enterprise architecture management Page 2 Contents 2 Introduction 2 Defining a good requirement 3 Ten
Realizing business flexibility through integrated SOA policy management.
SOA policy management White paper April 2009 Realizing business flexibility through integrated How integrated management supports business flexibility, consistency and accountability John Falkl, distinguished
Bottlenecks in Agile Software Development Identified Using Theory of Constraints (TOC) Principles
Master thesis in Applied Information Technology REPORT NO. 2008:014 ISSN: 1651-4769 Department of Applied Information Technology or Department of Computer Science Bottlenecks in Agile Software Development
ASL 2, An introduction
ASL 2, An introduction Machteld Meijer and Louk Peters - Introduction - Why ASL? - What is ASL? - ASL framework - What can you do with ASL? - ASL in relation to other frameworks - More Info Introduction
Software Engineering. So(ware Evolu1on
Software Engineering So(ware Evolu1on 1 Software change Software change is inevitable New requirements emerge when the software is used; The business environment changes; Errors must be repaired; New computers
The Role of Information Technology Studies in Software Product Quality Improvement
The Role of Information Technology Studies in Software Product Quality Improvement RUDITE CEVERE, Dr.sc.comp., Professor Faculty of Information Technologies SANDRA SPROGE, Dr.sc.ing., Head of Department
Masters in Human Computer Interaction
Masters in Human Computer Interaction Programme Requirements Taught Element, and PG Diploma in Human Computer Interaction: 120 credits: IS5101 CS5001 CS5040 CS5041 CS5042 or CS5044 up to 30 credits from
Design of automatic testing tool for railway signalling systems software safety assessment
Risk Analysis VI 513 Design of automatic testing tool for railway signalling systems software safety assessment J.-G. Hwang 1, H.-J. Jo 1 & H.-S. Kim 2 1 Train Control Research Team, Korea Railroad Research
Module 2. Software Life Cycle Model. Version 2 CSE IIT, Kharagpur
Module 2 Software Life Cycle Model Lesson 4 Prototyping and Spiral Life Cycle Models Specific Instructional Objectives At the end of this lesson the student will be able to: Explain what a prototype is.
a new generation software test automation framework - CIVIM
a new generation software test automation framework - CIVIM Software Testing is the last phase in software development lifecycle which has high impact on the quality of the final product delivered to the
Measurement Information Model
mcgarry02.qxd 9/7/01 1:27 PM Page 13 2 Information Model This chapter describes one of the fundamental measurement concepts of Practical Software, the Information Model. The Information Model provides
Modern SOA Testing. A Practitioners Guide to. July 2011
A Practitioners Guide to Modern SOA Testing Gaurish Hattangadi Abstract Today s dynamic business needs are shaping the modern IT industry. Lower costs and faster time to market have propelled forward market-changing
What is a life cycle model?
What is a life cycle model? Framework under which a software product is going to be developed. Defines the phases that the product under development will go through. Identifies activities involved in each
IBM Rational systems and software solutions for the medical device industry
IBM Software August 2011 IBM Rational systems and software solutions for the medical device industry Improve processes, manage IEC 61508 and IEC 62304 standards, develop quality products Highlights Manage
