Software Engineering




Que1: What is a software process? Define its steps.

Ans: In order for software to be consistently well engineered, its development must be conducted in an orderly process. It is sometimes possible for a small software product to be developed without a well-defined process. However, for a software project of any substantial size, involving more than a few people, a good process is essential. The process can be viewed as a road map by which the project participants understand where they are going and how they are going to get there.

There is general agreement among software engineers on the major steps of a software process. Figure 1 is a graphical depiction of these steps. As discussed in Chapter 1, the first three steps in the process diagram coincide with the basic steps of the problem-solving process, as shown in Table 4. The fourth step in the process is the post-development phase, where the product is deployed to its users, maintained as necessary, and enhanced to meet evolving requirements.

The first two steps of the process are often referred to, respectively, as the "what" and "how" of software development. The "Analyze and Specify" step defines what problem is to be solved; the "Design and Implement" step entails how the problem is solved.

Figure 1: Diagram of general software process steps.

Table 4: Correspondence between problem-solving and software processes.

Problem-Solving Phase    Software Process Step
Define the Problem       Analyze and Specify Software Requirements
Solve the Problem        Design and Implement Software Product
Verify the Solution      Test that Product Meets Requirements

While these steps are common in most definitions of software process, there are wide variations in how process details are defined. The variations stem from the kind of software being developed and the people doing the development. For example, the process for developing a well-understood business application with a highly experienced team can be quite different from the process of developing an experimental artificial intelligence program with a group of academic researchers. Among authors who write about software engineering processes, there is a good deal of variation in process details: in terminology, in how processes are structured, and in the emphasis placed on different aspects of the process. This chapter will define key process terminology and present a specific process that is generally applicable to a range of end-user software. The chapter will also discuss alternative approaches to defining software engineering processes.

Independent of technical details, there are general quality criteria that apply to any good process. These criteria include the following:

- The process is suited to the people involved in a project and the type of software being developed.
- All project participants clearly understand the process, or at minimum the part of the process in which they are directly involved.
- If possible, the process is defined based on the experience of engineers who have participated in successful projects in the past, in an application domain similar to the project at hand.
- The process is subject to regular evaluation, so that adjustments can be made as necessary during a project, and so the process can be improved for future projects.
As presented in this chapter, with neat graphs and tables, the software development process is intended to appear quite orderly. In actual practice, the process can get messy. Developing software often involves people of diverse backgrounds, varying skills, and differing viewpoints on the product to be developed. Added to this are the facts that software projects can take a long time to complete and cost a lot of money. Given these facts, software development can be quite challenging, and at times trying for those doing it. Having a well-defined software process can help participants meet the challenges and minimize the trying times. However, any software process must be conducted by people who are willing and able to work effectively with one another. Effective human communication is absolutely essential to any software development project, whatever specific technical process is employed.

Software Process: This book presents and follows a specific software process. The purpose of presenting a particular process is three-fold:

- to define a process that is useful for a broad class of end-user software, including the example software system presented in the book
- to provide an organizational framework for presenting technical subject matter
- to give a concrete example of process definition that can be used for guidance in defining other software processes.

Defining a software process entails the following major tasks: defining the process steps, defining process enactment, and defining the artifacts that the steps produce. Process steps and their enactment are defined here in Chapter 2. The structure of software artifacts is presented in Chapter 3. An important point to make at the outset is that this is "a" software process, not "the" process. There is in fact no single process that is universally applicable to all software. The process employed in this book is useful for a reasonably wide range of end-user products.
However, this process, as any other, must be adapted to suit the needs of a particular development team working on a particular project. A good way to regard the process is as a representative example of process definition. Further discussion of process adaptation appears later in the chapter.

One of the most important things to say about software process is "use one that works". This means that technical details of a process and its methodologies are often less important than how well the process suits the project at hand. Above all, the process should be one that everyone thoroughly understands and can be productive using. There is no sense having management dictate a process from on high that the customers and technical staff cannot live with. The management and technical leaders who define a software process must understand well the people who will use it, and consult with them as necessary before, during, and after the establishment of a process. In order for all this to happen, the process must be clearly defined, which is what this chapter is about.

The top-level steps of the book's process are shown in Figure 6. These steps are a refinement of the general software process presented at the beginning of the chapter in Figure 1. The refined process has the following enhancements compared to the more general one: the "Analyze and Specify" step has been broken down into two separate steps; similarly, the "Design and Implement" step has been broken into two separate steps.

Figure 6: Top-level steps of the process used in the book.

Process Steps: Figure 7 is a one-level expansion of the ordered process steps. The ordered steps can be viewed as a process of successive refinement, from an initial idea through to a deployable software product. In this sense, the process is based on a divide-and-conquer strategy, where the focus of each step is a particular aspect of the overall development effort. At each step, the developers have the responsibility to focus on what is important at that level of refinement.
They also have the freedom to ignore, or give only limited consideration to, what is important at other levels of refinement. Table 5 summarizes the responsibilities and freedoms for each top-level step of the ordered process.

The focus of the Analyze step is on user requirements. The needs of the user are the primary concern at this level. Concern with details of program design and implementation should be limited to what is feasible to implement. For projects that are on a particularly tight budget or time line, requirements analysts may need to focus more on implementation feasibility. Also, it may be difficult to estimate implementation feasibility if the analysis team is inexperienced in the type of software being built or in the application domain. In such cases, a more iterative development approach can be useful, as is discussed a bit later in this chapter.

The focus of the Specify step is on building a "real-world" software model. Real-world in this context means that the model defines the parts of the software that are directly relevant to the end user, without program implementation details.

[Figure 7: Ordered process steps expanded one level -- the sub-steps of Analyze, Specify, Design, Implement, Prototype, and Deploy, with iteration paths back to earlier steps as necessary.]

Que2: What is a software life cycle model?

Ans: A software life cycle model is either a descriptive or prescriptive characterization of how software is or should be developed. A descriptive model describes the history of how a particular software system was developed. Descriptive models may be used as the basis for understanding and improving software development processes, or for building empirically grounded prescriptive models (Curtis, Krasner, Iscoe, 1988).
A prescriptive model prescribes how a new software system should be developed. Prescriptive models are used as guidelines or frameworks to organize and structure how software development activities should be performed, and in what order. Typically, it is easier and more common to articulate a prescriptive life cycle model for how software systems should be developed. This is possible since most such models are intuitive or well reasoned. This means that many idiosyncratic details that describe how a software system is built in practice can be ignored, generalized, or deferred for later consideration. This, of course, should raise concern for the relative validity and robustness of such life cycle models when developing different kinds of application systems, in different kinds of development settings, using different programming languages, with differentially skilled staff, etc. However, prescriptive models are also used to package the development tasks and techniques for using a given set of software engineering tools or environments during a development project.

Descriptive life cycle models, on the other hand, characterize how particular software systems are actually developed in specific settings. As such, they are less common and more difficult to articulate, for an obvious reason: one must observe or collect data throughout the life cycle of a software system, a period of elapsed time often measured in years. Also, descriptive models are specific to the systems observed and only generalizable through systematic comparative analysis. This suggests that prescriptive software life cycle models will dominate attention until a sufficient base of observational data is available to articulate empirically grounded descriptive life cycle models.

These two characterizations suggest that there are a variety of purposes for articulating software life cycle models. Such a model can serve as a:

- Guideline to organize, plan, staff, budget, schedule, and manage software project work over organizational time, space, and computing environments.
- Prescriptive outline for what documents to produce for delivery to the client.
- Basis for determining what software engineering tools and methodologies will be most appropriate to support different life cycle activities.
- Framework for analyzing or estimating patterns of resource allocation and consumption during the software life cycle (Boehm 1981).
- Basis for conducting empirical studies to determine what affects software productivity, cost, and overall quality.

Que3: What is a software process model?
Ans: In contrast to software life cycle models, software process models often represent a networked sequence of activities, objects, transformations, and events that embody strategies for accomplishing software evolution. Such models can be used to develop more precise and formalized descriptions of software life cycle activities. Their power emerges from their utilization of a sufficiently rich notation, syntax, or semantics, often suitable for computational processing.

Software process networks can be viewed as representing multiple interconnected task chains (Kling 1982, Garg 1989). Task chains represent a non-linear sequence of actions that structure and transform available computational objects (resources) into intermediate or finished products. Non-linearity implies that the sequence of actions may be non-deterministic, iterative, accommodate multiple or parallel alternatives, and be partially ordered to account for incremental progress. Task actions in turn can be viewed as non-linear sequences of primitive actions which denote atomic units of computing work, such as a user's selection of a command or menu entry using a mouse or keyboard. Winograd and others have referred to these units of cooperative work between people and computers as "structured discourses of work" (Winograd 1986), while task chains have become popularized under the name of "workflow" (Bolcer 1998).

Task chains can be employed to characterize either prescriptive or descriptive action sequences. Prescriptive task chains are idealized plans of what actions should be accomplished, and in what order. For example, a task chain for the activity of object-oriented software design might include the following task actions:

- Develop an informal narrative specification of the system.
- Identify the objects and their attributes.
- Identify the operations on the objects.
- Identify the interfaces between objects, attributes, or operations.
- Implement the operations.
Clearly, this sequence of actions could entail multiple iterations and non-procedural primitive action invocations in the course of incrementally progressing toward an object-oriented software design. Task chains join or split into other task chains, resulting in an overall production network or web (Kling 1982). The production web represents the "organizational production system" that transforms raw computational, cognitive, and other organizational resources into assembled, integrated, and usable software systems. The production lattice therefore structures how a software system is developed, used, and maintained.
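The notion of a task chain as a partially ordered set of actions can be made concrete with a small sketch. The task names below are taken from the object-oriented design chain in the text; the choice of Python and of a dependency mapping as the representation are illustrative assumptions, not part of any published process notation.

```python
from graphlib import TopologicalSorter

# Prescriptive task chain for object-oriented design (tasks from the text).
# Each task maps to the set of tasks that must complete before it may start;
# tasks with no ordering between them would surface in the same "ready" batch,
# modeling the parallel alternatives a task chain permits.
chain = {
    "narrative specification": set(),
    "identify objects and attributes": {"narrative specification"},
    "identify operations": {"identify objects and attributes"},
    "identify interfaces": {"identify objects and attributes",
                            "identify operations"},
    "implement operations": {"identify interfaces"},
}

ts = TopologicalSorter(chain)
ts.prepare()
order = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # tasks whose predecessors are all done
    order.extend(ready)
    ts.done(*ready)

print(order)
```

Iteration, which the text emphasizes, would correspond to re-entering a completed task into a fresh chain; a strict topological walk like this one captures only the prescriptive, forward-flowing plan.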

Que4: Define the Waterfall Model in detail.

Ans: One of the earliest versions of a software process was published by Winston Royce in 1970 [Royce 70]. Figure 21 is a diagram of the process as originally presented. The diagram has been dubbed a "waterfall chart", due to its depiction of process flow downward from one step to the next. The steps shown in the original waterfall diagram are notably similar to those still widely used today. Table 8 shows a reasonable correspondence between the waterfall steps and the ordered steps shown in Figure 6.

In its position as one of the earliest published models, the waterfall process has been the subject of many critiques. One of the most frequently raised criticisms is that the model is too inflexible in its strictly sequential style of processing. It is unrealistic to expect that each step will be fully completed before the next is begun. Water must sometimes flow uphill.

Figure 21: The original waterfall process model.

Waterfall Step                                          Ordered Process Step
System Requirements, Software Requirements, Analysis    Analyze, Specify
Program Design                                          Design
Coding                                                  Implement
Testing                                                 Test Implementation
Operations                                              Deploy

Table 8: Waterfall steps in correspondence to the ordered steps in Figure 6.

To his credit, Royce was aware that process iteration may be necessary. However, he suggested that it be limited as much as possible, and ideally occur only between immediately adjacent steps. This has led to a widely-held perception that the waterfall model is strictly sequential. However, it is easy enough to add iterative paths in a waterfall chart, and to consider iterative enactment to be a regular part of the process. This is essentially what the iterative enactment presented in Section 2.1.3 is about.

A more significant criticism of the waterfall model is the relegation of testing to near the end of the process. Many have observed that waiting until after implementation is too late to start testing. Errors in artifacts produced in the earlier steps can be very difficult to detect if they become embedded within the implementation. In most process models that have succeeded the waterfall, testing has become a more pervasive part of the process. In addition, the need for other pervasive steps has become clear.

Another critique of the waterfall as originally presented is its lack of detail in how each step is conducted. For example, the high-level waterfall presentation says little about how the PROGRAM DESIGN step transforms the result of the ANALYSIS step into a design artifact. As process modeling has matured, the presentation of new models has often included details of process enactment that were missing from the original waterfall. Given its status as first on the block, the waterfall process has been fodder for much discussion. Despite a good deal of criticism, it is fair to say that many of the process basics presented by Royce are still valid today.
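Royce's suggestion that iteration be limited to immediately adjacent steps can be stated precisely: from step i the process may move forward to step i+1 or back to step i-1, and nowhere else. The sketch below encodes that rule; the step names follow the original waterfall, while the function itself is only an illustration, not anything Royce published.

```python
# Royce's original waterfall steps, in order.
WATERFALL = ["System Requirements", "Software Requirements", "Analysis",
             "Program Design", "Coding", "Testing", "Operations"]

def allowed_transition(current: str, target: str) -> bool:
    """Royce's restricted iteration: move forward or backward by one step only."""
    i, j = WATERFALL.index(current), WATERFALL.index(target)
    return abs(i - j) == 1

print(allowed_transition("Coding", "Testing"))          # forward one step: allowed
print(allowed_transition("Testing", "Program Design"))  # skipping back two steps: not allowed
```

The widely criticized "strictly sequential" reading of the model corresponds to tightening the rule to `j - i == 1`; adding arbitrary iterative paths corresponds to relaxing it entirely.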
The waterfall model remains an enduring benchmark against which other models are regularly compared, even if the comparison is negative. As illustrated in Table 8, there is commonality between the process used in the book and the original waterfall model. The significant advances in the book's process address key criticisms of the waterfall, namely: pervasive testing and other pervasive process steps; more process iteration; and definition of process details.

Stepwise Refinement: In this approach, software systems are developed through the progressive refinement and enhancement of high-level system specifications into source code components (Wirth 1971, Mili 1986). However, the choice and order of which steps to choose and which refinements to apply remain unstated. Instead, formalization is expected to emerge within the heuristics and skills that are acquired and applied through increasingly competent practice. This model has been most effective and widely applied in helping to teach individual programmers how to organize their software development work. Many interpretations of the classic software life cycle thus subsume this approach within their design and implementation.

Incremental Development and Release: Developing systems through incremental release requires first providing essential operating functions, then providing system users with improved and more capable versions of the system at regular intervals (Basili 1975). This model combines the classic software life cycle with iterative enhancement at the level of system development organization. It also supports a strategy to periodically distribute software maintenance updates and services to dispersed user communities. This in turn accommodates the provision of standard software maintenance contracts. It is therefore a popular model of software evolution used by many commercial software firms and system vendors.
This approach has also been extended through the use of software prototyping tools and techniques (described later), which more directly support incremental development and iterative release for early and ongoing user feedback and evaluation (Graham 1989). Figure 2 provides an example view of an incremental development, build, and release model for engineering large Ada-based software systems, developed by Royce (1990) at TRW.
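The incremental-release idea above can be sketched as repeatedly slicing a priority-ordered feature backlog: the first release carries the essential operating functions, and each later release adds the next batch at a regular interval. The backlog items, the batch size, and the `plan_releases` helper are all invented for illustration.

```python
def plan_releases(backlog, batch_size):
    """Split a priority-ordered backlog into successive incremental releases."""
    return [backlog[i:i + batch_size] for i in range(0, len(backlog), batch_size)]

backlog = ["login", "create order", "view order",     # essential operating functions
           "search", "reports", "export", "theming"]  # later, more capable versions
releases = plan_releases(backlog, 3)
for n, features in enumerate(releases, start=1):
    print(f"release {n}: {features}")
```

The key property the model asks for is visible in the output: release 1 is already usable on its own, and each subsequent release strictly extends what users received before.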

Elsewhere, the Cleanroom software development method in use at IBM and NASA laboratories provides incremental release of software functions and/or subsystems (developed through stepwise refinement) to separate in-house quality assurance teams, which apply statistical measures and analyses as the basis for certifying high-quality software systems (Selby 1987, Mills 1987).

Que5: Explain the Spiral Model in detail.

Ans: The spiral process model was developed in 1988 by Barry Boehm [Boehm 88]. Figure 22 is a diagram of the model, as presented originally by its author. There are four pervasive activities in the spiral model, shown in the four quadrants of the diagram.

Figure 22: Spiral process diagram as originally presented.

- Determine objectives -- set out what is to be accomplished in the next phase of development.
- Evaluate risks -- determine the risks involved, using prototyping to help evaluate alternatives.
- Develop and validate -- develop the next phase of the project and validate the resulting artifacts.

- Plan -- plan for the next phase of the project, based on the results of the just-completed phase.

Each loop of the spiral represents a phase of development. The phases are indicated in the top part of the lower right quadrant:

- Concept of operation -- formulate broadly what the software is intended to do.
- Software requirements -- analyze user requirements.
- Software product design -- design the high-level software architecture.
- Detailed design and code -- implement the product.

An important aspect of the spiral model not directly evident in the diagram is the dynamic development of process details. Part of the planning activities for each phase includes determining the process details for the coming phase. For example, the Requirements and Life-Cycle Plans lay out the details for the requirements analysis phase of the process. The Development Plan contains details of what type of design and implementation process to follow. The plans also determine how iterative the process will be, i.e., how many expansions of the spiral will be enacted.

To provide a common frame of reference, Figure 23 shows the spiral process in the style of process diagram used in the first part of the chapter. The diagram illustrates that there is nothing particularly important about the spiral shape of the model. Compared to other process models, the two most distinctive aspects of the spiral process are these:

- the continual assessment of risk to guide process enactment
- the dynamic development of process details, based on risk assessment and planning

Risks are any adverse circumstances that can impair the development process or negatively affect product quality. An important part of risk analysis is to identify the high-risk problems versus those of lower risk. In doing so, process planning can then proceed to avoid the high-risk areas. The development of prototypes is part of the risk identification process.
For example, if there are two possible approaches to development, a prototype using each approach can be developed to determine which approach shows better promise. From a business and managerial perspective, risk assessment is a discipline in its own right. Hence, in order for a spiral process to be successfully enacted, the development team must include individuals who are skilled in risk analysis.

The key principles of the spiral model are treated with less emphasis in the process used in this book. In the book's process, risk assessment is performed as part of the Analyze step, not pervasively. Also, dynamic process development is de-emphasized, since most process details are defined in advance. Dynamic process updating is included as a pervasive activity, but as a lower sub-step within Manage (see Figure 14). Spiral practices could be more fully incorporated into the book's process by elevating risk analysis and dynamic process development to immediate sub-steps of Manage, or to top-level steps.

The spiral model of software development and evolution represents a risk-driven approach to software process analysis and structuring (Boehm 1987, Boehm et al. 1998). This approach, developed by Barry Boehm, incorporates elements of specification-driven and prototype-driven process methods, together with the classic software life cycle. It does so by representing iterative development cycles as an expanding spiral, with inner cycles denoting early system analysis and prototyping, and outer cycles denoting the classic software life cycle. The radial dimension denotes cumulative development costs, and the angular dimension denotes progress made in accomplishing each development spiral. See Figure 3.
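The spiral's control logic, spiraling out only as far as the remaining risk warrants, can be sketched as a loop over the quadrant activities. The phase list follows the text; the numeric risk scores, the threshold, and the `spiral` function are invented for illustration. Real risk assessment is a judgment-driven discipline, as the text notes, not a single number.

```python
# Each cycle of the loop stands for one sweep of the spiral's four quadrants.
PHASES = ["concept of operation", "software requirements",
          "software product design", "detailed design and code"]

def spiral(assess_risk, risk_threshold=0.9):
    completed = []
    for phase in PHASES:
        # 1. determine objectives for this phase (elided in this sketch)
        risk = assess_risk(phase)    # 2. evaluate risks (prototyping helps here)
        if risk > risk_threshold:
            break                    # remaining risk too high: stop and replan
        completed.append(phase)      # 3. develop and validate this phase
        # 4. plan the next cycle based on this one's results (the loop itself)
    return completed

# Illustrative scores: high assessed risk at detailed design halts the spiral early.
scores = {"concept of operation": 0.2, "software requirements": 0.4,
          "software product design": 0.6, "detailed design and code": 0.95}
print(spiral(scores.get))
```

Passing a risk function that always returns a low score makes the process spiral out through all four phases, mirroring the text's observation that the classic life cycle is followed in full only when risks permit.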

Risk analysis, which seeks to identify situations that might cause a development effort to fail or go over budget or schedule, occurs during each spiral cycle. In each cycle, it represents roughly the same amount of angular displacement, while the displaced sweep volume denotes increasing levels of effort required for risk analysis. System development in this model therefore spirals out only so far as needed according to the risk that must be managed. Alternatively, the spiral model indicates that the classic software life cycle model need only be followed when risks are greatest, and after early system prototyping as a way of reducing these risks, albeit at increased cost. The insights that the spiral model offered have in turn influenced standard software life cycle process models, such as ISO 12207, noted earlier. Finally, efforts are now in progress to integrate computer-based support for stakeholder negotiations and capture of tradeoff rationales into an operational form of the WinWin Spiral Model (Boehm et al. 1998). (See Risk Management in Software Development.)

Miscellaneous Process Models: Many variations of the non-operational life cycle and process models have been proposed, and appear in the proceedings of the international software process workshops sponsored by the ACM, IEEE, and Software Process Association. These include fully interconnected life cycle models that accommodate transitions between any two phases, subject to satisfaction of their pre- and post-conditions, as well as compound variations on the traditional life cycle and continuous transformation models. However, reports indicate that in general most software process models are exploratory, though there is now a growing base of experimental or industrial experience with these models (Basili 1988, Raffo et al. 1999, Raffo and Scacchi 2000).

Que6: What is software reliability? Define in detail.

Ans: Software permeates our daily life.
There is probably no other human-made artifact more omnipresent than software in our modern society. It has become a crucial part of many aspects of daily life: home appliances, telecommunications, automobiles, airplanes, shopping, auditing, web teaching, personal entertainment, and so on. In particular, science and technology demand high-quality software for making improvements and breakthroughs. The size and complexity of software systems have grown dramatically during the past few decades, and the trend will certainly continue in the future. Data from industry show that the size of the software for various systems and applications has been growing exponentially for the past 40 years [20]. The trend of such growth in the telecommunication, business, defense, and transportation industries shows a compound growth rate of ten times every five years. Because of this ever-increasing dependency, software failures can lead to serious, even fatal, consequences in safety-critical systems as well as in normal business. Previous software failures have impaired several high-visibility programs and have led to loss of business [28]. Ubiquitous software is also invisible, and its invisible nature makes it both beneficial and harmful. On the positive side, systems around us work seamlessly thanks to the smooth and swift execution of software. On the negative side, we often do not know when, where, and how software has failed, or will fail. Consequently, while reliability engineering for hardware and physical systems has continuously improved, reliability engineering for software has not really lived up to our expectations over the years. This situation is frustrating as well as encouraging. It is frustrating because the software crisis identified as early as the 1960s still stubbornly stays with us, and software engineering has not fully evolved into a real engineering discipline.
Human judgments and subjective preferences, instead of physical laws and rigorous procedures, dominate many decision-making processes in software engineering. The situation is particularly critical in software reliability engineering. Reliability is probably the most important attribute for any engineering discipline to claim, as it quantitatively measures quality, and a quantity can be properly engineered. Yet software reliability engineering, as elaborated in later sections, is not yet fully delivering on its promise. Nevertheless, there is an encouraging aspect to this situation. The demands on, techniques of, and enhancements to software are continually increasing, and so is the need to understand its reliability. The unsettled software crisis poses tremendous opportunities for software engineering researchers as well as practitioners. The ability to manage quality software production is not only a necessity, but also a key distinguishing factor in maintaining a competitive advantage for modern businesses. Software reliability engineering is centered on a key attribute, software reliability, which is defined as the probability of failure-free software operation for a specified period of time in a specified environment [2]. Among other attributes of software quality such as functionality, usability, capability, and maintainability, software reliability is generally accepted as the major factor in software quality, since it quantifies software failures, which can make a powerful system inoperative. Software reliability engineering (SRE) is therefore defined as the quantitative study of the operational behavior of software-based systems with respect to user requirements concerning reliability. As a proven technique, SRE has been adopted either as standard or as best current practice by more than 50 organizations in their software projects and reports [33], including AT&T, Lucent, IBM, NASA, Microsoft, and many others in Europe, Asia, and North America. However, this number is still relatively small compared to the large number of software producers in the world. Existing SRE techniques suffer from a number of weaknesses. First of all, current SRE techniques collect failure data during the integration testing or system testing phases. Failure data collected during these late testing phases may be too late for fundamental design changes. Secondly, the failure data collected during in-house testing may be limited, and may not represent failures that would be uncovered under the actual operational environment. This is especially true for high-quality software systems, which require extensive and wide-ranging testing. Reliability estimation and prediction using such restricted testing data can cause accuracy problems.
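The definition above can be made concrete with the simplest quantitative model: if failures are assumed to arrive at a constant rate λ (an illustrative assumption, chosen here only for the sketch; practical SRE uses many richer models), the probability of failure-free operation over a period t is R(t) = e^(-λt), and the mean time to failure is 1/λ. A minimal sketch:

```python
import math

def reliability(failure_rate: float, t: float) -> float:
    """R(t) = exp(-lambda * t): probability of failure-free
    operation for a period t, assuming a constant failure rate."""
    return math.exp(-failure_rate * t)

def mttf(failure_rate: float) -> float:
    """Mean time to failure for the constant-rate (exponential) model."""
    return 1.0 / failure_rate

# Hypothetical figure: 0.001 failures per hour of operation.
lam = 0.001
print(round(reliability(lam, 100), 4))  # survival probability over 100 hours
print(mttf(lam))                        # mean time to failure in hours
```

Under this model, a component with a failure rate of 0.001 failures per hour has roughly a 90% chance of running failure-free for 100 hours, which illustrates why "a specified period of time in a specified environment" is part of the definition.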
Thirdly, current SRE techniques and modeling methods are based on some unrealistic assumptions that make the reliability estimation too optimistic relative to real situations. Of course, the existing software reliability models have had their successes; but every model can find successful cases to justify its existence. Without cross-industry validation, the modeling exercise may become merely of intellectual interest and will not be widely adopted in industry. Thus, although SRE has been around for a while, credible software reliability techniques are still urgently needed, particularly for modern software systems [24]. In the following sections we will discuss the past, the present, and the future of software reliability engineering. We first survey what techniques have been proposed and applied in the past, then describe the current trends and what problems and concerns remain. Finally, we propose possible future directions in software reliability engineering. Software reliability and system reliability. Although the nature of software faults is different from that of hardware faults, the theoretical foundation of software reliability comes from hardware reliability techniques. Previous work has focused on extending the classical reliability theories from hardware to software, so that by employing familiar mathematical modeling schemes, we can establish a software reliability framework consistently from the same viewpoints as hardware. The advantages of such modeling approaches are: (1) The physical meaning of the failure mechanism can be properly interpreted, so that the effect of failures on reliability, as measured in the form of failure rates, can be directly applied to the reliability models. (2) The combination of hardware reliability and software reliability to form system reliability models and measures can be provided in a unified theory.
Even though the actual mechanisms of the various causes of hardware faults and software faults may be different, a single formulation can be employed from the reliability modeling and statistical estimation viewpoints. (3) System reliability models inherently engage system structure and modular design in block diagrams. The resulting reliability modeling process is not only intuitive (how components contribute to the overall reliability can be visualized), but also informative (reliability-critical components can be quickly identified). The major drawbacks, however, are also obvious. First of all, while hardware failures may occur independently (or approximately so), software failures do not happen independently. The interdependency of software failures is very hard to describe in detail or to model precisely. Furthermore, similar hardware systems are developed from similar specifications, and hardware failures, usually caused by hardware defects, are repeatable and predictable. On the other hand, software systems are typically one-of-a-kind. Even similar software systems, or different versions of the same software, can be based on quite different specifications. Consequently, software failures, usually caused by human design faults, seldom repeat in exactly the same way or in any predictable pattern. Therefore, while failure mode and effects analysis (FMEA) and failure mode, effects, and criticality analysis (FMECA) have long been established for hardware systems, they are not very well understood for software systems.
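The block-diagram intuition in point (3) can be sketched numerically. The sketch below assumes component failures are independent, which, as the preceding discussion of drawbacks notes, is exactly the assumption that breaks down for software; the 0.99 and 0.95 reliability figures are hypothetical.

```python
from functools import reduce

def series(reliabilities):
    """Series structure: all components must work, R = R1 * R2 * ... * Rn."""
    return reduce(lambda acc, r: acc * r, reliabilities, 1.0)

def parallel(reliabilities):
    """Parallel (redundant) structure: the system fails only if every
    component fails, R = 1 - (1 - R1)(1 - R2)...(1 - Rn)."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), reliabilities, 1.0)

# Hypothetical block diagram: a hardware unit (R = 0.99) in series with
# a software component deployed as a two-version redundant pair (R = 0.95 each).
redundant_sw = parallel([0.95, 0.95])   # 1 - 0.05**2 = 0.9975
system = series([0.99, redundant_sw])   # 0.99 * 0.9975
print(round(system, 6))
```

Reading the numbers off such a model is what makes block diagrams informative: the hardware unit, not the redundant software pair, is now the reliability-critical component.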
Que7: What are metrics, and how can measurement be performed on software? Ans: Metrics and measurements have been an important part of the software development process, not only for software project budget planning but also for software quality assurance purposes. As software complexity and software quality are highly related to software reliability, measurements of software complexity and quality attributes have been explored for early prediction of software reliability [39]. Static as well as dynamic program complexity measurements have been collected, such as lines of code, number of operators, relative program complexity, functional complexity, operational complexity, and so on. The complexity metrics can be further included in software reliability models for early reliability prediction, for example, to predict the initial software fault density and failure rate. In software reliability growth modeling (SRGM), the two measurements related to reliability are: (1) the number of failures in a time period; and (2) the time between failures. An important advancement of SRGM is the notion of the time during which failure data are recorded. It has been demonstrated that CPU time is more suitable and more accurate than calendar time for recording failures, since it faithfully represents the actual execution time of the software [35]. More recently, other forms of metrics for testing effort have been incorporated into software reliability modeling to improve prediction accuracy [8, 18]. One key problem with software metrics and measurements is that they are not consistently defined and interpreted, again due to the lack of physical attributes of software. The achieved reliability measures may differ for different applications, yielding inconclusive results. A unified ontology to identify, describe, incorporate, and understand reliability-related software metrics is therefore urgently needed. Reliability for emerging software applications. Software engineering targeted at general systems may be too ambitious.
It may find more successful applications if it is domain-specific. In this Future of Software Engineering volume, future software engineering techniques for a number of emerging application domains have been thoroughly discussed. Emerging software applications also create abundant opportunities for domain-specific reliability engineering. One key industry in which software will have a tremendous presence is the service industry. Service-oriented design has been employed since the 1990s in the telecommunications industry, and it reached the software engineering community as a powerful paradigm for Web service development, in which standardized interfaces and protocols gradually enabled the use of third-party functionality over the Internet, creating seamless vertical integration and enterprise process management for cross-platform, cross-provider, and cross-domain applications. Based on the future trends for Web application development as laid out in [22], software reliability engineering for this emerging technique poses enormous challenges and opportunities. The design of reliable Web services and the assessment of Web service reliability are novel and open research questions. On the one hand, having abundant service providers for a Web service suddenly makes the design diversity approach appealing, as the diversified service design is perceived not as a cost, but as an available resource. On the other hand, this unplanned diversity may not be equipped with the necessary quality, and compatibility among the various service providers can pose major problems. Seamless Web service composition in this emerging application domain is therefore a central issue for reliability engineering. Extensive experiments are required in the area of measurement of Web service reliability. Some investigations have been initiated with limited success [27], but more efforts are needed. Researchers have proposed the publish/subscribe paradigm as a basis for middleware platforms that support software applications composed of highly evolvable and dynamic federations of components. In this approach, components do not interact with each other directly; instead, an additional middleware mediates their communications. Publish/subscribe middleware decouples the communication among components and supports implicit bindings among components. The sender does not know the identity of the receivers of its messages; instead, the middleware identifies them dynamically.
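A minimal sketch of this mediation follows; the names (Broker, subscribe, publish) are hypothetical, invented for illustration rather than taken from any particular middleware's API. Components register interest in a topic, and the broker, not the sender, determines the receivers at publish time.

```python
from collections import defaultdict

class Broker:
    """Minimal publish/subscribe mediator: senders and receivers
    never reference each other directly."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """A component declares interest in a topic (implicit binding)."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher does not know the receivers' identities;
        # the broker identifies them dynamically at send time.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("orders", received.append)  # a component joins dynamically
broker.publish("orders", "order #1 created")
print(received)  # ['order #1 created']
```

Because the bindings are implicit, a component added after deployment starts receiving messages on its topic without any reconfiguration of the existing components, which is precisely the property the next paragraph describes.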
Consequently, new components can dynamically join the federation, become immediately active, and cooperate with the other components without requiring any reconfiguration of the architecture. Interested readers can refer to [21] for future trends in middleware-based software engineering technologies. The open-system approach is another trend in software applications. Closed-world assumptions do not hold in an increasing number of cases, especially in ubiquitous and pervasive computing settings, where the world is intrinsically open. Applications cover a wide range of areas, from dynamic supply-chain management, dynamic enterprise federations, and virtual endeavors on the enterprise level, to automotive applications and home automation on the embedded-systems level. In an open world, the environment changes continuously. Software must adapt and react dynamically to changes, even if they are unanticipated. Moreover, the world is open to new components that context changes could make dynamically available, for example due to mobility. Systems can discover and bind such components dynamically to the application while it is executing. The software must therefore exhibit a self-organization capability. In other words, the traditional solution that software designers adopted (carefully elicit change requests, prioritize them, specify them, design the changes, implement and test them, then redeploy the software) is no longer viable. More flexible and dynamically adjustable reliability engineering paradigms, allowing rapid responses to software evolution, are required. Que8: What is software testing? Give an overview. Ans: Software testing is an important technique for assessing the quality of a software product.
In this chapter, we will explain the following:
o the basics of software testing, a verification and validation practice, throughout the entire software development lifecycle;
o the two basic techniques of software testing, black-box testing and white-box testing;
o six types of testing that involve both black- and white-box techniques;
o strategies for writing fewer test cases and still finding as many faults as possible; and
o using a template for writing repeatable, defined test cases.
Introduction to Testing. Software testing is the process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item [9, 12]. Software testing is an activity that should be done throughout the whole development process [3]. Software testing is one of the verification and validation, or V&V, software practices. Some other V&V practices, such as inspections and pair programming, will be discussed throughout this book. Verification (the first V) is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase [11]. Verification activities include testing and reviews. For example, in the software for the Monopoly game, we can verify that two players cannot own the same house. Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements [11]. Validation (the second V) activities are used at the end of development to evaluate whether the features that have been built into the software satisfy the customer requirements and are traceable to those requirements. For example, we validate that when a player lands on Free Parking, they get all the money that was collected. Boehm [4] has informally defined verification as "Are we building the product right?" and validation as "Are we building the right product?" The Basics of Software Testing. There are two basic classes of software testing, black-box testing and white-box testing. For now, you just need to understand the very basic difference between the two classes, clarified by the definitions below [11]:
o Black-box testing (also called functional testing) is testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
o White-box testing (also called structural testing and glass-box testing) is testing that takes into account the internal mechanism of a system or component.
The classes of testing are denoted by colors to depict the opacity of the testers' view of the code. With black-box testing, the software tester does not (or should not) have access to the source code itself. The code is considered to be a big black box to the tester, who cannot see inside the box. The tester knows only that information can be input into the black box, and the black box will send something back out. Based on knowledge of the requirements, the tester knows what to expect the black box to send out, and tests to make sure the black box sends out what it is supposed to send out. Alternatively, white-box testing focuses on the internal structure of the software code.
The white-box tester (most often the developer of the code) knows what the code looks like and writes test cases by executing methods with certain parameters. In the language of V&V, black-box testing is often used for validation (are we building the right software?) and white-box testing is often used for verification (are we building the software right?). This chapter focuses on black-box testing. All software testing is done with executable code. To do so, it might be necessary to create scaffolding code. Scaffolding is defined as computer programs and data files built to support software development and testing but not intended to be included in the final product [11]. Scaffolding code simulates the functions of components that don't exist yet and allows the program to execute [16]. Scaffolding code involves the creation of stubs and test drivers. Stubs are modules that simulate components that aren't written yet, formally defined as a computer program statement substituting for the body of a software module that is or will be defined elsewhere [11]. For example, you might write a skeleton of a method with just the method signature and a hard-coded but valid return value. Test drivers are defined as software modules used to invoke a module under test and, often, provide test inputs, control and monitor execution, and report test results [11]. Test drivers simulate the calling components (e.g., hard-coded method calls) and perhaps the entire environment under which the component is to be tested [1]. Another concept is mock objects. Mock objects are temporary substitutes for domain code that emulate the real code. For example, if the program is to interface with a database, you might not want to wait for the database to be fully designed and created before you write and test a partial program. You can create a mock object of the database that the program can use temporarily. The interface of the mock object and the real object would be the same.
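A small sketch of a stub plus a test driver, using the database example; DatabaseStub, lookup_price, and total_price are hypothetical names invented for illustration. The stub exposes the same interface the real database object would, but returns a hard-coded, valid value:

```python
class DatabaseStub:
    """Stub: stands in for a database component that isn't written yet.
    Same interface as the eventual real object, hard-coded answers."""
    def lookup_price(self, item_code):
        return 100  # placeholder price: valid, but fixed

def total_price(db, item_codes):
    """Unit under test: depends only on the database interface,
    so it can be written and exercised before the database exists."""
    return sum(db.lookup_price(code) for code in item_codes)

# Test driver: invokes the module under test, provides test inputs,
# and reports the result.
result = total_price(DatabaseStub(), ["A1", "B2", "C3"])
print("total_price ->", result)  # 3 items at the stubbed price of 100
```

Because the stub and the real object share an interface, swapping in the maturing implementation later requires no change to total_price.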
The implementation of the object would mature from a dummy implementation to an actual database. Six Types of Testing. There are several types of testing that should be done on a large software system. Each type of test has a specification that defines the correct behavior the test is examining, so that incorrect behavior (an observed failure) can be identified. The six types, and the origin of the specification (what you look at to develop your tests) involved in each, are now discussed. There are two issues to think about in these types of testing. One is the opacity of the tester's view of the code (is it white- or black-box testing?). The other is scale (is the tester examining a small bit of code or the whole system and its environment?).
1. Unit Testing. Opacity: white-box testing. Specification: low-level design and/or code structure. Unit testing is the testing of individual hardware or software units or groups of related units [11]. Using white-box testing techniques, testers (usually the developers creating the code implementation) verify that the code does what it is intended to do at a very low structural level. For example, the tester will write some test code that will call a method with certain parameters and will ensure that the return value of this method is as expected. Looking at the code itself, the tester might notice that there is a branch (an if-then) and might write a second test case to go down the path not executed by the first test case. When available, the tester will examine the low-level design of the code; otherwise, the tester will examine the structure of the code by looking at the code itself. Unit testing is generally done within a class or a component.
2. Integration Testing. Opacity: black- and white-box testing. Specification: low- and high-level design. Integration testing is testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them [11]. Using both black- and white-box testing techniques, the tester (still usually the software developer) verifies that units work together when they are integrated into a larger code base. Just because the components work individually, that doesn't mean that they all work together when assembled or integrated. For example, data might get lost across an interface, messages might not get passed properly, or interfaces might not be implemented as specified. To plan these integration test cases, testers look at high- and low-level design documents.
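The branch scenario in the unit-testing description can be sketched as follows; apply_discount and its $100 threshold are hypothetical, chosen only to show one test case per path:

```python
def apply_discount(total):
    """Hypothetical unit under test: orders of $100 or more get 10% off."""
    if total >= 100:             # path taken for large orders
        return total - total // 10
    return total                 # path taken for small orders

# Test case 1 exercises the discount path:
assert apply_discount(200) == 180
# Looking at the code, the tester notices the if-then and writes a second
# test case to go down the path the first test case did not execute:
assert apply_discount(50) == 50
print("both paths of the branch are covered")
```

Two test cases suffice here precisely because the tester can see the code's structure, which is what makes this white-box testing.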
3. Functional and System Testing. Opacity: black-box testing. Specification: high-level design, requirements specification. Using black-box testing techniques, testers examine the high-level design and the customer requirements specification to plan the test cases to ensure the code does what it is intended to do. Functional testing involves ensuring that the functionality specified in the requirements specification works. System testing involves putting the new program in many different environments to ensure the program works in typical customer environments with various versions and types of operating systems and/or applications. System testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements [11]. Because system testing is done with a full system implementation and environment, several classes of testing can be done that examine non-functional properties of the system. It is best when functional and system testing is done from an unbiased, independent perspective (e.g., not by the programmer) [3].
o Stress testing: testing conducted to evaluate a system or component at or beyond the limits of its specification or requirements [11]. For example, if the team is developing software to run cash registers, a non-functional requirement might state that the server can handle up to 30 cash registers looking up prices simultaneously. Stress testing might occur in a room of 30 actual cash registers running automated test transactions repeatedly for 12 hours. There also might be a few more cash registers in the test lab to see if the system can exceed its stated requirements.
o Performance testing: testing conducted to evaluate the compliance of a system or component with specified performance requirements [11]. To continue the above example, a performance requirement might state that a price lookup must complete in less than 1 second. Performance testing evaluates whether the system can look up prices in less than 1 second (even if there are 30 cash registers running simultaneously).
o Usability testing: testing conducted to evaluate the extent to which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component. While stress and performance testing can be, and often is, automated, usability testing is done by human-computer interaction specialists who observe humans interacting with the system.
4. Acceptance Testing. Opacity: black-box testing. Specification: requirements specification. After functional and system testing, the product is delivered to a customer, and the customer runs black-box acceptance tests based on their expectations of the functionality. Acceptance testing is formal testing conducted to determine whether or not a system satisfies its acceptance criteria (the criteria the system must satisfy to be accepted by the customer) and to enable the customer to determine whether or not to accept the system [11]. These tests are often pre-specified by the customer and given to the test team to run before attempting to deliver the product. The customer reserves the right to refuse delivery of the software if the acceptance test cases do not pass. However, customers are not trained software testers. Customers generally do not specify a complete set of acceptance test cases; their test cases are no substitute for creating your own set of functional/system test cases. The customer is probably very good at specifying at most one good test case for each requirement. As you will learn below, many more tests are needed. Whenever possible, we should run the customer acceptance test cases ourselves so that we can increase our confidence that they will work at the customer location.
5. Regression Testing. Opacity: black- and white-box testing. Specification: any changed documentation, high-level design. Throughout all testing cycles, regression test cases are run.
Regression testing is selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements [11]. Regression tests are a subset of the original set of test cases. These test cases are re-run often, after any significant changes (bug fixes or enhancements) are made to the code. The purpose of running a regression test case is to make a spot check: to examine whether the new code works properly and has not damaged any previously working functionality by propagating unintended side effects. Most often, it is impractical to re-run all the test cases when changes are made. Since regression tests are run throughout the development cycle, there can be white-box regression tests at the unit and integration levels and black-box tests at the integration, functional, system, and acceptance test levels. The following guidelines should be used when choosing a set of regression tests (also referred to as the regression test suite):
o Choose a representative sample of tests that exercise all the existing software functions;
o Choose tests that focus on the software components/functions that have been changed; and
o Choose additional test cases that focus on the software functions that are most likely to be affected by the change.
A subset of the regression test cases can be set aside as smoke tests. A smoke test is a group of test cases that establish that the system is stable and all major functionality is present and works under normal conditions [6]. Smoke tests are often automated, and the selection of the test cases is broad in scope. The smoke tests might be run before deciding to proceed with further testing (why dedicate resources to testing if the system is very unstable?). The purpose of smoke tests is to demonstrate stability, not to find bugs in the system.
6. Beta Testing. Opacity: black-box testing. Specification: none. When an advanced partial or full version of a software package is available, the development organization can offer it free to one or more (and sometimes thousands of) potential users or beta testers. These users install the software and use it as they wish, with the understanding that they will report any errors revealed during usage back to the development organization. These users are usually chosen because they are experienced users of prior versions or of competitive products. The advantages of running beta tests are as follows [8]:
o Identification of unexpected errors, because the beta testers use the software in unexpected ways.
o A wider population search for errors in a variety of environments (different operating systems with a variety of service releases and with a multitude of other applications running).
o Low costs, because the beta testers generally get free software but are not otherwise compensated.
The disadvantages of beta testing are as follows [8]:
o Lack of systematic testing, because each user uses the product in any manner they choose.
o Low-quality error reports, because the users may not actually report errors, or may report errors without enough detail.
o Much effort is necessary to examine error reports, particularly when there are many beta testers.
Table: Levels of Software Testing
Testing Type | Specification | General Scope | Opacity | Who generally does it?
Unit | Low-level design; actual code structure | Small unit of code, no larger than a class | White box | Programmer who wrote the code
Integration | Low-level design; high-level design | Multiple classes | White box; black box | Programmers who wrote the code
Functional | High-level design | Whole product | Black box | Independent tester
System | Requirements analysis | Whole product in representative environments | Black box | Independent tester
Acceptance | Requirements analysis | Whole product in customer's environment | Black box | Customer
Beta | Ad hoc | Whole product in customer's environment | Black box | Customer
Regression | Changed documentation; high-level design | Any of the above | Black box; white box | Programmer(s) or independent testers
It is best to find a fault as early in the development process as possible. When a test case fails, you have seen a symptom of the failure [13] and still need to find the fault that caused the failure. The further you go into the development process, the harder it is to track down the cause of a failure. If you unit test often, a new failure is likely to be in the code you just wrote/tested and should be reasonably easy to find. If you wait until system or acceptance testing, a failure could be anywhere in the system, and you will have to be an astute detective to find the fault. Black-Box Testing: Ideally, we'd like to test every possible thing that can be done with our program. But, as we said, writing and executing test cases is expensive. We want to make sure that we definitely write test cases for the kinds of things that the customer will do most often, or even fairly often. Our objective is to find as many defects as possible in as few test cases as possible. To accomplish this objective, we use some strategies that will be discussed in this subsection. We want to avoid writing redundant test cases that won't tell us anything new (because they have similar conditions to other test cases we already wrote). Each test case should probe a different mode of failure. We also want to design the simplest test cases that could possibly reveal each mode of failure; test cases themselves can be error-prone if we don't keep this in mind. Equivalence Partitioning. To keep down our testing costs, we don't want to write several test cases that test the same aspect of our program. "A good test case uncovers a different class of errors (e.g., incorrect processing of all character data) than has been uncovered by prior test cases." [17] Equivalence partitioning is a strategy that can be used to reduce the number of test cases that need to be developed.
Equivalence partitioning divides the input domain of a program into classes. The data in each equivalence class should be treated the same way by the module under test and should produce the same answer. Test cases should be designed so the inputs lie within these equivalence classes. [2]

For example, for tests of "Go to Jail" the most important thing is whether the player has enough money to pay the $50 fine. Therefore, the input domain can be partitioned into two equivalence classes, as shown in Figure 4.

Figure 4: Equivalence classes for player money: less than $50, and $50 or more.

Once you have identified these partitions, you choose test cases from each partition. To start, choose a typical value somewhere in the middle of (or well into) each of these two ranges. See Table 6 for test cases written to test the equivalence classes of money. Note, however, that Test Case 6 (Player 1 has $1200) and Test Case 7 (Player 1 has $100) are both in the same equivalence class; therefore, Test Case 7 is unlikely to discover any defect not found by Test Case 6.

Boundary Value Analysis

Boris Beizer, a well-known author of testing books, advises: "Bugs lurk in corners and congregate at boundaries." [1] Programmers often make mistakes on the boundaries of the equivalence classes of the input domain. As a result, we need to focus testing on these boundaries. This type of testing is called Boundary Value Analysis (BVA), and it guides you to create test cases at the edges of the equivalence classes. A boundary value is defined as "a data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component." [11]

In our example above, the boundary between the classes is at $50, as shown in Figure 5. We should create test cases for Player 1 having $49, $50, and $51. These test cases will help to find common off-by-one errors, caused by mistakes such as using >= when you mean to use >.

Figure 5: Boundary value analysis. Test cases should be created for the boundaries between the equivalence classes "less than $50" and "$50 or more".

When creating BVA test cases, consider the following [17]:

- If input conditions specify a range from a to b (such as a = 100 to b = 300), create test cases:
  immediately below a (99), at a (100), immediately above a (101),
  immediately below b (299), at b (300), and immediately above b (301).
- If input conditions specify a number of values that are allowed, test those limits. For example, suppose the input conditions specify that only one train is allowed to start in each direction on each station. In testing, try to add a second train to the same station and direction. If (somehow) three trains could start on one station/direction, try to add two trains (pass), three trains (pass), and four trains (fail).

Decision Table Testing

Decision tables are used to record complex business rules that must be implemented in the program, and therefore tested. A sample decision table is found in Table 7. In the table, the conditions represent possible input conditions. The actions are the events that should trigger, depending upon the makeup of the input conditions. Each column in the table is a unique combination of input conditions (called a rule) that results in triggering the action(s) associated with the rule. Each rule (or column) should become a test case.

The rules recorded in Table 7 are: if a player (A) lands on property owned by another player (B), A must pay rent to B; and if A does not have enough money to pay B, A is out of the game.
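The two rent rules above can be encoded and exercised directly, one test case per rule. The sketch below is illustrative only; `resolve_rent` is a hypothetical helper, not part of any real Monopoly codebase.

```python
# Hypothetical helper encoding the rent rules for the decision table.
def resolve_rent(lands_on_owned_property, can_pay_rent):
    """Return True if player A stays in the game after the landing resolves."""
    if not lands_on_owned_property:
        return True       # no rent is due, so A stays in the game
    if can_pay_rent:
        return True       # A pays rent to B and stays in the game
    return False          # A cannot pay the rent, so A is out of the game

# One test case per rule (column) of the decision table.
# Fields: rule name, lands on B's property, has money for rent, expected "stays".
rules = [
    ("Rule 1", True,  True,  True),   # lands, can pay    -> stays
    ("Rule 2", True,  False, False),  # lands, cannot pay -> out of the game
    ("Rule 3", False, None,  True),   # does not land ("--" condition) -> stays
]

for name, lands, pays, expected_stays in rules:
    assert resolve_rent(lands, pays) == expected_stays, name
print("all decision-table rules pass")
```

Note that the "--" (don't care) entry in Rule 3 becomes a condition whose value is irrelevant to the outcome, which is exactly what the code path reflects.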
Table 7: Decision table

Conditions                        Rule 1   Rule 2   Rule 3
A lands on B's property           Yes      Yes      No
A has enough money to pay rent    Yes      No       --
Actions
A stays in game                   Yes      No       Yes

Que9: Differentiate between Verification and Validation?
Ans: Verification: "Are we building the product right?" Through verification, we make sure the product behaves the way we want it to. For example, on the left in Figure 1, there was a problem because the specification said that players should collect $200 if they land on or pass Go. Apparently a programmer implemented this requirement as if the player had to pass Go to collect. A test case in which the player landed on Go revealed this error.

Validation: "Are we building the right product?" Through validation, we check to make sure that somewhere in the process a mistake hasn't been made such that the product built is not what the customer asked for; validation always involves comparison against requirements. For example, on the right in Figure 1, the customer specified requirements for the game of Monopoly, but the programmer delivered the game of Life. Maybe the programmer thought he or she knew better than the customer, believed the game of Life was more fun than Monopoly, and wanted to delight the customer with something more fun than the specifications stated. This example may seem exaggerated, but as programmers we can miss the mark by that much if we don't listen well enough, don't pay attention to details, or second-guess what the customer says because we think we know better how to solve the customer's problems.

Que10: What is Software Engineering Metrics?
Ans: Construct validity is about the question: how do we know that we are measuring the attribute that we think we are measuring? This is discussed in formal, theoretical ways in the computing literature (in terms of the representational theory of measurement), but rarely in simpler ways that foster application by practitioners. Construct validity starts with a thorough analysis of the construct, the attribute we are attempting to measure. Under IEEE Standard 1061, direct measures need not be validated. "Direct" measurement of an attribute involves a metric that depends only on the value of that attribute, but few or no software engineering attributes or tasks are so simple that measures of them can be direct. Thus, all metrics should be validated. A framework for evaluating proposed metrics can be applied, for example, to common uses of bug counts: bug counts capture only a small part of the meaning of the attributes they are being used to measure.
Multidimensional analyses of attributes appear promising as a means of capturing the quality of the attribute in question; such analyses break an attribute or task of interest into sub-attributes that can then be studied as a group.