A Case Study in Test Management



Tauhida Parveen
Dept. of Computer Sciences
Florida Institute of Technology
tparveen@fit.edu

Scott Tilley
Dept. of Computer Sciences
Florida Institute of Technology
stilley@cs.fit.edu

George Gonzalez
Software Quality Management
Sabre Holdings Inc.
george.gonzalez@sabre.com

ABSTRACT
Testing is an essential but often under-utilized area of software engineering. A variety of software testing techniques have been developed to effectively identify bugs in source code, yet these techniques are not always fully employed in practice. There are numerous reasons for this, including the difficulty of mastering the complexity of managing all of the test cases for large-scale projects. Test case management involves organizing testing artifacts (e.g., requirements traceability data, test cases, and expected results) in a systematic manner. To be successful, test case management requires a high degree of discipline to accommodate the large volume of artifacts under consideration. This paper presents the results of a case study in centralizing test artifacts in an industrial setting to aid better test management. Several of the challenges in adopting this approach are discussed. In response to these challenges, recommendations on how to better leverage test case management are offered.

Keywords
Software testing, test management

1. INTRODUCTION
Software testing is a process for evaluating the correctness, completeness, and quality of developed computer software. It is an integral part of software engineering. It encompasses demonstrating the validity of the software at each stage of the development life cycle and the conformance of the final system to the customer's requirements [1]. However, in application development projects, testing is often not given enough resources, time, and priority until initial development is completed. With competitive pressure and the increasing cost of downtime, some organizations have started to introduce testing at earlier stages of software development, while others are striving to find effective testing strategies. Few organizations have established a basis for measuring the effectiveness of their testing. Without testing standards and a proper test management strategy, the effectiveness of testing cannot be measured or improved.

This paper focuses on the test management part of the testing process. Test management is a method of organizing test assets and artifacts, such as test requirements, test cases, and test results, to enable accessibility and reuse. This paper compares and contrasts the theory of test case management against what commonly happens in industry with respect to manual test cases. It describes a case study of the authors' experience in applying academic best practices for software test case management in industry, and the outcome of applying those practices. The paper also outlines an approach to streamlining test case management and enhancing productivity through it.
The paper is organized as follows: Section 2 gives an overview of testing and the common techniques used in the testing process. Section 3 describes a case study of the authors' experience in implementing a test management strategy. Section 4 discusses the impact of establishing test management, the challenges faced, and some recommendations to aid an efficient test management process. Section 5 summarizes the paper with some directions for future research in this area.

2. SOFTWARE TESTING
Research shows that software quality consumes more than fifty percent of the cost of software development, and this number is even higher for safety-critical software [2]. One way to support quality assurance is efficient software testing. Testing refers to the execution of software with an input (known as the test case) and observation of the result. The output data helps in understanding the behavior of the system and provides evidence of the software's conformance to its specification. The goal of testing is to efficiently identify any high-impact deviations from expected results, so that these deviations can be corrected or prevented before the software is released. The value of testing also encompasses goals such as reducing the number of defects and increasing development productivity through continual feedback.

2.1 Testing techniques
Software testing is a collection of techniques used in the process of verifying that a piece of software is free from technical errors. Three well-known testing techniques that are commonly used in industry are black box, white box, and gray box testing [6].

Black box testing treats the system as if it were a black box. This technique requires no knowledge of the internal code of the system during the testing process and pays little regard to the internal logic of the system. Although black box testing is designed to identify errors, the technique is also used to demonstrate that the software meets functional requirements.

White box testing involves a closer examination of the internal logic of the system. Logical paths through the software are tested by providing test cases that exercise particular sets of conditions and/or loops. The status of the system can be checked at various points to establish whether the expected state matches the actual state.

Gray box testing refers to a technique that examines, and can manipulate, the behavior of back-end components to view the state of the application during test case execution. An example of gray box testing would be using database queries to seed data and check results, as in the sketch below.
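As a concrete illustration of this technique, the following minimal Python sketch seeds a database directly via SQL and then inspects back-end state to check the application's result. It is illustrative only: the accounts table, its schema, and the create_account function are hypothetical, not part of the system described in this case study.

```python
import sqlite3

# Hypothetical application function under test: assumed to insert a row
# into the accounts table for each new customer.
def create_account(conn, name):
    conn.execute("INSERT INTO accounts (name, balance) VALUES (?, 0)", (name,))
    conn.commit()

def test_create_account_gray_box():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")

    # Gray-box step 1: seed known data directly via SQL.
    conn.execute("INSERT INTO accounts (name, balance) VALUES ('seeded', 100)")
    conn.commit()

    # Exercise the application through its normal interface.
    create_account(conn, "alice")

    # Gray-box step 2: inspect back-end state to check the result.
    rows = conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall()
    assert rows == [("alice", 0), ("seeded", 100)]

test_create_account_gray_box()
```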
The testing process is not limited to these techniques. It includes testing the software at different levels, such as unit, integration, system, acceptance, regression, load, performance, usability, and reliability testing. It also involves writing well-described test cases, test plans, test results, and bug reports.

2.2 Test automation
Testing can be manual or automated. Manual testing is performed by testers who interact with the application step by step and determine whether each step was accomplished successfully. Automated testing is assisted by tools that allow the software to be tested repeatedly and quickly, without the need for a tester to be present for every input, analysis, or evaluation; a minimal sketch of an automated test appears below. It is not always possible to fully automate testing, so manual testing remains an important part of almost any testing effort. Test automation is also not an easy task, and it requires a greater upfront investment than manual testing: it is expensive due to the specialized tools needed to support automation as well as the specialized programming skills needed to develop test suites.
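To make the contrast with manual testing concrete, the following minimal Python sketch shows a fully automated test using the standard unittest framework. The apply_discount function is a hypothetical unit under test, not part of the system described in this case study. Once written, such a test can be rerun on every build without tester interaction.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.00, 10), 180.00)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()
```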

2.3 Testing in industry
The general theory of software testing is not directly usable in industry. Commercial software testing is always under time pressure, which makes it hard to follow theoretical best practices. Testing in practice is very dynamic, requiring the reconfiguration of tests from existing test cases or the modification of the basic assumptions behind them. In theory, testing should be introduced at every step of the software development cycle, but in practice it is often introduced at the end of the development process. Limited time, budget, and resources are put into the testing phase. There is often only one tester for a whole software development team, executing mostly manual test cases and documenting the results. As a result, many critical bugs that should be caught before deployment of the product never are.

Since the overall testing process in software development projects takes low priority, a sub-area of the testing process, test management, is often not even considered worth attention. Testing software applications requires hundreds to thousands of unique test cases, and the ability to efficiently manage the results produced from running them. The test cases are collectively known as the test suite and can be manual or automated. Along with the test cases and test results, other artifacts such as test plans, bug reports, and defect reports are also produced during the testing process. Often these artifacts are not maintained, due to the lack of a proper test management strategy, and they eventually become outdated and unusable. Test management is a method of organizing test assets and artifacts, such as test requirements, test plans, test cases, test scripts, and test results, to enable easy accessibility and reusability. It is one of the many ways the testing process can be made more efficient. There is a need to introduce test management into the testing process, but the amount of time and effort required to implement such a strategy often keeps organizations from taking that step.

3. A CASE STUDY
This case study describes the challenges that were faced when introducing test management for the first time in an industrial setting. It is based on the authors' experience implementing a centralized test management strategy, and it emphasizes the need for a test management strategy with the right resources and effective use of centralized control.

3.1 The project
The objective of the project was to gather all testing artifacts available across different testing teams into one central repository, in order to support a more disciplined approach to test management. The company followed an agile software development methodology [12]. Software development at the company encompassed developing new products, upgrading existing products, and customizing software. Development was distributed among several offices around the world. Since development was agile, testers were involved in projects from the beginning and at every stage of the process. Every time developers changed the product, the resulting build was sent to the testing team, which executed regression tests to make sure that all the test cases the product passed before the changes still passed afterwards. The regression tests were typically manual test cases kept in Microsoft Excel files. These Excel files were saved on each tester's workstation and, when needed by others, were delivered via email. Every time a tester was assigned to a product, the tester would change the Excel files to suit his own preferences. Progress in the testing process was reported by email from the testers, stating a percentage count of test coverage. It was becoming very hard for management, as well as for offshore testers, to gain access to the test cases. Predictably, a large percentage of test cases were lost over time due to hardware failures, employee turnover, and simple changes to test case formats. Other artifacts, such as test plans and automated test scripts, were maintained in the same manner.

After managing the testing artifacts in this ad-hoc manner, it was decided that a global test management strategy needed to be put in place, and a project to centralize the testing artifacts was started. The project had two major goals: (1) to manage test cases globally, so that testers at various locations (in this case Poland, India, and the U.S.) could use the same test cases to test a product; and (2) to minimize the loss of testing artifacts that occurs over time due to software upgrades, bug fixes, and the constant evolution of the software.

3.2 Test case centralization phases
The centralization of testing artifacts involved two phases: the discovery phase and the migration phase. The discovery phase was challenging, involving finding, collecting, and assessing the quality of existing test cases. Since the test management strategy of the testing teams was generally ad-hoc, test cases were written but not used frequently. This was mostly because the test cases were not accessible, which made them difficult to find for execution and modification. Because they were difficult to modify, they generally became outdated over time, even though they often encoded valuable business logic needed to evaluate the functionality of the software. Since most of the test cases resided on the local machines of the testers assigned to each product group, predictably, some test cases were lost and had to be recreated.

In the migration phase, massive numbers of test cases stored in Excel and Word files were ported to a centralized test management tool. Mercury TestDirector (now part of Mercury Quality Center) [4] was chosen to centralize the test artifacts and manage them globally. The migration required converting the Excel files into a format accepted by TestDirector; a TestDirector add-in was used to import the Excel files. Where needed, the test cases were then modified to fit the feature each test case was written for. TestDirector supports more than test case management, but the company focused on the tool for test case management alone because of significant existing investments in other tools for specification, release management, and error reporting.

3.3 The TestDirector tool
TestDirector (TD) is a web-based application published by Mercury Interactive that can manage the essential aspects of test management, requirements management, and defect management. It is a global test management tool. Because it is web-enabled, it supports collaboration among distributed teams, whether they are in different parts of the world or at different places within the same organization. With TestDirector, testers can design test plans, develop test cases, and run the test cases locally or remotely on any available machine on the network. Automated test scripts generated by other testing tools (e.g., Mercury's QTP and LoadRunner) can be launched from TestDirector. Testers can run both manual and automated tests, report execution results, and enter defects. Business analysts can define application requirements and testing objectives. Project managers can generate reports on project status and, above all, TestDirector allows teams to access test cases anytime, from any location, via a web browser.

The choice of TestDirector was driven by the requirement for more widespread sharing of test cases between teams. At the time, the company was undergoing a rapid shift to distributed software development, with growing offices around the world. TestDirector focused on test case management and was intended to preserve the business logic associated with requirements specifications (for those teams that developed from fixed requirements) and test cases.

4. DISCUSSION
Based on experience from the case study, an initial impact assessment was performed to determine the results of using the centralized test case management approach. A follow-on assessment was performed one year later, to ascertain the longer-term impact of the approach as the project evolved. Based on these assessments, a number of challenges were identified and preliminary recommendations suggested.
4.1 Impact assessment
4.1.1 Initial impact
Initially, projects in active development were not directly impacted by the demand to migrate test cases to TestDirector, as centrally funded resources (summer interns) were loaned to each product team to migrate legacy regression tests. Later, there were impacts, as testers had to spend time learning the tool and discovering ways to adapt it for effective use on their projects. As project teams were told to use TestDirector for storing all test cases and to execute the test cases through the tool, differences in adoption rates were encountered and limitations of the tool were found.

It turned out that migrating the test cases from Excel files to TestDirector was not sufficient to encourage its usage; the tool's limitations discouraged testers from adopting it. For example, the tool restricted the number of columns that could be used to describe test cases. For some products, the test cases were shipped to customers along with the products, and the test cases exported from TestDirector were not in the format those customers preferred: TestDirector could only export to Word and text file formats, whereas the customers preferred Excel spreadsheets. This discouraged testers from adopting TestDirector, since they had to find an alternative solution to meet customer demand.

TestDirector also had limitations with respect to Software Configuration Management (SCM): it did not match test cases to the version and branch management of the source code. For example, if version 1.1 of a product needed to be released, the test cases for that version could be labeled 1.1 to match the SCM label for the release. If, at some point, a branch release (i.e., a customized variation of the actual release) needed to be made, a version 1.1a could be created, and the test cases could be updated and then labeled 1.1a. Any fixes or enhancements, and the test cases used in 1.1a, would be preserved for future releases by merging into future branches of this version, for example, the release 2.0 source code branch. This type of functionality is not implemented in TestDirector, which limited the tool's usefulness as a repository for automated test cases; a rough sketch of the desired labeling scheme appears at the end of this subsection.

Although the immediate impact of the conversion effort was not 100% successful, it was beneficial in most respects. The immediate positive result was the drop in the rate of test case loss. Moreover, test cases and test plans became more transparent, enabling each product team to see how other products were tested. A greater positive impact of the conversion was seen after a year of using the tool.
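The branch-labeling scheme the teams wanted can be sketched as follows. This is a hypothetical Python model of the desired behavior, not a description of TestDirector or any other actual tool: test suites carry the same labels as source code branches, and a branch's test cases can be merged forward into a future release.

```python
# Hypothetical sketch of SCM-style labeling for test suites: each release
# or branch label (e.g., "1.1", "1.1a", "2.0") maps to its own set of
# test case identifiers, mirroring the source code branch structure.
suites = {
    "1.1": {"TC-001", "TC-002", "TC-003"},
}

def branch(suites, base, label):
    """Create a branch release (e.g., 1.1a) starting from the base suite."""
    suites[label] = set(suites[base])

def merge_forward(suites, source, target):
    """Carry a branch's fix/enhancement tests into a future release."""
    suites.setdefault(target, set()).update(suites[source])

branch(suites, "1.1", "1.1a")
suites["1.1a"].add("TC-004")          # test for a branch-specific fix
merge_forward(suites, "1.1a", "2.0")  # preserve it for the next release
print(sorted(suites["2.0"]))          # ['TC-001', 'TC-002', 'TC-003', 'TC-004']
```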

4.1.2 One year later
An assessment of the usage of TestDirector was performed one year after the test cases were ported from the Excel spreadsheets to TestDirector. It was found that some teams had adopted the tool, but others still maintained test cases outside of TestDirector. There were three reasons why these teams did not adopt the tool: (1) some could not use it because of its limitations; (2) some would not use it because they were insufficiently bought in to the tool; and (3) some found successful alternative tools that they were familiar with and that did not have the limitations TestDirector appeared, from their perspective, to have.

Some teams could not adopt the tool because of its technical limitations. After a year of usage, it was found that teams with large quantities of automated test cases did not want to store them in TestDirector, due to its poor performance in executing automated test cases: the test scripts and object repositories were very large, and TestDirector took an unacceptable amount of time to load them before execution. Such teams abandoned the use of TestDirector as a repository for automated test cases, favoring storage in the same version control system used for the project's source code. Most of the test cases for such teams were automated, and very few manual test cases existed, so the utility of TestDirector was greatly diminished for them. Even after one year of usage, there were teams that could not overcome TestDirector's limitations in representing test case scenarios. For example, some teams had test cases written with more columns than TestDirector allows, and the customer had specified the test case format, so it could not be changed. Other teams also had formats deliverable to customers that could not be changed quickly to match TestDirector's native output.

Another important issue holding back widespread adoption of the tool was its price. The software was rather expensive, and yet specification and documentation information needed to be widely shared within the company. If every employee who needed access to specifications required a license to use TestDirector, the tool would become prohibitively expensive.

Nevertheless, the usage of the tool has grown steadily since the completion of the project. Test cases are now shared with other teams, and details of test cases are now visible to upper management as well as to team members on other projects. This enables reuse of test cases and test management strategies.

4.2 Challenges
Some of the challenges faced during the porting task are described below.

4.2.1 Testing terminology
The importance of having standard testing terminology was not realized until the start of this project. As mentioned above, TestDirector had a specific format for representing test cases. For manual test cases, a simple format was needed in which each test case would have the steps of the test execution, the expected results, and any necessary comments. It was not obvious until the start of this project how much confusion and delay testing terminology could cause. Simple terms such as test case, test step, and test script had different meanings to different testers. There were no standards for testing terminology in place at the company, or if there were, no one followed them. This was exposed during the implementation of this project. Some testers referred to the default 1, 2, 3 test execution steps as test cases and to the Excel files as test scripts, when in reality the term test script refers to an automated test case.
4.2.2 File format
In order to import the test cases from Excel files into TestDirector, the Excel files needed to be in a particular format. Converting the Excel files to fit the format TestDirector would accept was a challenge for this project and took the largest amount of time. Just as testing terminology had no company-wide standard, neither did the structure of the Excel files: every tester had his own file format. TestDirector was set up to have four columns of information: the first column indicates, by number, which step is being executed in the testing process; the second column gives the steps required to perform the test; the third column gives the expected result of the test; and the fourth column is used for comments. The Excel files in which the testers maintained their test cases had more than four columns. Table 1 shows the format of the file that TestDirector would accept, and Table 2 shows the actual Excel file format the testers maintained.

In order to port the test cases to TestDirector, the Excel files needed to be set up so that TestDirector could map the columns in the Excel file to its own columns. It was very difficult to assess which information was important and which could be thrown away. Some of the test cases had been written for a previous version of the product, by testers who were no longer with the company, and the test cases had never been updated. Compressing the many columns of information into only four columns was a time-consuming and difficult task. Porting the test cases into TestDirector required many hours of manual work, interactions and brainstorming with testers and business analysts, the use of Word and Excel macros, parsing routines written in Visual Basic for Applications (VBA), and the rewriting of many of the test cases. The process took more effort than expected. There was no consistency in file format between testers, so it was not possible to devise one solution that would convert all the Excel files to the format accepted by TestDirector and import the test cases. Multiple solutions were needed, and some were specific enough to apply to only a few Excel files. A rough sketch of one such conversion appears below.
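The conversions themselves were done with Word/Excel macros and VBA parsing routines, as described above. Purely as an illustration, the following Python sketch shows the shape of one such conversion. It assumes a tester's spreadsheet has been exported to CSV; the source column names and the mapping are hypothetical, since in practice each file needed its own mapping.

```python
import csv

# Hypothetical mapping from one tester's column headings to the four
# columns TestDirector accepted. Every spreadsheet needed its own mapping;
# no single mapping worked for all files.
COLUMN_MAP = {
    "description": ["Action", "Procedure", "Test Step"],
    "expected": ["Expected Outcome", "Expected Result"],
    "comments": ["Notes", "Remarks", "Comments"],
}

def pick(row, candidates):
    """Return the first non-empty value among the candidate column names."""
    for name in candidates:
        if row.get(name, "").strip():
            return row[name].strip()
    return ""

def convert(src_path, dst_path):
    """Collapse a many-column test case file into the four-column format."""
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["Step No.", "Description", "Expected Result", "Comments"])
        for step_no, row in enumerate(csv.DictReader(src), start=1):
            writer.writerow([
                step_no,
                pick(row, COLUMN_MAP["description"]),
                pick(row, COLUMN_MAP["expected"]),
                pick(row, COLUMN_MAP["comments"]),
            ])

# Example usage (paths are placeholders):
# convert("legacy_test_cases.csv", "testdirector_import.csv")
```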

4.2.3 Embracing the change
With an extended amount of effort and time put into this project, thousands of test cases from various products across the company were ported into TestDirector successfully. Any tester could now open a test case and read the requirements for that particular feature to facilitate their testing activities. But the successful porting of the test cases into TestDirector did not end the test management process: it only established a global repository from which test cases could be maintained. The ongoing maintenance and management depended mostly on the testers writing and updating the test cases, and on their adoption of this change. Just as challenges were faced in converting test cases stored in Excel files to TestDirector, similar challenges were faced when testers were told to change their work habits. The testers were already familiar with the news of the porting effort. The message was sent across the testing team that all the test cases had been successfully ported and that testers could no longer use Excel or Word files, or any other utility of their own convenience, to store test cases; nor could test cases be stored on local machines anymore. Several training sessions and presentations were conducted to familiarize employees with TestDirector. The change was not accepted easily by the testers. Among the challenges faced was the employees' lack of trust in the managerial decision. Questions such as "How long will TestDirector be around before we move to some other tool?" were raised. Investing time and effort in adopting a new tool, only to find that it may not be around for long, was a real concern.

4.3 Recommendations
The key to effective test management is communication among the different parties involved in the process. Before reporting mechanisms can be put into place, the testing team needs to set the ground rules: defining and agreeing upon general testing terminology, what constitutes the severity of a bug, and what information must be included in test cases as well as defect reports. This terminology may or may not follow a known standard, but it should be an agreed-upon standard that is accepted and followed by all testers across the board.

The key to a good test case is supplying testers with clear and concrete information so they can follow it and test the product. A test case should include the list of steps needed to test the product, the expected result, and any other information the testers need. This test case format should also be agreed upon and standardized for all testers. It is critical that everyone follows the standard so there is no conflict when a new person joins the team. The creation of test cases should also be prioritized according to highest value (domain testing) and highest impact (extreme negative impact from likely accidental inputs). This may require that the tester receive user-level training on the application, or even in the business area. Understanding high-impact actions will likely require a good understanding of the product architecture. This may not lead to portable test cases, but it will likely lead to reusable testing patterns.

Test cases should also be divided into a commonly used subset (the sanity test) and deep functionality components. Manual testing typically requires that a large number of test cases be run against new functionality. It will likely not be feasible to run all of those manual test cases on successive regression passes, so a basic subset of the functionality is typically targeted for inclusion in a sanity test [3]. A sketch of a standardized test case record and sanity subset appears at the end of this section.

Adoption of the test case management tool should not be left to each employee. Employees should be sent to training classes, evangelized, and supported, and their progress should be measured. A tool should be chosen with an eye to the future. The goal for a tool might be the preservation of domain-specific business knowledge, but the choice (or the available choices) must allow for sufficient information sharing, test automation, and release branching. When purchasing a tool, all aspects of its functionality and its limitations should be assessed thoroughly. Limitations of a tool discourage employees from adopting it, as happened in this project; sufficient research before investing money in a tool can mitigate this risk.
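As one possible rendering of these recommendations, the following Python sketch defines a standardized test case record and selects the sanity subset for a regression pass. The field names, priorities, and example cases are hypothetical, not the company's actual format.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A standardized manual test case record, per the recommendations above."""
    case_id: str
    steps: list            # ordered list of actions for the tester to perform
    expected_result: str
    comments: str = ""
    priority: int = 3      # 1 = highest value / highest impact
    sanity: bool = False   # part of the commonly used subset?

suite = [
    TestCase("TC-001", ["Log in", "Open booking form"], "Form loads", sanity=True, priority=1),
    TestCase("TC-002", ["Enter an invalid date"], "Validation error shown", priority=2),
    TestCase("TC-003", ["Run end-of-day report"], "Report totals match ledger"),
]

# Sanity pass: the basic subset run against every new build.
sanity_tests = [tc for tc in suite if tc.sanity]

# Deep functionality pass: scheduled less frequently, highest value first.
deep_tests = sorted((tc for tc in suite if not tc.sanity), key=lambda tc: tc.priority)

print([tc.case_id for tc in sanity_tests])  # ['TC-001']
print([tc.case_id for tc in deep_tests])    # ['TC-002', 'TC-003']
```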
5. SUMMARY
There is little doubt that testing can be an effective means of improving the quality of software applications. The problem is that testing rarely receives the attention it deserves. Test case management is an excellent example of an activity that is extremely difficult to master, particularly when manual intervention is needed at all stages of the process. This paper presented the results of a case study in test case management in an industrial setting. The approach was intended to address some of the shortcomings of real-world test case management, such as variations in terminology, file format interchange problems, and overall centralization through automation. The approach has been in use for over a year and, although there are still issues to be resolved (e.g., managing change and getting buy-in from all the engineers for the new tools introduced into the testing process), the result can be seen as moderately successful.

The experience of executing the case study has opened up several avenues for future work. For example, the recommendations outlined in the paper should be empirically validated in further studies. There is an ever-present need for increased automation of test case management activities, since automation is one way of codifying best practices in tools that can better aid less experienced software testers. Finally, there remain a number of opportunities for academics to learn from industrial practice, and perhaps to incorporate lessons learned in case studies such as the one presented here into the curricula.

REFERENCES
[1] Perry, W. Effective Methods for Software Testing (2nd Ed.). Wiley, 2000.
[2] Harrold, M. J. "Testing: A Roadmap." In Proceedings of the International Conference on Software Engineering (Limerick, Ireland, 2000).
[3] Kaner, C.; Bach, J.; and Pettichord, B. Lessons Learned in Software Testing: A Context-Driven Approach. New York: Wiley & Sons, 2002.
[4] Mercury, http://www.mercury.com/. Last accessed November 22, 2006.
[5] Miller, K. and Voas, J. "Software Test Cases: Is One Ever Enough?" IT Pro, January-February 2006, pp. 44-48.
[6] Kaner, C. Testing Computer Software (2nd Ed.). John Wiley & Sons, 1999.
[7] Myers, G. J. The Art of Software Testing. New York, NY: Wiley-Interscience, 1979.
[8] Whittaker, J. A. How to Break Software: A Practical Guide to Testing. Pearson Addison-Wesley, 2002.
[9] Kaner, C. "Architectures of Test Automation." Software Testing, Analysis & Review Conference (STAR) West, San Jose, CA, October 2000.
[10] Korel, B. "Automated Test Data Generation." IEEE Transactions on Software Engineering, vol. SE-16, no. 8, August 1990, pp. 870-879.
[11] Kaner, C. "Improving the Maintainability of Automated Test Suites." Software QA, 4(4), 1997.

[12] Beck, K. Test-Driven Development: By Example. Addison-Wesley Professional, 2002.

Table 1. Format of the Excel file that TestDirector would accept
Step No. | Description | Expected Result | Comments

Table 2. Format of the actual Excel file that the testers followed