Automated Software Testing by: Eli Janssen




1. What is automated testing?

Automated testing is, much as the name implies, getting the computer to do the repetitive work of ensuring that inputs yield expected outputs. This is often not as easy as it sounds. Automated testing can be broken down into two main pieces: driving the program and validating the results [1].

1.1 Driving your program

This concerns how the testing program activates the program under test. If you want to test what happens when a certain button is pushed, you need some way of pushing that button. There are three common approaches:

- Directly call the internal API that handles the button-click event.
- Override the system and programmatically move the mouse to a set of screen coordinates, then send a click event.
- Use test-specific code that the test suite inserts into the program.

These three mostly apply to GUI testing; command-line applications are often easier to test. If the program is well designed, a set of inputs can simply be piped into it.

1.1.1 Direct API call

Calling an API from your code is easy, but it does not really test the GUI elements of your program. Users generally do not interact with the program through its APIs; they use the GUI, so the GUI should be tested too. This type of testing is often used for unit testing early in product development.

1.1.2 Mouse macros / system override

Simulating the mouse with event-recording macros is seldom reliable. Windows might not be in exactly the same position every time the program starts or a new window opens. Whether a window is maximized, minimized, moved, or resized, along with the screen resolution, can all affect how the GUI behaves, and a recorded macro tests none of it. There are tricks for working around these issues: the test suite can always be run at the same resolution with no other applications running, and window locations can be hard-coded into the GUI or expressed with relative positioning. The benefit of this driving methodology is that you are actually testing the GUI. The downside is that it is a lot of work, and many things can go wrong and ruin your tests.

An alternative to mouse-action recording is key combinations. If keyboard events can drive the GUI, they are often easier to automate, since none of the positioning issues that mouse events entail apply. The downside is that if the program is not normally driven heavily from the keyboard, the test suite is not exercising actual expected use. If users will predominantly use the mouse, the program should be tested that way.

1.1.3 Hooks

The third approach relies on a combination of techniques. Some test suites insert their own external method invocations into your project code, and these invocations pass extra information back to the test suite. Mouse macros are recorded, but window-positioning information (among other things) flows to the suite through the inserted code. As a combination of the previous two methods, this is likely the best overall approach.

1.2 Results verification

After the tests have run, there has to be some way to determine whether they passed or failed.
There are three main ways to do this: assumption, human-based review, or a machine comparison tool. Again, these mainly apply to GUI testing; CLI programs are much easier to verify. The inputs and outputs of a CLI program can be piped to a file and compared programmatically with a textual comparison tool such as diff.

1.2.1 Assumption

This involves making assumptions about what the expected output should be and comparing against that. One example is a spell checker. The author of the automated test wrote: "...when I was writing automation for the spelling engine in Visio I wrote a test that typed some misspelled text into a shape: 'teh'. This should get autocorrected to 'the'. It's hard to programmatically check if 'the' was correctly rendered on the screen. Instead I went and asked the shape for the text inside it and just did a string compare with the expected result." [1]

There are problems with this methodology. It assumes the program redraws on spelling correction; the spelling may have been fixed in the object's data without the screen being updated for the user. The key is to define your scope very carefully. Testing the redraw may simply not have been important here: if the scope was narrow enough to cover only the correction feature in the object itself, there is no problem with the test.

1.2.2 Human-based

This method relies on a human to make the final pass/fail verification. Usually screenshots are taken at specified intervals during the test run and saved for later human review. The benefit is the time saved clicking buttons. The downside is that someone must manually review the images, which is not only tedious but error-prone: after viewing many images, a bored reviewer may miss a critical issue that is not apparent at first glance.
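The assumption-based check from section 1.2.1 can be sketched in a few lines: rather than verifying pixels on screen, the test asks the object under test for its state and string-compares it with the expected result. The Shape class below is a hypothetical stand-in for a real application object, not an actual Visio API.

```python
# Sketch of assumption-based verification: query the object's state
# directly and compare, deliberately NOT checking the on-screen redraw.
# Shape is a toy stand-in for the application object a real test queries.

class Shape:
    """Toy shape with a trivial autocorrect feature."""
    CORRECTIONS = {"teh": "the"}

    def __init__(self):
        self.text = ""

    def type_text(self, text):
        # Apply autocorrection word by word, as the application would.
        self.text = " ".join(self.CORRECTIONS.get(w, w) for w in text.split())

def test_autocorrect():
    shape = Shape()
    shape.type_text("teh")
    # Assumption: if the object holds "the", the feature worked.
    # Whether the screen was redrawn is outside this test's scope.
    assert shape.text == "the"

test_autocorrect()
```

The design choice here is exactly the scope trade-off described above: the test is fast and reliable, but it verifies only the object's data, so a redraw bug would slip past it.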

1.2.3 Machine comparison tool

For GUIs, this method takes screenshots like the human-based method, but compares them programmatically against a known-correct master set of images. By capturing only the active portion of the GUI (the application's canvas, not parts of the desktop that may change and are not involved in the test), the suite avoids the problems associated with mouse-macro testing: screen position, size, and so on. Once the comparison is done, only the results are sent to the testers or stored in a reporting database. This drastically reduces the testers' workload, saving them from pushing buttons and staring at screenshots, and raises the efficiency of the process; a programmatic comparison tool is also less likely than a human to miss something that differs from the master image set. The same kind of comparison is used in CLI testing, often as a diff of test output against known-correct output.

2. Issues with automation

Automated testing can be extremely useful. The amount of testing that automated tools make possible is far beyond what manual testing can achieve; some firms claim that "It would take a manual tester four months of work to produce the results that we produce every single night with automated testing." [2] Yet many find that automated tools are only viable when developed in house to meet the demands of a specific application. Commercial tools are expensive, and there is a real payoff only if the tools and test scripts can be reused frequently. Another issue is that automated tools often require scripts specific to their environment to drive the tests. These scripts are essentially small programming languages for test suites that drive other programs.
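A test script is itself a small program that drives another program, and the CLI variant of this pattern (pipe a fixed input into the program under test, then diff its output against a known-good master) can be sketched with Python's standard library. The program under test here is a tiny inline script standing in for a real application.

```python
# Sketch of CLI drive-and-diff: pipe input in, capture output, and
# compare against known-good output with a unified diff.
# PROGRAM is a trivial uppercasing filter standing in for the real app.

import difflib
import subprocess
import sys

PROGRAM = [sys.executable, "-c",
           "import sys; [print(line.strip().upper()) for line in sys.stdin]"]

def run_and_diff(test_input, expected_output):
    """Drive the CLI program with piped input and return a unified
    diff against the expected output (an empty list means a pass)."""
    result = subprocess.run(PROGRAM, input=test_input,
                            capture_output=True, text=True)
    return list(difflib.unified_diff(
        expected_output.splitlines(), result.stdout.splitlines(),
        fromfile="expected", tofile="actual", lineterm=""))

# A passing run produces no diff lines; a failing run yields a
# human-readable report of exactly which lines differed.
assert run_and_diff("hello\nworld\n", "HELLO\nWORLD") == []
```

In a real suite the expected output would live in checked-in "master" files, so a deliberate behavior change is handled by regenerating the master rather than editing test code.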
The development of these scripts often poses problems for QA personnel, not all of whom are programmers in their own right. Add to that the time it takes to write the scripts, ensure they are bug free, and make them modular enough to serve more tests than just the current one, and the time cost mounts. Coupled with the view often held by management that if you are not coding, you are not working, this can become a hindrance and actually reduce the effectiveness of the QA process.

2.1 Cost

Most commercial automated testing suites are not cheap; IBM's Rational Robot, for example, costs over $4,000 for a single seat. There are some open-source software testing programs, which are free as in libre (not as in beer). Automated testing also adds unique costs to the general QA investment: being an engineered, coded, and documented product in its own right, it carries both up-front costs for purchasing and training and ongoing maintenance costs for the tool sets.

3. Test automation IS software development

There are extra things to consider with automated testing. First, a test automation strategy is very important, and it looks much like a regular software development cycle: documentation is key, along with requirements and scope definition. Coding skill is needed as well; by some accounts, projects sometimes write nearly as much code for automated testing as for the product itself. Test automation is often thought of solely as testing the entire application, and GUIs spring to mind, but much automated testing is done at the integration and even unit-testing levels. In fact, the sooner testing can be done the better, and this applies even more strongly to automated testing. Finally, modularity in automated testing scripts is important if the scripts are to have a significant lifetime.
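The modularity point above can be sketched as a single reusable verification helper shared by every test case, so each new test is a few lines of data rather than a new script. The helper name and the toy actions here are illustrative, not from any particular test framework.

```python
# Sketch of modular test scripting: one shared drive-and-verify helper,
# reused by every test case. The lambdas stand in for real actions that
# would drive the application under test.

def check(description, action, expected):
    """Run one test step and report PASS/FAIL against the expected value."""
    actual = action()
    status = "PASS" if actual == expected else "FAIL"
    return f"{status}: {description}"

# Each test case reuses the same helper instead of duplicating logic,
# so a fix to the verification code benefits every test at once.
results = [
    check("addition works", lambda: 2 + 2, 4),
    check("string upper works", lambda: "ok".upper(), "OK"),
]
for line in results:
    print(line)
```

When the driving and reporting logic lives in one place like this, tests survive staff turnover better: a newcomer learns one helper, not dozens of one-off scripts.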

4. When automated testing can go bad

There are instances where automated testing poses additional problems and is almost always doomed to failure. The first is spare-time automation: people are allowed, or only have time, to work on automation as a back-burner project. This lowers interest in the effort, and anything that does get produced will likely be of poor quality, since it was never made a priority. A lack of clear goals also hurts. What is expected of the automated tests needs to be laid out as clearly as any requirements document for a programming project; indeed, automated testing may require a good deal of programming to get working correctly. High turnover poses a problem as well. If the QA staff is not collectively committed to making the most of the tools, and only one or two people ever work with them, then when those people leave, others will find it difficult to use the tests they developed. Finally, automated testing is often viewed as a panacea for the QA process. It is a lot of work and requires careful planning to succeed. "Automated testing is much harder than manual testing. It actually makes the effort more complex since there's now another added software development effort" [3]. Some tests lend themselves to automation and some do not; care must be taken to discern which, and to come up with a well-structured test plan and clear automation goals.

5. Conclusions

Despite these issues, automated testing is proving to be a great asset to many development firms and QA divisions. It allows many companies to test their products more thoroughly, in line with software development paradigms such as Extreme Programming that call for testing at many steps along the development cycle, not just at the end.

Automated testing also gives developers something to build toward. Because automated test scripts are designed from the requirements specifications and reflect a very user-centric view of the product, they can be a great asset to product cohesiveness. Remember: wear your user hat!

References Cited

[1] Dickens, Charles. Software Test Engineering. Microsoft MSDN Articles. http://blogs.msdn.com/chappell/articles/106056.aspx
[2] Earis, Alan. Are automated test tools for real? Application Development Trends. May 2004. http://www.adtmag.com/article.asp?id=9307
[3] Kerry. Automated Software Testing: A Perspective. http://www.testingstuff.com/autotest.html
[4] Zallar, Kerry. Are you ready for the Test Automation Game? Software Quality Engineering. Nov/Dec 2001. Vol. 3, Issue 6. http://www.stickyminds.com/sitewide.asp?objectid=3286&function=detailbrowse&ObjectType=ART