SOFTWARE PERFORMANCE EVALUATION ALGORITHM EXPERIMENT FOR IN-HOUSE SOFTWARE USING INTER-FAILURE DATA



I.J.E.M.S., VOL. 3(2) 2012: 99-104    ISSN 2229-6425

*Jimoh, R. G. & Abikoye, O. C.
Computer Science Department, University of Ilorin, P.M.B 1515, Ilorin, Nigeria

ABSTRACT
This work aims at testing software performance against three main yardsticks: reliability, availability and maintainability, using a package (designated the IN-HOUSE software) prepared by us as the case study. The inter-failure data were recorded during an experiment with the software and provide information about the software failures; they are recorded in terms of execution time (in seconds) between successive failures, using a simulation of the real operational environment. Reliability is measured as a function of mean time to failure (MTTF), availability as a function of mean time between failures (MTBF), and maintainability as a function of mean time to maintain (MTTM). The inter-failure data collected were analysed to reflect the relationship among the three test parameters. It is shown that our IN-HOUSE software can compete favourably with any existing model in its class.

KEYWORDS: Software reliability, software availability, software maintainability, mean time to failure (MTTF), mean time between failures (MTBF), mean time to maintain (MTTM).

INTRODUCTION
Software performance evaluation has been of great concern to software engineers, since it takes some degree of expertise to determine whether a piece of software performs well, regardless of whether it is operational or not. There have been many difficulties in determining performance; the three measures (reliability, availability and maintainability) have to be considered jointly for an optimum result (Pfleeger, 1997). There are different ways of testing a system, including function testing, performance testing, acceptance testing and installation testing (Pfleeger, 1997). The yardsticks are defined as follows, from various scholarly points of view (IEEE, 1990; IAN, 1992; Musa, 1997; Pfleeger, 1997; Rankin, 2002; Land, 2003; Leach, 2003; Change Tech Solutions, 2003; Jawadekar, 2004).

Function testing: confirms that the tasks to be performed by the system are correctly performed. It checks that the system performs its functions as specified in the requirements.

Performance testing: measures the system against its non-functional requirements, such as speed, reliability, availability and accuracy. System performance is measured against the performance objectives set by the customers as expressed in the non-functional requirements.

Acceptance testing: is conducted to validate that the requirements set by the customers have been implemented by the developed system. It helps assure the customers that there is no variation between the required system and the developed one.

Installation testing: allows the user to exercise system functions and document additional problems that result from differences in the operational environment.

Failure data: the information captured when a particular piece of software fails. Inherent in any set of failure data is a considerable amount of uncertainty. Even when we can ascertain all possible faults existing in the software, we still cannot state with certainty when it will fail next.

This work is mainly on performance evaluation (testing) using a reliability function, an availability function and a maintainability function to measure software performance. The main aim is to confirm the suitability of the metrics for the three measures of software performance, bearing in mind that availability is a function not of execution time but of clock time.

TYPES OF PERFORMANCE TESTING
Performance tests are based on the requirements, so the types of tests are determined by the kinds of non-functional requirements specified.

Stress tests evaluate the system when stressed to its limits over a short period of time. If the requirements state that a system is to handle up to a specified number of devices or users, a stress test evaluates system performance when all those devices or users are active simultaneously.

Volume tests address the handling of large amounts of data in the system. For example, we look at whether data structures (such as queues or stacks) have been defined to be large enough to handle all possible situations. In addition, we check fields, records and files to see whether their sizes can accommodate all expected data.

Configuration tests analyse the various software and hardware configurations specified in the requirements.

Sometimes a system is built to serve a variety of audiences, and the system is really a spectrum of configurations. For example, we may define a minimal system to serve a single user, with other configurations building on the minimal configuration to serve additional users. Configuration tests evaluate all possible configurations to make sure that each satisfies the requirements.

Compatibility tests are needed when a system interfaces with other systems. We find out whether the interface functions perform according to the requirements. For instance, if the system is to communicate with a large database system to retrieve information, a compatibility test examines the speed and accuracy of data retrieval.

Regression tests are required when the system being tested is replacing an existing system. The regression tests guarantee that the new system's performance is at least as good as that of the old. Regression tests are always used during phased development.

Security tests ensure that the security requirements are met. We test system characteristics related to the availability, integrity and confidentiality of data and services.

Timing tests evaluate the requirements dealing with the time to respond to a user and the time to perform a function. If a transaction must take place within a specified time, the test performs that transaction and verifies that the requirement is met. Timing tests are usually done in concert with stress tests to see whether the timing requirements are met even when the system is extremely active.

Environmental tests look at the system's ability to perform at the installation site. If the requirements include tolerances for heat, humidity, motion, chemical presence, moisture, portability, electrical or magnetic fields, disruption of power or any other environmental characteristic of the site, then our tests guarantee the system's proper performance under these conditions.

Quality tests evaluate the system's reliability, maintainability and availability. These tests include calculation of mean time to failure and mean time to maintain, as well as the average time to find and fix a fault. Quality tests are sometimes difficult to administer. For example, if a requirement specifies a long mean time between failures, it may be infeasible to let the system run long enough to verify the required mean.

Recovery tests address response to the presence of faults or to the loss of data, power, devices or services. We subject the system to a loss of system resources and see whether it recovers properly.

Maintenance tests address the need for diagnostic tools and procedures to help in finding the source of problems. We may be required to supply diagnostic programs, memory maps, traces of transactions, circuit diagrams and other aids. We verify that the aids exist and that they function properly.

Documentation tests ensure that we have written the required documents. Thus, if user guides, maintenance guides and technical documents are needed, we verify that these materials exist and that the information they contain is consistent, accurate and easy to read. Moreover, requirements sometimes specify the format and audience of the documentation; we evaluate the documents for compliance.

Human factors tests investigate requirements dealing with the user interface to the system. We examine display screens, messages, report formats and other aspects that may relate to ease of use.
In addition, operator and user procedures are checked to see whether they conform to ease-of-use requirements. These tests are sometimes called usability tests.

SOFTWARE PERFORMANCE MEASURES
Software Reliability (Rs)
Software reliability can be defined as the ability of a software system to function consistently and correctly over long periods of time (Pfleeger, 1997). It can also be defined as the probability of failure-free software operation for a certain period of time in a given operational environment (Immonen, 2006). Software reliability is an important factor affecting system reliability. It is quite different from hardware reliability, since it reflects design perfection rather than manufacturing perfection, as is the case for hardware. Software reliability is a function of execution (operational) time and not of real time (clock time) (Pan, 1999), so that it more accurately reflects system usage. Software reliability measures the system's operation without failure over a long period of time. It operates on a scale between 0 and 1, that is, 0 ≤ Rs ≤ 1. The software system is considered reliable if and only if its reliability value is close to 1; consequently, it is considered unreliable if its reliability value is close to 0. The measure of reliability reflects system usage and is always calculated on the basis of execution time rather than clock time.

Software Availability (As)
Availability of software is guaranteed when the system is operating successfully according to specification at a given point in time (Pfleeger, 1997). More formally, it is the probability that a system is functioning completely at a given point in time, assuming that the required external resources are also available. It is a function of clock time and not of operational time. A system that is up and running has availability 1, while one that is unusable has availability 0. In order to achieve an accurate measure of the availability actually experienced by the end user, there is a need to fully understand the system configuration (Change Tech Solutions, 2003). Things to understand include the components and resources being used by the application in question (local and remote) and the required hardware and software resources. Availability is calculated by measuring the number of committed hours of availability (A) over a period of time decided by the organisation using the application (Pfleeger, 1997; Change Tech Solutions, 2003). Achieved availability can then be measured in terms of outage hours (outage time measured in hours within the committed hours of availability); both unplanned and predetermined outages must be considered in achieving continuous availability (Change Tech Solutions, 2003).
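As an illustration (not part of the original paper), the short Python sketch below computes the hours of outage allowed for a given continuous-availability target and the achieved availability from observed outage hours. It assumes a committed period of 720 hours per month, an assumption that reproduces, to rounding, the figures in Table 1 below; the function names are ours.

# Illustrative sketch (not from the paper): committed hours vs. outage hours.
# Assumption: the committed period is 720 hours per month, which reproduces
# the "hours of outage allowed" figures in Table 1 to rounding.

COMMITTED_HOURS_PER_MONTH = 720.0

def allowed_outage_hours(target: float, committed: float = COMMITTED_HOURS_PER_MONTH) -> float:
    """Hours of outage allowed per period for a continuous-availability target (e.g. 0.999)."""
    return (1.0 - target) * committed

def achieved_availability(outage_hours: float, committed: float = COMMITTED_HOURS_PER_MONTH) -> float:
    """Achieved availability = committed hours actually delivered / committed hours."""
    return (committed - outage_hours) / committed

if __name__ == "__main__":
    for target in (0.9999, 0.999, 0.995, 0.99, 0.986, 0.98):
        print(f"{target:.2%} target -> {allowed_outage_hours(target):.2f} outage hours/month allowed")
    print(f"3.6 outage hours -> achieved availability {achieved_availability(3.6):.3f}")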

The hours of outage can be used in measuring the mean time between failures. Table 1 below shows the relationship between availability targets and the hours of outage allowed if a given continuous-availability level is to be achieved.

Table 1: Outage Hours and Continuous Availability (Change Tech Solutions, 2003)
Continuous availability target    Hours of outage allowed per month
99.99%                            0.07 hours
99.9%                             0.7 hours
99.5%                             3.6 hours
99.0%                             7.2 hours
98.6%                             10.0 hours
98.0%                             14.4 hours

Software Maintainability (Ms)
According to the IEEE Standard Computer Dictionary (IEEE, 1990), maintainability can be defined as the ease and speed with which any maintenance activity can be carried out on an item of equipment. It is also defined as the ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment. Maintainability is analogous to cumulative failure time in reliability. Maintainability can also be defined as the probability that a specified maintenance action on a specified item can be successfully performed (putting the item into a specified state) within specified characteristics using specific tools and procedures (Rosenberg, 2000), or as the probability that, for a given condition of use, a maintenance activity can be carried out within a stated time interval using stated procedures and resources. The scale is between 0 and 1, that is, 0 ≤ Ms ≤ 1. This means that a system is maintainable if its maintainability is close to 1 and not maintainable if its maintainability is close to 0. Because software reliability, availability and maintainability are defined in terms of failures, they must be measured once the system is complete and working. The suitability of this approach is best explained by the calculated values of reliability, availability and maintainability.

MAIN RESULTS
In this work, we developed a piece of software, designated the IN-HOUSE software, for the result computation of the Postgraduate Diploma students in the Department of Computer Science, University of Ilorin; it has been in effective use for over six years. The most important factor in this work is system failure, which might be catastrophic, critical, marginal or minor. Information about the software failures is taken under two assumptions: that there is a possibility of causing another problem while solving a particular one, and that the next failure cannot be predicted.

Inter-failure Data
Inter-failure data is a record of successive failures of the departmental result computation in-house software in an operational environment over a particular period of time. The inter-failure data of the in-house application used for the computation of students' results (the case study) are taken in terms of execution time (in seconds) between successive failures of a command-and-control system during in-house testing, using a simulation of the real operational environment (Musa, 1997). The data are presented in Table 2 below.

Table 2: Inter-failure times in seconds (read from left to right)
 106   133   219   184   218   112   105   194   215   118
 240   152   179   126   210   190   772   222   128   216
 132   100   102   104   323   161   272   154   553   395
 440   170   257   183   437   295   125  1080   715   130
 175   580   852   190   115   900   618   925   702  1225
 125   238   203   105  1033   712   402   275   292   185
1080   521   442  1315  2212   452   220   185  1020   800
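To make the analysis easy to reproduce, the following Python sketch (ours, not the paper's) loads the 70 inter-failure times from Table 2 and computes the descriptive statistics reported by SPSS in Table 3 below; the variable inter_failure_times is simply our name for the data.

# Illustrative sketch (not from the paper): descriptive statistics for the
# Table 2 inter-failure times, matching the SPSS output in Table 3.
import statistics

inter_failure_times = [
    106, 133, 219, 184, 218, 112, 105, 194, 215, 118,
    240, 152, 179, 126, 210, 190, 772, 222, 128, 216,
    132, 100, 102, 104, 323, 161, 272, 154, 553, 395,
    440, 170, 257, 183, 437, 295, 125, 1080, 715, 130,
    175, 580, 852, 190, 115, 900, 618, 925, 702, 1225,
    125, 238, 203, 105, 1033, 712, 402, 275, 292, 185,
    1080, 521, 442, 1315, 2212, 452, 220, 185, 1020, 800,
]

n = len(inter_failure_times)                      # expected: 70
mttf = sum(inter_failure_times) / n               # mean time to failure, expected 403.8 s
std_dev = statistics.stdev(inter_failure_times)   # sample standard deviation, expected ~386.2 s
print(f"N = {n}, min = {min(inter_failure_times)}, max = {max(inter_failure_times)}")
print(f"MTTF (mean) = {mttf:.4f} s, std. deviation = {std_dev:.4f} s")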

Measuring Reliability, Availability and Maintainability
We want to measure reliability, availability and maintainability as attributes of the developed software, expressed as numbers between 0 (unreliable, unavailable, unmaintainable) and 1 (completely reliable, always available, completely maintainable). To derive these measures, we examine attributes of the failure data. Assuming that we are capturing failure data and have observed i - 1 failures, we can record the inter-failure times, or times to failure, as t1, t2, ..., t(i-1). The average of these values is the mean time to failure (MTTF). After each underlying fault has been fixed, the system runs again, and we use Ti to denote the yet-to-be-observed next time to failure; Ti is a random variable.

There are several other time-related quantities important to calculating availability and maintainability. Once a failure occurs, additional time is lost while the faults causing the failure are located. The mean time to maintain (MTTM) is the average time it takes to fix a faulty software component. The mean time to failure can be combined with the mean time to maintain to determine how long the system is unavailable; that is, we measure availability by examining the mean time between failures:

MTBF = MTTF + MTTM

As the system becomes more reliable, its MTTF should increase. We can therefore use MTTF in a measure whose value is near 0 when MTTF is small and approaches 1 as MTTF increases. Considering this relationship, a measure of reliability can be defined as

Rs = MTTF / (1 + MTTF)        (reliability function)

Similarly, we can measure availability in terms of MTBF as

As = MTBF (1 - MTBF)          (availability function)

and maintainability, so as to reward a small MTTM, as

Ms = 1 / (1 + MTTM)           (maintainability function)

Note that MTTF, MTBF and MTTM are all derived from the inter-failure data.

RESULTS AND DISCUSSION

Table 3: Descriptive statistics
                         N    Minimum   Maximum    Mean       Std. Deviation
Mean time to failure     70   100.00    2212.00    403.8000   386.2255
Valid N (listwise)       70

Table 4: One-sample t-test for mean time to maintain (test value = 0)
                         t       df   Sig. (2-tailed)   Mean Difference   95% CI Lower   95% CI Upper
Mean time to maintain    0.028   69   0.978             1.5143            -105.7221      108.7506
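For completeness, the sketch below (ours, not the paper's) shows how a one-sample t-test against a test value of 0, of the kind reported in Table 4, could be run with SciPy. The paper does not publish the per-failure maintenance times, so maintain_times in the usage comment is a hypothetical placeholder for that series.

# Illustrative sketch (not from the paper): a one-sample t-test against a
# test value of 0, as reported for the mean time to maintain in Table 4.
import numpy as np
from scipy import stats

def one_sample_ttest(values, test_value=0.0, confidence=0.95):
    """Return t, df, two-tailed p, mean difference and its confidence interval."""
    values = np.asarray(values, dtype=float)
    n = values.size
    mean_diff = values.mean() - test_value
    t_stat, p_two_tailed = stats.ttest_1samp(values, popmean=test_value)
    # Confidence interval of the mean difference from the t distribution.
    sem = values.std(ddof=1) / np.sqrt(n)
    margin = stats.t.ppf(1.0 - (1.0 - confidence) / 2.0, df=n - 1) * sem
    return t_stat, n - 1, p_two_tailed, mean_diff, (mean_diff - margin, mean_diff + margin)

# Usage, with the real (unpublished) maintenance-time series in place of maintain_times:
# t, df, p, diff, ci = one_sample_ttest(maintain_times, test_value=0.0)
# print(f"t = {t:.3f}, df = {df}, p = {p:.3f}, mean difference = {diff:.4f}, 95% CI = {ci}")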

Graph 1: Mean time to failure (histogram of counts per observed inter-failure time)
Graph 2: Mean time to maintain (histogram of counts per observed maintenance time)
Graph 3: Relationship between MTTF and MTTM (mean time to failure and mean time to maintain plotted against case number, 1-70)

From the results of the analysis, we have the following values:
Mean time to failure (MTTF) = 403.8000
Mean time to maintain (MTTM) = 1.5143
Mean time between failures (MTBF) = MTTF + MTTM = 403.8000 + 1.5143 = 405.3143

The obtained values are then used in calculating the reliability, availability and maintainability as follows:

Reliability: Rs = MTTF / (1 + MTTF) = 403.8000 / (1 + 403.8000) = 403.8000 / 404.8000 = 0.9975

Availability: As = MTBF (1 - MTBF) = 405.3143 × (1 - 405.3143) = 405.3143 × (-404.3143) = -163874.3675

Maintainability: Ms = 1 / (1 + MTTM) = 1 / (1 + 1.5143) = 1 / 2.5143 = 0.4
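The arithmetic above is straightforward to check; the Python sketch below (ours, not the paper's) plugs the reported MTTF and MTTM into the three metric functions and reproduces the same values, including the out-of-range availability figure discussed in the conclusion.

# Illustrative sketch (not from the paper): the three metric functions applied
# to the reported MTTF and MTTM values.

def reliability(mttf: float) -> float:
    """Rs = MTTF / (1 + MTTF); approaches 1 as MTTF grows."""
    return mttf / (1.0 + mttf)

def availability(mtbf: float) -> float:
    """As = MTBF * (1 - MTBF), the form used in the paper (shown here to be unusable)."""
    return mtbf * (1.0 - mtbf)

def maintainability(mttm: float) -> float:
    """Ms = 1 / (1 + MTTM); approaches 1 as MTTM shrinks."""
    return 1.0 / (1.0 + mttm)

MTTF = 403.8000          # mean time to failure (s), from Table 3
MTTM = 1.5143            # mean time to maintain (s), from Table 4
MTBF = MTTF + MTTM       # 405.3143

print(f"Rs = {reliability(MTTF):.4f}")      # ~0.9975
print(f"As = {availability(MTBF):.4f}")     # ~-163874.37, outside [0, 1]
print(f"Ms = {maintainability(MTTM):.4f}")  # ~0.3977 (reported as 0.4)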

CONCLUSION
As can be seen from the results, the departmental result computation in-house software is considered reliable, with a reliability value of 0.9975, which is very close to 1; it is also fairly maintainable, since the maintainability value of 0.4 is not very close to 0. On the contrary, the availability value does not fall within the expected range; this can be attributed to the fact that execution time cannot be used to evaluate availability, so such a metric cannot be used for availability. From our results we are able to confirm that the metric

As = MTBF (1 - MTBF)

cannot be used to measure software availability, since the mean time between failures (MTBF) is a function of execution time, whereas the availability of software has to do with clock time. It can thus be recommended that a different metric be used to measure software availability. The analysis revealed that using execution time cannot adequately capture the outage hours of the software system in question. This means it is practically impossible to measure reliability, maintainability and availability together using the same yardstick; the difficulty can be attributed to the fact that while some of the measures rely on execution time, others rely on clock time.

REFERENCES
Change Tech Solutions Inc. (2003) How to measure system availability targets. The Harris Kern Enterprise Computing Institute, February 2003. Accessed from http://articles.techrepublic.com.com/5100-10878_11-1060287.html in February 2008.

IAN, C. (1992) Software Engineering. Addison-Wesley Publishing Company, New York, United States, pp. 389-396.

IEEE (1990) IEEE Standard Glossary of Software Engineering Terminology. Accessed from http://ieeexplore.ieee.org/servlet/opac?punumber=2238 in July 2007.

Immonen, A. (2006) A method for predicting reliability and availability at the architectural level in software product lines. Research Issues in Engineering and Management, 2006 edition.

Jawadekar, W. S. (2004) Software Engineering: Principles & Practice. New Delhi: Tata McGraw-Hill Publishing Company Limited.

Land, R. (2003) Measurement of software maintainability. A publication of Malardalen University.

Leach, R. J. (2000) Introduction to Software Engineering. Boca Raton, London, New York, Washington D.C.: CRC Press.

Musa, J. D. (1997) Introduction to software engineering and testing. 8th International Symposium on Software Reliability Engineering.

Pan, J. (1999) Software Reliability in Embedded Systems. Carnegie Mellon University. http://www.ece.cmu.edu/~koopman/des_s99/sw_reliability/presentation.pdf

Pfleeger, S. L. (1997) Software Engineering: Theory & Practice. Washington D.C.: Pearson Education.

Rankin, C. (2002) The Software Testing Automation Framework. IBM Systems Journal, 41(1), 126-139.

Rosenberg, J. (2000) Can we measure maintainability? Sun Microsystems, 901 San Antonio Road, Palo Alto, California.