ANALYSIS & EXPERIMENTAL EVALUATION OF MS SQL DATABASE ON VARIOUS HYPERVISORS USING DIFFERENT STATISTICAL METHODOLOGY



DEVI PRASAD, S RAMACHANDRAM, REETA SONY A L

E-mail: bdeviprasad@ipirti.gov.in, schandram@osmania.ac.in, aalsony@gmail.com

ABSTRACT

Benchmarking is a tedious and time-consuming job. Because the features of new products are complex, benchmarking requires in-depth knowledge of the product and its domain. A customer usually asks for a benchmark of its product against a competitor's, and the service company will always look for a cost-effective way of carrying out that benchmark. There are several methods of benchmarking, such as One Factor At a Time (OFAT), the Design of Experiments (DOE) interaction-effect approach, the Pearson correlation-coefficient method and Multi-Vari analysis. A test manager in a project-management role needs to choose the best and most suitable method of benchmarking the products. This work analyses the applicability and suitability of these benchmarking methods for hypervisor testing.

Keywords: Benchmarking, OFAT, DOE, Interaction Effect, Hypervisor Testing

1. INTRODUCTION

Performance benchmarking of products is common in the computer industry. Normally, computer products are used as part of larger configurations rather than as stand-alone products. Therefore, it is advisable to measure the relative performance of products in the target configuration when benchmarking them. Many performance groups use the One Factor At a Time (OFAT) [1] methodology for performance benchmarking. This approach is suitable when ample time and money are available. OFAT measures the performance difference between products in conjunction with one factor at a time. However, OFAT does not quantify how much product 1 contributes to overall system performance compared with product 2, nor how well product 1 works with the other products in the same environment compared with product 2.

The performance-benchmarking frameworks OFAT, DOE interaction effect, Pearson correlation and Multi-Vari analysis are discussed in this paper, which presents the Multi-Vari and DOE interaction-effect approaches as a way to reduce the overall effort of performance benchmarking and to obtain better data about the performance comparison of the products under test. The core of the framework is Design of Experiments (DOE) [2], a robust statistical methodology for optimizing experiments. DOE is extensively used for performance tuning [3], performance optimization and screening the vital few factors that control a process. We exploit the DOE interaction effect, together with the Multi-Vari approach, to provide quantitative data on how the performance of a configuration changes when a factor is set at different levels and to what extent the factors interact with each other. This helps in comparing two products, identifying to what extent they can improve product performance and how well they work with the other products in the same environment.

The plan of the work is as follows. Section 2 introduces OFAT, the correlation-coefficient method, the DOE interaction effect and the Multi-Vari approach. Section 3 details a case study of performance benchmarking of two virtual environments (VMware and Xen) and one non-virtual (physical server) environment. Section 4 discusses the proposed and existing benchmarking frameworks, Section 5 discusses the benchmarking results and experimental findings and, finally, Section 6 presents the conclusions and future work.

2. OVERVIEW OF OFAT, INTERACTION EFFECT, CORRELATION METHOD AND MULTI-VARI ANALYSIS

2.1 One Factor At a Time (OFAT)

In this experimental methodology only one input factor is changed at a time, while the other factors are held constant (i.e. the factors are not varied simultaneously while conducting the experiment) [1][2].

2.2 DOE Interaction Effect (IE)

Design of Experiments (DOE) [4] is a structured method for conducting experiments and analyzing the results. Sir Ronald A. Fisher [5], the renowned mathematician and geneticist, first developed this method in the 1920s and 1930s. There have also been many other contributors [6] to DOE theory, such as George Box, Dorian Shainin and Genichi Taguchi. DOE, also called statistical experimental design, is a tool for determining the main and interaction effects of the different factors affecting process quality and for calculating the optimal settings of the controllable factors.

2.3 Correlation Coefficient Approach

By using the correlation coefficient [7] of the factors or variables, one can compare the level of interaction amongst the factors. The value of the correlation coefficient falls between +1 and -1. A positive correlation coefficient indicates that an increase in the values of the two factors is accompanied by an increase in the response, and vice versa for a negative sign. By using the direction of the correlation coefficient, we can identify the interaction among the factors.

2.4 Multi-Vari Analysis Approach

During the literature survey for this paper, we found that Multi-Vari analysis [8][9] has the potential to expose the interaction effect. This feature of the Multi-Vari approach is useful for benchmarking a product: it identifies the interaction between the factors at their various levels, and it builds on the foundation of Analysis of Variance (ANOVA). The feasibility of this approach is presented in the subsequent sections of this paper.

3. CASE STUDY: IDENTIFICATION OF A PERFORMANCE BENCHMARKING FRAMEWORK FOR VIRTUAL & NON-VIRTUAL ENVIRONMENTS

This case study describes the four performance benchmarking frameworks for the virtual and non-virtual environments and has four stages: (i) objective of benchmarking, (ii) designing the experiments, (iii) running the experimental design and (iv) analyzing the experimental data.

3.1 Experimentation

The goal was to compare the performance of two virtual environments and one physical environment running the MS SQL database, and to learn how the SQL server interacts with application factors such as Application Type (AT), Application Block Size (AB) and Read/Write ratio (RW) in the virtual and non-virtual setups. We conducted the benchmarking of the products with the test-bed configuration shown in Figure 1.

Figure 1. Experimental setup of the virtual environments (user and client machine accessing the guest OS/VM running on the hypervisor, which runs on the hardware server).

3.1.1 Input Factors and Levels

Several input factors influence the performance of MS SQL. Before designing the experiments, we identified the various controllable and uncontrollable factors of the system and, from that list, prioritized the controllable factors based on their effect on the response data (throughput in MBps and IOps). Finally, we screened out five significant factors: System Type (ST), Application Type (AT), Application Block Size (AB), Read Ratio (RW) and Outstanding IOs (OIO). The levels of the factors were chosen based on an exhaustive system study and the experimental requirements.

3.1.1.1 System Type (ST). This factor is the type of environment. We chose three system types, namely native (physical server), VMware ESX Server and Xen Server.

3.1.1.2 Application Type (AT). This factor is the nature of the application data access, either random (R) or sequential (S).

3.1.1.3 Application Block Size (AB). This is the size of the application data block in kilobytes (KB), usually set at the application user layer. In this experiment, we chose three levels of AB: 8 KB, 64 KB and 256 KB.

3.1.1.4 Read Ratio (RW). This ratio specifies the proportion of read and write operations performed on the disk by the application; Iometer is used to simulate this ratio. In this experiment, we chose two levels of RW: 0 (write only) and 100 (read only).

3.1.1.5 Outstanding IOs (OIO). This is the number of requests waiting in the input queue to be processed. In this experiment, we chose three levels of OIO: 4, 8 and 16.

3.1.2 Output Response

Throughput (T) in IO operations per second (IOps): the experimenter must choose a suitable response, and the response data may be single or multiple. In this experiment, the response is throughput in IOps. The factors and levels are summarized in Table 1.

Table 1. Description of the experimental factors and their levels

S.No  Factor  Level 1  Level 2  Level 3
1     ST      Native   VMware   Xen
2     AT      R        S        -
3     AB      8 KB     64 KB    256 KB
4     RW      0        100      -
5     OIO     4        8        16

4. BENCHMARKING FRAMEWORKS

4.1 OFAT: Approach 1

In this experiment there are five factors. The total number of experimental runs comes to 108 and, with three replicates, to 108 x 3 = 324 runs. Obtaining the performance benchmarking characteristics therefore needs 324 runs for successful completion of the experiment.

4.2 DOE Interaction Effect (IE): Approach 2

4.2.1 Interaction effect. This is the effect of one factor and its levels in combination with the levels of the other factors in the system. In this work we use it to quantify the amount of interaction amongst the factors, which in turn gives an idea of how to benchmark the product that is chosen as one of the factors in this experiment.

4.2.2 Generating the DOE design table. We used a DOE tool to generate a randomized design table. In this experiment, we chose five factors for benchmarking the hypervisors. The full-factorial DOE design gives 108 runs and, with three replicates, 108 x 3 = 324 runs. These runs were randomized to reduce the influence of a previous run on the system state. Moreover, we decided to conduct the experiment in a single block, as the experimental duration was small. We conducted the experiment in the randomized run order. The experiment involved setting up the environment and measuring the throughput using SQLIO [11]. The observed throughput values were tabulated in the DOE tool for further data analysis. A minimal sketch of how such a randomized full-factorial run list can be built is shown below.
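The DOE tool itself is not shown in the paper; the following Python sketch only illustrates the idea of a replicated, randomized full-factorial design built from the factors and levels of Table 1. The script and its variable names are assumptions for illustration, not the tool used in the experiment.

    import itertools
    import random

    # Factors and levels from Table 1 (System Type, Application Type,
    # Application Block Size in KB, Read Ratio in %, Outstanding IOs).
    factors = {
        "ST": ["Native", "VMware", "Xen"],
        "AT": ["Random", "Sequential"],
        "AB": [8, 64, 256],
        "RW": [0, 100],
        "OIO": [4, 8, 16],
    }

    REPLICATES = 3

    # Full factorial: every combination of levels -> 3*2*3*2*3 = 108 runs.
    base_runs = [dict(zip(factors, combo))
                 for combo in itertools.product(*factors.values())]

    # Replicate each run three times (108 x 3 = 324) and randomize the order
    # so that the state left by a previous run does not bias the next one.
    design = [dict(run, replicate=r + 1)
              for r in range(REPLICATES) for run in base_runs]
    random.shuffle(design)

    print(len(base_runs), "distinct treatments,", len(design), "runs in total")
    for run in design[:3]:
        print(run)  # e.g. {'ST': 'Xen', 'AT': 'Random', 'AB': 64, 'RW': 0, 'OIO': 8, 'replicate': 2}

Each entry of the shuffled list would then be executed on the test bed and its measured throughput recorded against the run.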
4.2.3 Preprocessing. We studied the variance among the replicates to understand which factors were causing the maximum variance among them.

It was found that an application block size of 256 KB was giving the maximum variance. This inference was vetted when we found that most of the outliers were data points with a 256 KB application block size. Further study of the data sets revealed that the combination of AB and RW causes the maximum variance in the output. We decided to conduct a separate study to understand this observation.

4.2.4 Residual Analysis. A residual is the difference between the model-fitted output value and the observed output response value for each experimental run. Figure 2 shows the complete residual analysis of the experiment. A bell-shaped arrangement of the bars in the histogram and residuals lying on the straight line in the normal probability plot indicate that the residuals are normally distributed. These graphs show that the performance data is statistically sound for further analysis.

Figure 2. Residual analysis of the DOE experiment (normal probability plot, histogram of the residuals, residuals vs. fitted values and residuals vs. observation order).

Figure 2 (top right) depicts the residuals vs. the fitted values of the experiment; the plot does not exhibit any shape and is a random cloud, suggesting that the variance among the residuals is random. Figure 2 (bottom right) shows the residuals vs. run order; it does not exhibit any pattern, suggesting that there were no time dependencies and no influence of a previous run on the current run in the experiment.

4.2.5 DOE Interaction Effect (IE) Data Analysis. The following graphs show the interaction effect of system type [12] with the application factors in the configuration; a sketch of how such an interaction model can be fitted and its residuals obtained is given at the end of this subsection. Figure 3 shows the interaction effect between system type and application type. It shows that Xen performs better than the physical system and VMware irrespective of the application type: Xen provides 7.44% better performance for random applications and 11.22% better for sequential applications.

Figure 3. Interaction plot of system type vs. application type.

Figure 4 shows the interaction effect between system type and application block size. From Figure 4 we can see that the performance of the three system types is almost the same when the application block size is set at 256 KB and nearly the same at 64 KB, but Xen gives 33.93% better performance when the application block size is set at 8 KB.

Figure 4. Interaction plot of system type vs. application block size.

Figure 5 shows the interaction effect of system type vs. read/write ratio. The inference from this graph is that Xen performs better than native and VMware, while the native system type provides mean performance for both the read-only and the write-only workloads when compared with Xen and VMware.

Figure 5. Interaction plot of system type vs. read/write ratio.
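As a rough illustration of the analysis behind Figures 2-5, the sketch below fits a linear model with two-factor interactions to the observed throughput values and extracts the fitted values and residuals. It assumes the 324 observations are available in a CSV file named runs.csv with columns ST, AT, AB, RW, OIO and IOps; the file name, the column names and the use of pandas/statsmodels in place of the Minitab-style DOE tool are all assumptions for illustration only.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical table of the 324 observed runs (one row per run).
    runs = pd.read_csv("runs.csv")  # columns: ST, AT, AB, RW, OIO, IOps

    # Main effects plus the system-type interactions discussed in the paper.
    model = smf.ols(
        "IOps ~ C(ST) * C(AT) + C(ST) * C(AB) + C(ST) * C(RW) + C(OIO)",
        data=runs,
    ).fit()

    # ANOVA table: which main effects and interactions are significant.
    print(sm.stats.anova_lm(model, typ=2))

    # Residual analysis: residual = observed - fitted, as in Figure 2.
    residuals = model.resid
    fitted = model.fittedvalues
    print(residuals.describe())

    # Interaction-plot data, e.g. mean IOps per (system type, application type) cell.
    print(runs.groupby(["ST", "AT"])["IOps"].mean().unstack())

The cell means printed at the end are the values joined by the lines of an interaction plot such as Figure 3.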

4.3 Correlation Coefficient Method: Approach 3

By using the correlation coefficient [13] of the factors or variables, one can compare the level of interaction amongst the factors. The value of the correlation coefficient falls between +1 and -1. A positive correlation coefficient indicates that an increase in the values of the two factors is accompanied by an increase in the response, and vice versa for a negative sign. By using the direction of the correlation coefficient, we can identify the interaction among the factors. In this work the Pearson correlation coefficient is used to estimate the relationship between the factors; a minimal sketch of how these coefficients can be computed is shown after Table 2.

The outcome shows a Pearson coefficient of 0.000 between Application Block Size, Read Ratio and Outstanding IOs, which means these factors have no interaction with each other. Table 2 shows Pearson coefficients of +0.026 for Read Ratio and +0.039 for Outstanding IOs against throughput in IOps, so OIO and RW have a small positive correlation with throughput, while Application Block Size has a negative correlation (-0.424) with throughput in IOps. The main drawback of this approach is that it can identify the interactions only when all the screened controllable factors in the system are quantitative. Our experiment consists of three quantitative factors (AB, RW and OIO) and two text (non-quantitative) factors, ST and AT. Since the prime factor, system type (ST), is non-quantitative, benchmarking of the system type using the correlation-coefficient method is not feasible.

Table 2. Correlations between Read Ratio, Application Block Size, Outstanding IOs and throughput in IOps (cell contents: Pearson correlation PC, P-value)

Factor pair                                     PC        P-value
Application Block Size vs. Read Ratio           0.000     1.000
Application Block Size vs. Outstanding IOs      0.000     1.000
Read Ratio vs. Outstanding IOs                  0.000     1.000
Throughput (IOps) vs. Application Block Size    -0.424    0.000
Throughput (IOps) vs. Read Ratio                +0.026    0.644
Throughput (IOps) vs. Outstanding IOs           +0.039    0.481
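A minimal sketch of this check, assuming the same hypothetical runs.csv table as above, computes the Pearson coefficient and its p-value for each quantitative factor against the IOps response; scipy is used here only for illustration and is not the tool referenced in the paper.

    import pandas as pd
    from scipy.stats import pearsonr

    runs = pd.read_csv("runs.csv")  # hypothetical columns: ST, AT, AB, RW, OIO, IOps

    # Pearson correlation applies only to the quantitative factors;
    # the text factors ST and AT cannot be benchmarked this way.
    for factor in ["AB", "RW", "OIO"]:
        r, p = pearsonr(runs[factor], runs["IOps"])
        print(f"{factor} vs IOps: PC = {r:+.3f}, P = {p:.3f}")

    # Factor-vs-factor correlations are 0 by construction in a balanced
    # full-factorial design (the factors are orthogonal).
    print(runs[["AB", "RW", "OIO"]].corr(method="pearson"))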

4.4 Interaction-Effect Framework Using Multi-Vari Analysis: Approach 4

We found that the Multi-Vari approach [14] can also identify the interaction between the factors and their levels, and it has the potential to identify the interactions without assuming independence between the factors. The following charts show the benchmarking of system type for the given application profile; a sketch of how the chart means can be computed is given at the end of this subsection.

Figure 7. Interaction effect between the system type and the application type (Multi-Vari chart).

In Figure 7 the black dots represent the means of each system type at each level of the application type, and the black lines connect the mean values of the system-type factor within each level of the application-type factor. The red dot at the centre of each black line represents the mean throughput at that level of the application type, and the red lines connect these means. From Figure 7 there is no evidence of interaction between system type and application type. The red line connecting the application-type means shows that the sequential application results in a greater average throughput than the random application.

Figure 8. Interaction effect between the system type and the application block size (Multi-Vari chart).

Figure 8 reveals the same picture: it does not show any interaction between the system types across the application block sizes, and it shows that Xen performs better at 8 KB compared with native and VMware.

Figure 9. Interaction effect between the system type and the application read ratio (Multi-Vari chart).

Figure 9 shows a good interaction between the system types and the read/write ratio. Xen performs well compared with native and VMware at the respective read ratios, but at a read ratio of 100 the performance of Xen is only slightly better than at a read ratio of 0.

Figure 10. Complete benchmarking results for the system types (Multi-Vari chart; panel variables: application block size and application type).

The summary of this Multi-Vari case study from Figure 10 is that system type has little impact on the overall SQL database IO capacity compared with other factors such as application block size and application type.
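A Multi-Vari chart is essentially a plot of cell means at each combination of factor levels. As a rough sketch, again assuming the hypothetical runs.csv table introduced earlier, the means plotted in Figures 7-10 could be computed as follows.

    import pandas as pd

    runs = pd.read_csv("runs.csv")  # hypothetical columns: ST, AT, AB, RW, OIO, IOps

    # Black dots in Figure 7: mean IOps of each system type within each application type.
    st_within_at = runs.groupby(["AT", "ST"])["IOps"].mean()
    print(st_within_at)

    # Red dots in Figure 7: overall mean IOps for each application type.
    print(runs.groupby("AT")["IOps"].mean())

    # Figure 10 style summary: system-type means within each (AB, AT) panel.
    panel_means = (runs.groupby(["AB", "AT", "ST"])["IOps"]
                       .mean()
                       .unstack("ST"))
    print(panel_means)

Comparing the spread of the system-type means against the spread across application block sizes and application types gives the same conclusion as the charts: system type contributes little to the overall variation.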

5. FINAL DISCUSSION OF THE BENCHMARKING FRAMEWORKS

5.1 OFAT

The one-factor-at-a-time methodology executes all possibilities of the factors and their levels, so conducting and benchmarking the system types takes more time, cost and manpower. Many references and statistical analysts point out that this method cannot quantify the interaction effect, lacks validation of normality and is prone to experimental errors (time trends, etc.). Nevertheless, we feel the OFAT methodology could also determine the interaction effect if it were combined with the Multi-Vari analysis framework.

5.2 DOE Interaction Effect

This method of benchmarking uses the interaction feature of the DOE methodology. In this experiment, we did not use fractional-factorial designs for conducting the experiment. The interaction-effect feature of DOE quantifies the relative performance of the product/factor with the other factors. Finally, this method gave robust and good benchmarking results and saved experimental time and cost.

5.3 Correlation Coefficient Approach

This method of benchmarking the product is more effective when all the factors are quantitative: it directly gives the interaction level of all the experimental factors. However, it cannot quantify the interaction when the factors are non-quantitative (so-called text factors). Our experiment consists of three quantitative factors (AB, RW and OIO) and two text factors, ST and AT. Since the prime factor, system type, is non-quantitative, benchmarking of the system types using the correlation-coefficient method is not feasible.

5.4 Multi-Vari Analysis

This is a popular quality-control tool used before experimentation to analyse the variability of an experiment. During the literature survey for this work, we found that this method also quantifies the interaction effect very easily. In this work we used the DOE experimental runs and their responses for analysing the interaction effects, and it gave the same results as the DOE interaction-effect approach (Approach 2). One can choose either Multi-Vari or DOE interaction-effect analysis, depending on which is more comfortable for data analysis. The interpretation of the interaction level of many factors at a time is much simpler in the Multi-Vari approach.

6. CONCLUSIONS AND FUTURE WORK

In this work, the feasibility of a cost-effective way of benchmarking the SQL database IO capacity of virtual and non-virtual system environments is illustrated using the DOE methodology and Multi-Vari analysis. The DOE interaction effect and the Multi-Vari approach are found to be very useful features for quantifying how well a product works with another in a given configuration. We used the framework effectively in comparing three system types in virtual and non-virtual environments, and found that the proposed frameworks help in reducing the experimental duration and costs significantly. The case study shows the performance benchmarking of native, VMware and Xen in a scientific manner, and the experimental data are statistically sound. The interaction plots provide the relative performance results and the characteristics of the system type with the other factors. The outcome of the interaction plots shows that Xen outperforms native and native outperforms VMware for all the given factors in the same environment, with good compatibility with the application factors. Finally, this experiment has given a novel way of benchmarking products and provides a flexible base for investigating their performance benchmarking. The authors believe that this approach provides simple and cost-effective guidelines to help engineers, scientists and researchers in the field of benchmarking.

REFERENCES

[1] Xiang Li, "System Regularities in Design of Experiments and Their Application", PhD Thesis, Massachusetts Institute of Technology, June 2006.
[2] Vadde, K.K., Syrotiuk, V.R. and Montgomery, D.C., "Optimizing Protocol Interaction Using Response Surface Methodology", IEEE Transactions on Mobile Computing, Vol. 5, No. 6, pp. 627-639, June 2006.

[3] Prabu D, Devi Prasad B and Lakshminarayana Prasad K, "Performance Tuning of Storage System Using Design of Experiments", Proceedings of the 33rd International Conference of the Computer Measurement Group (CMG), San Diego, California, USA, 2007.

[4] Montgomery, D.C., "The Use of Statistical Process Control and Design of Experiments in Product and Process Improvement", IIE Transactions, Vol. 24, No. 5, pp. 4-17, 1992.

[5] Sir Ronald A. Fisher. http://en.wikipedia.org/wiki/ronald_fisher

[6] DOE contributors. http://www.asq.org/aboutasq/who-we-are/

[7] Casagrande, J.T., Pike, M.C. and Smith, P.G., "The Power Function of the 'Exact' Test for Comparing Two Binomial Distributions", Journal of Applied Statistics, Vol. 27, pp. 176-180, 1978.

[8] Zaciewski, Robert D. and Németh, Lou, "The Multi-Vari Chart: An Underutilized Quality Tool", ASQ Quality Progress, Vol. 28, No. 10, pp. 81-83, October 1995.

[9] Advanced Systems Consultants, Scottsdale, Arizona. http://www.mpcps.com/mva1.html

[10] Martin Schatzoff, "Design of Experiments in Computer Performance Evaluation", IBM Journal of Research and Development, Vol. 25, No. 6, pp. 848-859, 1981.

[11] SQLIO Project. http://www.microsoft.com/downloads/details.aspx?familyid=9a8b5b-84e4-4f24-8d65-cb53442d9e19&displaylang=en

[12] Intel Virtualization Technology. http://www.intel.com/technology/computing/vptech

[13] Montgomery, D.C., Introduction to Statistical Quality Control, 3rd Edition, New York: John Wiley and Sons, 1996.

[14] Mario Perez-Wilson, "Multi-Vari Chart and Analysis: A Pre-experimentation Technique to Quantify the Sources of Variability".

[15] Minitab Inc. http://www.minitab.com/products/minitab/