
Applying Operational Profiles to Demonstrate Production Readiness of an Oracle to SQL Server Database Port Using Web Services

James Cusick, Imran Riaz, Hubert Huang, Allen Tse, Murugan Gnanavel
{james.cusick; imran.riaz; hubert.huang; allen.tse; murugan.gnanavel}@wolterskluwer.com
Wolters Kluwer Corporate Legal Services, New York, NY

Abstract

A common problem in application development is porting existing and modified functionality to a new target technology platform. This paper details a recent experience porting a modified Oracle-based database implementation to a Microsoft SQL Server implementation. The port effort used simulated load to provide confidence in the production readiness of the new target database under current and future user load conditions. The approach was guided by an Operational Profile and by web-services-based test scripts.

Introduction

A recent project at Wolters Kluwer Corporate Legal Services required a port of an online transaction processing relational database from an Oracle platform to a Microsoft SQL Server platform. The decision to migrate was based on cost, performance, application integration, the new features and capacity of SQL Server 2008, and other factors. The focus of this paper, however, is on the specific efforts taken to ensure that the selected database platform would support current and future load expectations and meet all technical, functional, and reliability requirements.

Several challenges presented themselves immediately. First, the current workload of the legacy system being rewritten end to end had to be defined. Second, a test environment needed to be constructed to simulate the load levels expected of the new platform. Third, the test and production environments had to be built out and their readiness demonstrated. In addition, the project required a conversion from the legacy Oracle 9i database to the SQL Server 2008 database. This document outlines the steps required to build an Operational Profile [1] driven database proving approach utilizing a web services architecture to meet these challenges, and summarizes the actual results. The work reflected in this paper was conducted between late 2009 and August 2010 and required several technical leads, managers, developers, and testers working part-time during the course of the larger project to realize the new application launch.

Method and Approach

The application of interest is a corporate governance product delivered to customers via a SaaS (Software as a Service) model over the public Internet. The redesign of the application was triggered by several product strategy directions and technical evolution plans. The core functionality of the application was revalidated through an intensive user-centered design process [2]. Working with Microsoft, the Wolters Kluwer team developed the platform of choice for the target design, which included SQL Server as the primary database. Other technologies introduced with this platform included Microsoft's Silverlight [3], a complete SOA (Service-Oriented Architecture [4]) approach, and an automated unit test approach. The details of the full architecture, its capabilities,

design philosophy, and results are beyond the scope of this paper. The focus instead is on the preparation and realization of the SQL Server database supporting the broader application.

The application itself consists of a database of roughly 50 GB including data, indexes, log files, and performance data. The application comprises over 200 tables, 500+ stored procedures, and roughly 50 functions. The largest table contained 5,000,000 records as of the port.

For most systems development projects a key objective is to build a system that performs within an acceptable response time for its users at all load levels [5]. Conceptually this can be understood graphically as shown in Figure 1. As load increases, response time can be expected to increase (an assumed proportional relationship between system workload and response time). In this project, efforts were directed at defining the response time goals and workload levels (via an Operational Profile), and at demonstrating the production readiness of the system at any load level using test scripts that interfaced with the web services exposed by the SOA of the new application.

In order to understand this relationship between workload and response times, and to meet other goals, several core technical objectives were established. These objectives, set up for the development and database teams working together, included the following:

1. Demonstrate database performance early in the lifecycle.
2. Allow time for adjustment should it be required.
3. Meet a 30% CPU utilization level on the new platform with a load equivalent to the current legacy system.
4. Prepare for conversion and go-live end to end (including conversion performance itself).
5. Provide robust support for SQL tuning, performance results, and configuration correctness.
6. Understand performance of integration requirements to external systems.

The core success criteria for the database platform, derived from the above goals, were defined within three factors:

1. Page Response Time (PRT)
   a. 98% of requested pages should fall within the acceptable response time window at 1X load.
   b. 95% of requested pages should fall within the acceptable response time window at 2X load.
   c. Online transaction and real-time reporting performance should not conflict.
2. Resource Utilization
   a. CPU and memory utilization should remain under 30% in normal circumstances.
   b. No disk I/O bottlenecks.
   c. During backups and any system maintenance, overall CPU utilization should remain under 40% except for occasional peaks.
   d. Failover and 2X load conditions should be achievable and predictable with nominal increases in resource use.
3. Database Health (see the monitoring sketch below)
   a. No deadlocks should occur.
   b. Buffer cache hit ratio should be > 90%.
   c. The number of database connections should remain within the threshold amount and should increase gradually as load increases.
   d. No significant errors in the alert log during the tests and baseline runs.
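The database health criteria above can be checked directly against SQL Server's own counters. As an illustration only, the following minimal Python sketch queries sys.dm_os_performance_counters for the buffer cache hit ratio and the batch request counter; the connection string and the use of pyodbc as the client library are assumptions for this sketch and are not part of the project's actual tooling.

```python
# Minimal health-check sketch (illustrative only; not the project's actual monitoring scripts).
# Assumes the pyodbc driver and a SQL Server login with VIEW SERVER STATE permission.
import pyodbc

CONN_STR = (  # hypothetical connection string
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dbserver;DATABASE=master;"
    "Trusted_Connection=yes;"
)

QUERY = """
SELECT RTRIM(counter_name) AS counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio',
                       'Buffer cache hit ratio base',
                       'Batch Requests/sec')
"""

def check_health():
    with pyodbc.connect(CONN_STR) as conn:
        rows = {name: value for name, value in conn.cursor().execute(QUERY)}
    # The hit ratio is exposed as a pair of counters: ratio / ratio base.
    hit_ratio = 100.0 * rows['Buffer cache hit ratio'] / rows['Buffer cache hit ratio base']
    print(f"Buffer cache hit ratio: {hit_ratio:.1f}% (target > 90%)")
    # 'Batch Requests/sec' is cumulative; sample it twice over an interval to derive a rate.
    print(f"Batch Requests (cumulative counter): {rows['Batch Requests/sec']}")

if __name__ == "__main__":
    check_health()
```

A check like this can be run before and after each load round to confirm the health criteria, with deadlock and connection counts gathered from the same DMV family.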

[Figure 1: a layered chart of response time (seconds) versus load, with bands rising from "Acceptable User Experience" at 0 seconds through "Questionable User Experience" and "Unacceptable User Experience" to "Business is Closed"; the exact second thresholds are left unspecified.]

Figure 1 Typically, as load increases, response time may also increase. This chart is a theoretical layered model of response time; the approach provided in this paper addresses how to quantify these levels precisely.

The core activities to meet these objectives included the following areas of focus:

1. Early development performance testing
2. Engineering of production boxes
3. SQL tuning as required
4. Treating the test environment as production, including monitoring and adjustments
5. Conversion preparation and testing
6. Production platform performance testing

These tasks were further broken down into detailed activity phases, summarized in Table 1 below.

Planning & Analysis (6 weeks): 1) Define objectives; 2) Create technical approach; 3) Define load profiles; 4) Prepare environment; 5) Prepare test harness; 6) Trial testing.

Development Proving Testing (4 weeks): Conduct multiple rounds of proving testing on early versions of the application and database in the development and test environments, and then in the production environment prior to go-live.

Quality Assurance (QA) Proving Testing (3 weeks): Conduct multiple rounds of proving testing on QA versions of the application and database. Conduct this testing in conjunction with the QA team and load testing specialists.

Technical Adjustments (2 weeks): 1) Based on the results of early testing, recommend technical, configuration, application, or database changes as required; 2) Fine-tune test procedures, scripting, and measurements.

Production Setup (1 week): 1) Work with the hosting vendor on the production cluster configuration; 2) Review the configuration and basic operational state.

Conversion Trial and Performance Testing (2 weeks): 1) Conduct trial runs of the data conversion; 2) Carry out load testing; 3) Adjust configuration as required.

Freeze Period (2 weeks): 1) Ensure no changes to the environment for the 2 weeks prior to go-live.

Go-live (2 weeks): 1) Support go-live.

Table 1 Database Proving Activities Phased Plan

[Figure 2: a flow from legacy application usage (users, sampled data extract, current usage profile) through a projected usage profile mapped to the target application, illustrating how legacy function frequencies (F1, F2, F3) are remapped to target test design percentages (e.g., F1 = 20%, F2a = 15%, F2b = 15%, F3 = 40%).]

Figure 2 The role of the Operational Profile in building a target test approach.

The Operational Profile as Applied

An Operational Profile is, at its simplest, a statistical model of the predicted usage of system functions or operations. Figure 2 indicates the normative, high-level approach to creating a load profile for testing between the legacy and target platforms. In summary, data from the legacy instrumentation repository was sampled to create an abridged functional model with assigned usage probabilities (see Table 2). This model was mapped to a projected usage model taking into account the changes in functionality between the legacy and refreshed application implementations (see the inset in Figure 2). Further mapping was also done between the application functional usage model and the underlying web service scripts. With the projected web service script load model developed, the team was able to drive the available test mechanisms. This load profile then exercised the application and, in turn, the database. Instrumentation data captured during testing, together with data from resource and performance monitoring tools, then provided a basis for analyzing performance and pinpointing tuning opportunities.

Module Name    Hits     Usage Percent
Function 1     85901    33.76%
Function 2     47474    18.66%
Function 3     23171    9.11%
Function 4     18646    7.33%
Function 5     14916    5.86%
Function 6     12020    4.72%
Function 7     8710     3.42%
Function 8     7905     3.11%
Function 9     6606     2.60%
Function 10    4018     1.58%

Table 2 A Sample Operational Profile

Application Architecture and Database Proving Environment

The application architecture is shown in Figure 3.1. This diagram indicates the SOA boundary that encloses the web service layer along with the multi-tier backend layers and exposes a presentation layer and other consumers via port 8080. To visualize the database proving environment, consider a multi-tier architecture as shown in Figure 3.2. This diagram indicates the basic environment leveling and components required to carry out the database proving approach used. Essentially, the load model produced above from the legacy usage data, modified with the new functions represented in the Operational Profile, drove an application harness layer. This harness consisted of scripts which call specific web services of the modules defined in the target usage profile, or sets of those services, in sufficient frequency to simulate the per-hour load rates expected from the workload profile (in production this application receives tens of thousands of page requests per hour). In addition to the frequency, randomness in invoking the web services was applied to simulate real-world usage patterns, as sketched below.
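To illustrate how a harness can weight and randomize web service invocations according to the Operational Profile, the following is a minimal Python sketch. The endpoint, module names, hourly rate, and single-threaded pacing are hypothetical; the project's harness used its own scripts and dedicated load-testing tools rather than this code.

```python
# Minimal load-harness sketch (illustrative only; endpoint, modules, and rates are hypothetical).
# Picks web service calls at random, weighted by the Operational Profile, at a target hourly rate.
import random
import time
import requests  # assumed HTTP client for this sketch

BASE_URL = "http://appserver:8080/services"  # hypothetical SOA endpoint

# Usage probabilities taken from an Operational Profile like Table 2 (top entries shown).
OPERATIONAL_PROFILE = {
    "Function1": 0.3376,
    "Function2": 0.1866,
    "Function3": 0.0911,
    # ... remaining modules, probabilities summing to roughly 1.0
}

REQUESTS_PER_HOUR = 10_000  # 1X base load; multiply for 2X-5X stress runs

def run_load(duration_seconds, load_multiplier=1.0):
    interval = 3600.0 / (REQUESTS_PER_HOUR * load_multiplier)  # mean gap between calls
    modules = list(OPERATIONAL_PROFILE)
    weights = list(OPERATIONAL_PROFILE.values())
    end = time.time() + duration_seconds
    while time.time() < end:
        # Weighted random choice reproduces the profile's functional distribution.
        module = random.choices(modules, weights=weights)[0]
        resp = requests.get(f"{BASE_URL}/{module}", timeout=30)
        print(module, resp.status_code, resp.elapsed.total_seconds())
        # Randomized pacing around the mean interval avoids a fixed arrival pattern.
        time.sleep(random.expovariate(1.0 / interval))

if __name__ == "__main__":
    run_load(duration_seconds=600, load_multiplier=1.0)
```

In practice many such scripts run in parallel; the load_multiplier here corresponds to the 1X through 5X load levels used in the stress runs described later.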

Figure 3.1 Application Architecture

Migration and Testing

The new application took the legacy database schema as a starting point and redesigned it to fit the needs of the new application. To migrate the application, a mapping from legacy to new was required, as was the staged build-out of the development, test, and production environments. The steps in this process included:

1. Migrate the database architecture
   a. Build the SQL Server instance
   b. Configure the server
   c. Migrate the storage architecture
2. Migrate the schemas
   a. We used custom scripts (involving bulk copy) to populate fresh, empty tables during migration; for performance collection we used the Performance Warehouse tools provided by Microsoft.
   b. Migrate the schema objects
3. Migrate the users
   a. Create user accounts
   b. Create roles and grant privileges
4. Migrate the data (see the sketch after this list)
   a. Plan the data migration
   b. Execute the data migration
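The data migration itself relied on custom bulk-copy scripts, which are not reproduced here. The following is only a sketch of the general batched bulk-copy pattern, under the assumption of Python client libraries (cx_Oracle for the legacy source, pyodbc with fast_executemany for the SQL Server target); the table names and connection details are hypothetical, and the project's actual scripts and tooling were different.

```python
# Bulk-copy sketch (illustrative only; not the project's actual migration scripts).
# Streams rows from a legacy Oracle table into a fresh, empty SQL Server table in batches.
import cx_Oracle   # assumed Oracle client library
import pyodbc      # assumed SQL Server client library

BATCH_SIZE = 10_000

def copy_table(table, columns):
    col_list = ", ".join(columns)
    placeholders = ", ".join("?" for _ in columns)

    src = cx_Oracle.connect("legacy_user/secret@legacy-db/ORCL")      # hypothetical DSN
    dst = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                         "SERVER=newdb;DATABASE=AppDB;Trusted_Connection=yes;")
    src_cur, dst_cur = src.cursor(), dst.cursor()
    dst_cur.fast_executemany = True  # send each batch as a bulk parameter array

    src_cur.execute(f"SELECT {col_list} FROM {table}")
    while True:
        rows = src_cur.fetchmany(BATCH_SIZE)
        if not rows:
            break
        dst_cur.executemany(
            f"INSERT INTO {table} ({col_list}) VALUES ({placeholders})",
            [tuple(r) for r in rows],
        )
        dst.commit()  # commit per batch to keep the transaction log manageable

    src.close()
    dst.close()

if __name__ == "__main__":
    copy_table("DOCUMENTS", ["DOC_ID", "TITLE", "CREATED_DATE"])  # hypothetical table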

[Figure 3.2: actual web services or core synthetic transactions drive application functionality per the projected load profile; scripts exercise the application without the UI, driving web services within one or more functional areas as required, against a multi-server configuration (primary and failover nodes plus a storage array). Database configuration, response, availability, and performance can be proved on each functional thread, with resource tracking provided by built-in tools.]

Figure 3.2 Database proving environment.

As mentioned earlier, a test harness comprising scripts that drive web service calls produced a 1X (one times the base expected load) as well as 2X, 3X, 4X, and 5X loads. Eventually, even greater loads were extrapolated from the actual stress testing results (see Figure 7). In theory, the level of load within testing can then be dialed up or down as appropriate or as events warrant. Overall system performance was observed using pre-deployed monitoring tools, and any required adjustments to the application or the database were made during the test phase and then revalidated.

In order to create the environment and execute the testing, some prerequisites were required, including:

1) Target system work profile.

2) Test scripts.
3) Data conversion.
4) Environment availability schedule.
5) Developers' time for testing.
6) Readiness of the functional web services.

The specific database server specifications consisted of the following: Windows Server 2008 R2 Enterprise 64-bit operating system with Intel(R) Xeon(R) CPU E7440 @ 2.4 GHz (4 processors); installed memory of 16.0 GB, with 8.0 GB allocated to the SQL Server instance, 4.0 GB reserved for the OS, and the remaining 4.0 GB allocated to another SQL Server instance which hosts the performance data collection warehouse database. The clustered SQL Server uses Microsoft SQL Server 2008 (SP1) Enterprise Edition, version 10.0.2531.0 (X64). The security mode is set to Windows authentication.

The tools involved included:

1. Windows System Monitor (for Windows Server 2003).
2. Performance Monitor (for Windows Vista and Windows Server 2008).
3. SQL Server Activity Monitor: to monitor important parameters such as processor time, waiting tasks, database I/O, and recent expensive queries.
4. SQL Profiler.
5. Database Engine Tuning Advisor.
6. SQL Trace.
7. DMVs (Dynamic Management Views); a sample query sketch appears below.
8. IIS logs.
9. LoadRunner.

SQL Server 2008 Performance Data Collection provides features that the above tools lack. It provides a repository for historical and baseline performance analysis and comparison, and a single performance data repository to monitor all instances on a server. The system can stay always on and its overhead is low. We used this tool, which is new to SQL Server 2008, to automatically collect important performance data during the stress tests. Performance Data Collection has three components: the management data warehouse, data collection sets, and standard reports for the system data collection sets. The performance data are collected and stored in a cache directory and uploaded to the warehouse database every 15 minutes. The standard reports provide server activity history, query statistics history, and a disk usage summary; reports can be customized as needed. A sample of the report options is provided in Figure 4.

As mentioned, stress testing was also among our objectives. Once the environment was established, the Operational Profile based functional allocations were assigned, and the test harness was created, essentially any load level could be produced and tested. In our work we decided to use 2X, 3X, 4X, and 5X loads to discover the elbow in the response time curve. Such testing tells us how much stress the new system can handle and where its limits are. If the application fails at these higher levels the system may still be acceptable; it is only an issue if projected load growth rates exceed the system's performance ceiling.
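The DMVs listed above (item 7) are one of the simplest ways to pinpoint tuning opportunities, similar to the "recent expensive query" view in Activity Monitor. As an illustration only, a query of the following general shape against sys.dm_exec_query_stats surfaces the most expensive cached statements; the connection string and the Python wrapper are assumptions for this sketch, not the project's actual scripts.

```python
# Illustrative sketch: list the most expensive cached statements via DMVs.
# Assumes pyodbc and VIEW SERVER STATE permission; not the project's actual tooling.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};SERVER=dbserver;"
            "DATABASE=master;Trusted_Connection=yes;")  # hypothetical

TOP_QUERIES = """
SELECT TOP 10
    qs.total_worker_time / qs.execution_count  AS avg_cpu_us,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
    qs.execution_count,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
              (CASE WHEN qs.statement_end_offset = -1
                    THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset END
               - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC
"""

with pyodbc.connect(CONN_STR) as conn:
    for avg_cpu, avg_elapsed, count, text in conn.cursor().execute(TOP_QUERIES):
        print(f"{avg_cpu:>10} us CPU  {avg_elapsed:>10} us elapsed  x{count}  {text[:80]}")
```

Statements surfaced this way can then be fed to the Database Engine Tuning Advisor or reviewed by hand during the SQL tuning activity.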

Figure 4 Sample SQL Server workbench monitoring tools

Results

The results of the testing were excellent. The application scaled very well, and the database in general never utilized any significant resources during regular load tests. Even under 5X load the database remained extremely constant in resource usage. Only under linear extrapolation assumptions does system memory become a potential bottleneck, at roughly 20X the normally expected load. Please refer to Figures 5, 6, and 7 for details of these test results. Of special note is the Microsoft-preferred metric for database processing, Batch Requests per Second. This value should be under 1,000 for a system of the class we are operating. As Figure 6 indicates, even at one quarter of this threshold, system memory and CPU are barely utilized. This indicates a more than adequate resource level on the device and significant future headroom for the application.

Figure 5 System and SQL Server CPU consumption during consecutive load tests.

Figure 6 Batch Requests per Second vs. memory utilization levels during load testing.

Figure 7 Several system parameters plotted against projected load levels. In this chart Batch Requests per Second approaches 1,000 at 4X load. The limiting factor appears to be memory, at somewhere over 2X the maximum stress level from the earlier tests. Thus, while the database shows adequate performance at 5X typical load, at a 20X load it is projected that system memory would be the first bottleneck encountered. Note that this is projected, not demonstrated.
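The 20X projection above is a straight-line extrapolation of the resource measurements taken at 1X through 5X load. The following sketch shows the arithmetic with purely hypothetical sample values; the actual counter data behind Figures 5 through 7 is not reproduced here.

```python
# Illustrative linear extrapolation (hypothetical sample values, not the measured data).
# Fit memory utilization vs. load multiple and solve for the load at which memory saturates.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x  # slope, intercept

# Hypothetical memory utilization (%) observed at 1X..5X load during stress runs.
loads = [1, 2, 3, 4, 5]
memory_pct = [22, 26, 30, 34, 38]

slope, intercept = fit_line(loads, memory_pct)
saturation_load = (100 - intercept) / slope  # load multiple where memory would reach 100%
print(f"Projected memory saturation at roughly {saturation_load:.0f}X load")
```

The same fit-and-solve step can be applied to any of the counters plotted in Figure 7 to estimate which resource would be exhausted first.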

Conclusions

The results of the planning, analysis, system design, usage profiling, and testing of this platform allowed for the successful launch of this application in late 2010 as scheduled. Since go-live there have been essentially no issues with this database platform. Daily user levels remain within the predicted range, and resource utilization on the database, which is watched daily, remains well within predicted tolerances. There have been no anomalies reported. The only technical impact we encountered was due to a mistaken configuration setting made by a vendor staff member, which caused a brief outage.

The approach taken here can be utilized by any team looking to deploy a new database (or application). Several key components of this approach are easily reusable, including the use of an Operational Profile to guide testing and to determine future estimated load factors and the functional distribution in testing. The step-wise approach to configuring, porting, testing, tuning, and finalizing the platform can also be followed. Finally, the test harness approach utilizing web services is without question extensible to other environments.

Acknowledgements

This paper reflects work that was triggered by the redesign of a core application; we thank our business partners and technical leadership for giving us an opportunity to apply best-of-breed technology and methods to a very interesting set of challenges. Specifically, we would like to thank our CTO, Venkatesh Ramakrishnan, for guidance on the overall goals and ideas on how to achieve them. We also owe a debt to the many developers and testers from Wolters Kluwer and Tata Consultancy Services who worked tirelessly on this project.

About Wolters Kluwer Corporate Legal Services (CLS)

As the originator of the corporate legal services industry, Wolters Kluwer CLS has been setting the standard for unrivaled service to the legal community for nearly 120 years. Combining unmatched industry expertise, a full spectrum of products and services, and a personalized customer experience every time, CLS is the legal professional's clear choice to manage, measure, and define the future of legal compliance services and performance management. For more information, go to www.ctlegalsolutions.com.

References

[1] Musa, J. D., "Operational Profiles in Software-Reliability Engineering", IEEE Software, March 1993.
[2] Holtzblatt, K., "Contextual Design: Experience in Real Life", Mensch & Computer 2001.
[3] Microsoft Silverlight Home Page, http://www.silverlight.net/, viewed 4/2/2011.
[4] Erl, Thomas, Service-Oriented Architecture: A Field Guide to Integrating XML and Web Services, Prentice Hall, April 26, 2004.
[5] Cusick, J., & Welch, T., "Developing and Applying a Distributed Systems Performance Approach for a Web Platform", 30th International Conference for the Resource Management and Performance Evaluation of Computing Systems, Las Vegas, NV, December 10, 2004.