Technical White Paper




Performance and Scalability of 7.6.04 SP1 BMC Remedy IT Service Management Suite, BMC Service Request Management, BMC Knowledge Management, and BMC Atrium on Windows

Benchmarking conducted at the Dell Solution Center, Austin, Texas

March 2012

Contacting BMC Software

You can access the BMC Software website to obtain information about the company, its products, corporate offices, special events, and career opportunities.

United States and Canada: Address: BMC SOFTWARE INC, 2101 CITYWEST BLVD, HOUSTON TX 77042-2827, USA. Telephone: 713 918 8800 or 800 841 2031. Fax: 713 918 8000.

Outside United States and Canada: Telephone: (01) 713 918 8800. Fax: (01) 713 918 8000.

If you have comments or suggestions about this documentation, contact Information Design and Development by email.

Copyright 2012 BMC Software, Inc. BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S. Patent and Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service marks, and logos may be registered or pending registration in the U.S. or in other countries. All other trademarks or registered trademarks are the property of their respective owners. Linux is the registered trademark of Linus Torvalds. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

The information included in this documentation is the proprietary and confidential information of BMC Software, Inc., its affiliates, or licensors. Your use of this information is subject to the terms and conditions of the applicable End User License Agreement for the product and to the proprietary and restricted rights notices included in the product documentation.

Restricted Rights Legend: U.S. Government Restricted Rights to Computer Software. UNPUBLISHED -- RIGHTS RESERVED UNDER THE COPYRIGHT LAWS OF THE UNITED STATES. Use, duplication, or disclosure of any data and computer software by the U.S. Government is subject to restrictions, as applicable, set forth in FAR Section 52.227-14, DFARS 252.227-7013, DFARS 252.227-7014, DFARS 252.227-7015, and DFARS 252.227-7025, as amended from time to time. Contractor/Manufacturer is BMC Software, Inc., 2101 CityWest Blvd., Houston, TX 77042-2827, USA. Any contract notices should be sent to this address.

Customer Support

You can obtain technical support by using the Support page on the BMC Software website or by contacting Customer Support by telephone or email. To expedite your inquiry, see Before contacting BMC Software.

Support website

You can obtain technical support from BMC Software 24 hours a day, 7 days a week. From the support website, you can:

- Read overviews about support services and programs that BMC Software offers.
- Find the most current information about BMC Software products.
- Search a database for problems similar to yours and possible solutions.
- Order or download product documentation.
- Report a problem or ask a question.
- Subscribe to receive email notices when new product versions are released.
- Find worldwide BMC Software support center locations and contact information, including email addresses, fax numbers, and telephone numbers.

Support by telephone or email

In the United States and Canada, if you need technical support and do not have access to the web, call 800 537 1813 or send an email message to Customer Support. Outside the United States and Canada, contact your local support center for assistance.

Before contacting BMC Software

Have the following information available so that Customer Support can begin working on your issue immediately:

- Product information: product name; product version (release number); license number and password (trial or permanent)
- Operating system and environment information: machine type; operating system type, version, and service pack; system hardware configuration; serial numbers; related software (database, application, and communication), including type, version, and service pack or maintenance level
- Sequence of events leading to the problem
- Commands and options that you used
- Messages received (and the time and date that you received them): product error messages; messages from the operating system, such as file system full; messages from related software

License key and password information

If you have a question about your license key or password, contact Customer Support through one of the following methods:

- Email Customer Support.
- In the United States and Canada, call 800 537 1813.
- Outside the United States and Canada, contact your local support center for assistance.
- Submit a new issue at the support website.

Contents

- Executive summary
- Methodology and result summary
- Environment
- Test scenarios
- Nominal application workload
- Online tests and results
- Online stand-alone tests and results
- BMC Atrium CMDB batch processing tests and results
- Mixed workload test and result
- Performance tuning and recommendations
  - Mid tier settings
  - F5 load balancer settings
  - BMC Remedy AR System server settings
  - DB server settings
  - NE batch job settings
  - RE batch job settings
  - BMC Atrium Discovery and Dependency Mapping settings
  - Hardware and network
- Appendix A: BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management foundation data and application data
  - Data setup
  - Data loading usage
- Appendix B: BMC Remedy ITSM Suite user scenarios and associated actions
  - BMC Service Request Management test scripts - user scenarios
- Appendix C: Product version numbers
- Appendix D: Product details used at Dell Solution Center

Executive summary

As businesses continue to grow their use of BMC Remedy IT Service Management applications, it becomes increasingly important to provide information about the expected performance of these applications. BMC Software and Dell conducted a series of enterprise-scale tests to demonstrate the scalability and performance of the BMC Remedy IT Service Management Suite (BMC Remedy ITSM Suite), BMC Service Request Management, BMC Knowledge Management, and BMC Atrium Configuration Management Database (BMC Atrium CMDB) applications on Dell PowerEdge servers. These tests were conducted at the Dell Solution Center in Austin, Texas, in December 2011.

This white paper provides the following information to assist customers in achieving peak performance and scalability:

- Quantitative test results, as a baseline
- Guidelines for hardware sizing
- System configuration recommendations

The testing involved executing online workloads (BMC Remedy ITSM Suite, BMC Service Request Management, BMC Knowledge Management) and batch processes (BMC Atrium CMDB) individually and together. Realistic user scenarios and workloads derived from customer cases were used for the tests. The results demonstrate that by using Dell PowerEdge systems and BMC applications on Windows, organizations can gain a standards-based solution that lowers total cost of ownership (TCO) through better performance and lower energy costs, helping to deliver more value throughout the company.

Methodology and result summary

A series of tests was conducted for both online and batch workloads to characterize the performance and scalability of the applications in the BMC Remedy IT Service Management Suite. Micro Focus's Silk Performer was used as the load driver to simulate concurrent users. It submits transactions for each concurrent user based on industry-standard transaction rates for each business process. A three thousand concurrent user test was performed with the transaction rate doubled and tripled to demonstrate the scalability of the application. A multi-threaded Java application developed internally by BMC generated the BMC Atrium CMDB on-boarding load.

A concurrent user is a user who executes a specified number of transactions per hour. For the specified number of transactions per hour for a concurrent user in each business use case, see Table 3 and Table 4. In BMC Remedy ITSM Suite, concurrent users log on once and complete a specified number of transactions for a business case before logging off; these users remain logged on for the duration of the simulation. BMC Service Request Management's concurrent users log on, complete one business transaction, and then log off; they are therefore not logged on to the system for the entire duration of the simulation. The concurrent user test simulates real-world user behavior for users of both BMC Remedy ITSM Suite and BMC Service Request Management.

HttpWatch measured browser response times from the client PC. Measurements were taken after the expected user load was attained through Silk Performer and the environment had reached a steady state.

The test results show the following points:

- In the three thousand concurrent user mixed (online and batch) workload, 93 percent of end-user response times were under 5 seconds.

- An 8-core, 16-GB single BMC Remedy Mid Tier server can handle up to 3000 concurrent users, using much less CPU than the BMC Remedy AR System server uses for the same workload.
- An 8-core, 16-GB single BMC Remedy AR System server can handle up to 3000 concurrent users. Part of the scalability test simulated a transaction load of almost 9,000 users on three BMC Remedy AR System servers.
- The BMC Knowledge Management application is CPU intensive. Part of the CPU is consumed indexing incoming data, and BMC Knowledge Management searches also consume CPU depending on the volume of transaction data, knowledge articles, and external documents. When BMC Knowledge Management is added to the online user load, BMC Remedy AR System server CPU utilization almost doubles.
- Consider having a separate AR integration server onto which to offload the BMC Atrium CMDB workload; doing so minimizes the impact on online users.
- Onboarding, normalization, and reconciliation throughput remain roughly unchanged between two and five million CIs and relationships. Reconciliation identification throughput also remains the same, while reconciliation merge throughput decreases linearly for five million CIs and relationships compared with the two million test run.
- All tests show consistent memory consumption: the mid tier process consumes about 2.5 GB, the BMC Remedy AR System server process consumes 2.5 to 3 GB, and the database process consumes about 6 GB.

For configuration settings, see Appendix C: Product version numbers.

Environment

This section describes the benchmark environment and architecture. A dedicated integration server was used in this benchmark to gain optimum performance for the BMC Atrium applications, including BMC Atrium Discovery and Dependency Mapping 8.3 and the BMC Atrium CMDB normalization and reconciliation jobs. Table 1 describes the benchmark environment at Dell Solution Center, and Figure 1 illustrates the benchmark architecture.

Table 1.
Benchmark environment at Dell Solution Center

- BMC Remedy Mid Tier: 2 x Dell PowerEdge M610, each with (2) Intel Xeon X5650 Quad Core @ 2.67 GHz (a total of 8 core CPU) and 8 GB RAM. Notes: Windows Server 2008 R2 Enterprise 64-bit, Java 1.6.0_25 64-bit, Tomcat 6.
- BMC Remedy Action Request (AR) System server: 3 x Dell PowerEdge M610, each with (2) Intel Xeon X5650 Quad Core @ 2.67 GHz (a total of 8 core CPU) and 16 GB RAM. Notes: Windows Server 2008 R2 Enterprise 64-bit; servers configured in a BMC Remedy AR System server group, with one AR System server used as an integration server.
- BMC Atrium Discovery and Dependency Mapping server: 1 virtual machine with 4 vCPU (Intel Xeon X5650 @ 2.67 GHz) and 8 GB RAM. Notes: Linux 64-bit.
- Database server: 1 x Dell PowerEdge M910 with (4) Intel Xeon L7555 Quad Core @ 1.86 GHz (a total of 32 core CPU) and 128 GB RAM. Notes: Dell EqualLogic PS6510E model 70-0300 SATA; 2 storage arrays attached using RAID 10.
- Load balancer: 1 x F5 BIG-IP 3600 Local Traffic Manager (LTM) v10.3. Notes: Balances the AR System servers and mid-tier servers.
- Windows client computers (Silk Performer agents): 2 virtual machines, each with 2 vCPU (Intel Xeon X5650 @ 2.67 GHz) and 12 GB RAM.
- SilkPerformer 2010 R2 Controller: 1 virtual machine with 2 vCPU (Intel Xeon X5650 @ 2.6 GHz) and 2 GB RAM.

Figure 1: Benchmark architecture (7.6.04 SP1 benchmark on Dell, Windows/SQL Server 2008). The figure shows the client tier with the Silk Performer driver and two Silk Performer agents (Windows Server 2008 R2, 2 cores, 12 GB RAM, 64 GB disk); an F5 load balancer fronting two BMC Remedy Mid Tier servers over HTTPS port 443 (Windows Server 2008 R2, 8 cores, 8 GB RAM, 200 GB disk); the application server tier with two BMC Remedy AR System servers in a server group (Windows Server 2008 R2, 8 cores, 16 GB RAM, 255 GB disk) reached via RPC; the integration tier with one BMC Remedy AR System server (Windows Server 2008 R2, 8 cores, 16 GB RAM, 64 GB disk) and the BMC Atrium Discovery and Dependency Mapping appliance (ESX virtual machine, 4 cores, 8 GB RAM, 500 GB disk); and the database tier (Windows Server 2008 R2, 32 cores, 128 GB RAM, SQL Server 2008 64-bit) attached over SAN to 19 TB of external Dell EqualLogic storage, 2 storage arrays in RAID 10.

All three BMC Remedy AR System servers were configured in a server group behind the F5 load balancer. The load balancing scheme for the AR System server group is round-robin without TCP stickiness. The two BMC Remedy Mid Tier instances were configured behind the F5 web load balancer; the load balancing scheme for these two web application instances is round-robin with HTTP session affinity via custom cookie insert. The BMC Remedy AR System instance installed on the integration server had all applications installed (BMC Remedy IT Service Management Suite, BMC Service Request Management, and BMC Knowledge Management) in addition to the core functionality of BMC Atrium CMDB.

Test scenarios

The following test scenarios were used for characterizing performance and scalability on Windows and Microsoft SQL Server environments:

- Online (combined): BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management with a single mid-tier server and a varying number of BMC Remedy AR System servers in a load-balanced environment
- Online stand-alone (single application): stand-alone BMC Remedy IT Service Management; stand-alone BMC Service Request Management; and BMC Remedy IT Service Management with BMC Knowledge Management. In all three tests, a single mid-tier server and a single BMC Remedy AR System server were used.
- BMC Atrium CMDB batch processing (excluding online): stand-alone BMC Atrium CMDB, and BMC Atrium Discovery and Dependency Mapping. An AR integration server was used.
- Mixed (online and batch): combined testing of online and batch workloads, including BMC Remedy IT Service Management, BMC Service Request Management, BMC Knowledge Management, and BMC Atrium CMDB continuous-mode jobs, using two mid-tier servers, two BMC Remedy AR System servers, and one BMC Remedy AR integration server in a load-balanced environment

The activities of the BMC Remedy IT Service Management, BMC Service Request Management, BMC Knowledge Management, and BMC Atrium CMDB test scenarios are described in the following sections. Appendix A provides detailed information about how the test environment was prepopulated. The benchmarking response times do not include the more variable client-side components of response time that a typical end user observes. To quantify end-to-end user response times, client browser timings for the three thousand user mixed-load test are presented.
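The two load-balancing schemes used in this environment (round-robin without stickiness for the AR System server group, and round-robin with cookie-insert session affinity for the mid-tier instances) can be illustrated with a small model. This is only a sketch of the routing behavior, not F5 BIG-IP configuration; the class names, server names, and cookie scheme are invented for illustration:

```python
from itertools import cycle

class RoundRobin:
    # Plain round-robin without stickiness, as used for the AR System
    # server group: each request simply goes to the next server in turn.
    def __init__(self, servers):
        self.servers = list(servers)
        self._next = cycle(self.servers)

    def route(self, cookie=None):
        return next(self._next), None  # no affinity cookie is issued

class CookieAffinityRoundRobin(RoundRobin):
    # Round-robin with HTTP session affinity via cookie insert, as used
    # for the two mid-tier instances: the first request of a session is
    # balanced round-robin, and the chosen instance is recorded in a
    # cookie so that later requests return to the same instance.
    def route(self, cookie=None):
        if cookie in self.servers:
            return cookie, cookie      # sticky: honor the inserted cookie
        server = next(self._next)
        return server, server          # insert a cookie naming the instance

plain = RoundRobin(["ar1", "ar2", "ar3"])
print([plain.route()[0] for _ in range(4)])  # -> ['ar1', 'ar2', 'ar3', 'ar1']

sticky = CookieAffinityRoundRobin(["mt1", "mt2"])
server, cookie = sticky.route()              # first request lands on mt1
print([sticky.route(cookie)[0] for _ in range(3)])  # -> ['mt1', 'mt1', 'mt1']
```

Session affinity presumably matters for the mid tier because it holds per-user HTTP session state, whereas the AR System servers in a server group can be balanced request by request without stickiness.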

Table 2 summarizes the load tests that were conducted:

Table 2. Test case load summarized

- 2000 users: Online; Online stand-alone
- 2500 users: Online
- 3000 users: Online; Mixed
- 3000 users (2 times the transaction workload): Online
- 3000 users (3 times the transaction workload): Online
- 100K CIs + relationships (new/updated, normalized, reconciled): BMC Atrium CMDB batch
- 250K CIs + relationships (new): BMC Atrium CMDB batch
- 2M CIs + relationships (new, normalized, reconciled): BMC Atrium CMDB batch
- 5M CIs + relationships (new, normalized, reconciled): BMC Atrium CMDB batch

Nominal application workload

Because testing all possible workload combinations is not practical, BMC defined a nominal workload to represent a typical workload of large customers of BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management. The nominal workload can be used as a baseline for benchmarking the performance and scalability of BMC Remedy solutions consistently over time. A nominal workload is defined by the distribution of concurrent users and transaction rates among the test scenarios being considered. The workload type was the Silk Performer queuing model. Load tests were executed for over an hour to simulate a real-life customer environment. The workload was split among BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management at 40 percent, 50 percent, and 10 percent of the total, respectively, to simulate customer environments as shown in Table 3. This nominal workload was used in the online and mixed application tests. The online stand-alone tests were special cases in which the percentage of users depended on the applications used.
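The pacing arithmetic behind this queuing model can be sketched as follows (an assumed model with illustrative helper names, not Silk Performer API): each virtual user is given a fixed time slot per transaction, and any time left after the server responds is spent as think time, so the per-user rate stays at the nominal value regardless of server speed:

```python
# Sketch of the concurrent-user pacing model: each virtual user is paced
# so that it completes a fixed number of transactions per hour.

def pacing_interval_s(transactions_per_hour):
    # Time budget for one transaction, in seconds.
    return 3600.0 / transactions_per_hour

def think_time_s(transactions_per_hour, response_time_s):
    # Idle time a virtual user waits after a response so that its
    # overall rate matches the nominal transaction rate.
    return max(0.0, pacing_interval_s(transactions_per_hour) - response_time_s)

# "Modify Incident to Resolve" runs at 6 transactions per hour (Table 3):
print(pacing_interval_s(6))   # -> 600.0 (one transaction every 10 minutes)
print(think_time_s(6, 2.5))   # -> 597.5 (almost all of the slot is idle)

# Doubling or tripling the pacing, as in the scalability tests, halves or
# thirds the interval, so 3000 users at 3x pacing generate the transaction
# volume of about 9000 users at the nominal rate.
print(3000 * 3)               # -> 9000 equivalent users
```

This is also why the white paper can report multi-server results as "equivalent workload" in users: the transaction volume, not the session count, is what was multiplied.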

Table 3. Nominal workload for the BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management solution test

BMC Remedy ITSM Suite and BMC Knowledge Management test scenario (percentage of virtual users / transaction rate per hour per user):

- Search Incident By ID: 1% / 10
- Search Incident By Customer: 1% / 10
- Create Incidents with Service CI Related with New Request After Submit: 4% / 4
- Create Incidents with Service CI Related with Modify Request After Submit: 4% / 4
- Modify Incident to Resolve: 7% / 6
- Create Change with Service CI and Task: 1% / 2
- Search Change by ID: 1% / 3
- Change Approval: 1% / 1
- Knowledge Base Search and View Small Documents: 2% / 25
- Knowledge Base Search and View Large Documents: 2% / 25
- Knowledge Base Search and View Articles: 2% / 25
- Knowledge Base Search and View Incidents: 2% / 25
- Knowledge Base Search and View Problems: 2% / 25
- Create Ad Hoc Web Report: 6% / 2
- Run Incident Count By Product Categorization Web Report: 7% / 1
- Run Asset Print Web Report: 7% / 1

Table 4 summarizes the nominal workload for the BMC Service Request Management solution test.

Table 4. Nominal workload for BMC Service Request Management

BMC Service Request Management test scenario (percentage of virtual users / transaction rate per hour per user):

- Add Activity Log: 2% / 6
- View services in category: 6% / 7
- Browse sub-category: 4% / 7
- Create service request with six questions mapped to two fields: 7% / 6
- Create service request with six questions and no mapping: 7% / 6
- View popular services: 11% / 7
- Search by keyword: 11% / 6

- View service request: 2% / 6

Table 5 summarizes the projected data for BMC Remedy ITSM Suite.

Table 5. Projected data after 1 hour of simulation of the nominal workload for 3000 users

- Incidents created: 960
- Incidents modified: 1260
- Changes created: 60
- Service requests created: 2520
- Web reports created: 360

Online tests and results

This section describes the outcome of the online tests.

Scalability

These online test cases show the scalability of the products with respect to the hardware. Initially, a single mid-tier and a single BMC Remedy AR System server were used to find the maximum workload the systems could handle with a varying number of users. Then, additional BMC Remedy AR System servers were added to the server group and load balanced to help execute larger workloads. To simulate a high-load environment, the Silk Performer scenario scripts were configured with a higher transaction pacing (three times, in this case) than the nominal workload while keeping the number of users constant. Tests to assess the scalability of BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management consist of the parameters listed in Table 6 and Table 7.

Table 6. Scalability tests using a single mid-tier and a single BMC Remedy AR System server for the BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management workload

- 2000 users: Nominal transaction workload
- 2500 users: Nominal transaction workload
- 3000 users: Nominal transaction workload
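The projected volumes in Table 5 follow arithmetically from the user percentages and hourly transaction rates in Table 3 and Table 4. A quick cross-check (the scenario shares and rates are copied from those tables; `volume` is an illustrative helper, not anything from the test harness):

```python
# Cross-check of Table 5: one-hour projected volumes for 3000 users,
# derived from Table 3 and Table 4 (users x percentage x rate/hour).
USERS = 3000

def volume(pct_users, rate_per_hour):
    # Expected entries produced in one hour by this scenario's users.
    return round(USERS * pct_users * rate_per_hour)

incidents_created = volume(0.04, 4) + volume(0.04, 4)  # two create-incident scenarios
incidents_modified = volume(0.07, 6)                   # Modify Incident to Resolve
changes_created = volume(0.01, 2)                      # Create Change with Service CI and Task
service_requests = volume(0.07, 6) + volume(0.07, 6)   # two create-service-request scenarios
web_reports = volume(0.06, 2)                          # Create Ad Hoc Web Report

print(incidents_created, incidents_modified, changes_created,
      service_requests, web_reports)   # -> 960 1260 60 2520 360, matching Table 5
```

Note that the percentages in Table 4 are evidently taken against the full 3000-user population, since 14 percent of 3000 users at 6 requests per hour reproduces the 2520 projected service requests.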

Table 7. Scalability tests using a single mid-tier and two or three BMC Remedy AR System servers for the BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management workload

- 3000 users, Nominal transaction workload: equivalent workload of 3000 users
- 3000 users, 2 x Nominal: equivalent workload of 6000 users
- 3000 users, 3 x Nominal: equivalent workload of 9000 users

Online test results

The average response time for each user scenario represents the response times averaged, over a one-hour test period, across all the actions listed in Appendix B for that scenario. In all tests, most response times were less than three seconds, even at triple the nominal workload transaction rate. The Run Incident Count By Product Categorization Web Report response was higher than the others because of the incident volume of over 500,000 entries. Resource utilization for all mid-tier, AR System, and database servers stayed within the threshold limits. To quantify end-to-end user response times, client browser timings for the 3000 user mixed-load test are presented in a later section.

Charts 1-3 show how BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management respond to 2000, 2500, and 3000 concurrent users under the nominal workload using a single mid-tier and a single BMC Remedy AR System server. The charts present the data organized by application.

Chart 1. Response time comparison for nominal workload with varying users for BMC IT Service Management use cases in a single mid-tier and single BMC Remedy AR System server setup

Chart 2. Response time comparison for nominal workload with varying users for BMC Service Request Management use cases in a single mid-tier and single BMC Remedy AR System server setup

Chart 3. Response time comparison for nominal workload with varying users for BMC Knowledge Management use cases in a single mid-tier and single BMC Remedy AR System server setup

Charts 4-6 show how BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management respond to 3000 concurrent users under the nominal workload, and to double (2 x 3000) and triple (3 x 3000) the nominal workload, using a single mid-tier and multiple BMC Remedy AR System servers in a server group in a load-balanced environment. For the 3000 and 2 x 3000 concurrent-user nominal workloads, two BMC Remedy AR System servers were used. For the 3 x 3000 concurrent-user nominal workload, three BMC Remedy AR System servers were used. The charts present the data organized by application.

Chart 4. Response time comparison for workload with varying users for BMC IT Service Management use cases in a single mid-tier server and multiple BMC Remedy AR System servers setup

Chart 5. Response time comparison for workload with varying users for BMC Service Request Management use cases in a single mid-tier server and multiple BMC Remedy AR System servers setup

Chart 6. Response time comparison for workload with varying users for BMC Knowledge Management use cases in a single mid-tier server and multiple BMC Remedy AR System servers setup

Table 8 and Table 9 summarize the transaction data created in the system during the one-hour online scalability test simulations under varying workloads.

Table 8. Transaction data created for varying workload runs in a single mid-tier and single BMC Remedy AR System server environment (2000 / 2500 / 3000 user workload)

- Incidents created: 622 / 765 / 889
- Emails sent: 10,862 / 15,894 / 21,063
- Changes created: 21 / 18 / 30
- Service requests created: 1,639 / 2,046 / 2,445
- Web reports created: 224 / 296 / 302

Table 9. Transaction data created for all runs in a single mid-tier server and multiple BMC Remedy AR System servers environment (3000 / 6000 / 9000 user workload)

- Incidents created: 885 / 1,820 / 2,802
- Emails sent: 19,485 / 27,743 / 30,263
- Changes created: 27 / 63 / 89

Test scenarios Entry type 3000 user workload 6,000 user workload 9,000 user workload Service requests created 2,449 4,926 7,447 Web reports created 299 589 933 System resource utilization for online tests This section describes the average system resource utilization for scalability test runs supporting the high scalability of Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management with varying workloads. Chart 7 and Chart 8 compare the scalability test runs for nominal workloads using a single mid-tier and single BMC Remedy AR System server. Chart 7. CPU utilization comparison for mid-tier server, BMC Remedy AR System server, and DB server tiers for all nominal runs in a single mid-tier and single BMC Remedy AR System server setup
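The counts in Table 8 suggest that load scales close to linearly with the user count: the per-user hourly rates stay roughly constant from 2000 to 3000 users. A quick back-of-the-envelope check of that arithmetic, with the incident and service-request counts taken from Table 8:

```python
# Per-user hourly transaction rates implied by Table 8 (single mid-tier,
# single BMC Remedy AR System server). Counts come from the one-hour runs.
users = [2000, 2500, 3000]
incidents = [622, 765, 889]
service_requests = [1639, 2046, 2445]

incident_rates = [n_inc / n for n, n_inc in zip(users, incidents)]
request_rates = [n_req / n for n, n_req in zip(users, service_requests)]

for n, inc_rate, req_rate in zip(users, incident_rates, request_rates):
    print(f"{n} users: {inc_rate:.3f} incidents and "
          f"{req_rate:.3f} service requests per user per hour")
```

The incident rate varies only between about 0.30 and 0.31 per user per hour across the three runs, which is what a linearly scaling system should show.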

Chart 8. Memory utilization comparison for mid-tier server, BMC Remedy AR System server, and DB server tiers for all nominal runs in a single mid-tier and single BMC Remedy AR System server setup

Chart 9 and Chart 10 compare the scalability test runs for the nominal workload, and for double and triple the nominal workload, using a single mid tier and multiple BMC Remedy AR System servers in a server-group and load-balanced environment. The 3x3000 nominal-workload test was the only test that required a third BMC Remedy AR System server.

Chart 9. CPU utilization comparison for mid-tier server, BMC Remedy AR System server, and DB server tiers for the nominal workload and double and triple the nominal workload in a single mid-tier and multiple BMC Remedy AR System servers environment

Chart 10. Memory utilization comparison for mid-tier server, BMC Remedy AR System server, and DB server tiers for the nominal workload and double and triple the nominal workload in a single mid-tier server and multiple BMC Remedy AR System servers setup

Online stand-alone tests and results

These application stand-alone tests reveal the impact of each application on the system. All BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management stand-alone load tests were conducted with a fixed 2000-concurrent-user, fixed-transaction-pacing workload, using a single mid tier and a single BMC Remedy AR System server. The only variations were the combination of applications run together or stand-alone and the user distribution for each scenario.

BMC Remedy IT Service Management stand-alone workload

Table 10 describes the BMC Remedy IT Service Management workload in this stand-alone test scenario.

Table 10. Workload for stand-alone BMC Remedy IT Service Management

BMC Remedy IT Service Management test scenario | Percentage of virtual users | Transaction rate (per hour per user)
Search incident by ID | 7% | 10
Search incident by customer | 7% | 10
Create incident with CI no action | 12% | 4
Create incident with CI redisplay current | 12% | 4
Modify incident to resolve | 15% | 6
Create change with service CI and task | 6% | 2
Search change by ID | 7% | 3
Change approval | 4% | 1
Create ad hoc Web report | 10% | 2
Run incident count by product categorization Web report | 10% | 1
Run Asset Print Web report | 10% | 1

BMC Service Request Management stand-alone workload

Table 11 describes the BMC Service Request Management workload in this stand-alone test.

Table 11. Workload for stand-alone BMC Service Request Management

BMC Service Request Management test scenario | Percentage of virtual users | Transaction rate (per hour per user)
Add Activity Log | 5% | 6
View Services in Category | 14% | 7
Browse Sub Category | 10% | 7
Create Service Request w/ 6 questions mapped to 2 fields | 15% | 6
Create Service Request w/ 6 questions no mapping | 15% | 6
View Quick Picks | 18% | 7
Search by Keyword | 18% | 6
View Service Request | 5% | 6

BMC Remedy IT Service Management and BMC Knowledge Management stand-alone workload

Table 12 describes the combined BMC Remedy IT Service Management and BMC Knowledge Management workload that was run in this stand-alone test.

Table 12. Workload for stand-alone BMC Remedy IT Service Management and BMC Knowledge Management

BMC Remedy IT Service Management and BMC Knowledge Management test scenario | Percentage of virtual users | Transaction rate (per hour per user)
Search incident by ID | 4% | 10
Search incident by customer | 4% | 10
Create incident with CI no action | 10% | 4
Create incident with CI redisplay current | 10% | 4
Modify incident to resolve | 13% | 6
Create change with service CI and task | 4% | 2
Search change by ID | 4% | 3
Change approval | 2% | 1
Knowledge Base search and view small documents | 5% | 25
Knowledge Base search and view large documents | 5% | 25
Knowledge Base search and view articles | 5% | 25
Knowledge Base search and view incidents | 5% | 25
Knowledge Base search and view problems | 5% | 25
Create ad hoc Web report | 8% | 2
Run incident count by product categorization Web report | 8% | 1
Run Asset Print Web report | 8% | 1

Online stand-alone test results

Both the BMC Remedy IT Service Management and BMC Service Request Management stand-alone applications comfortably support 2000 concurrent users. Adding BMC Knowledge Management almost doubles the BMC Remedy AR System server CPU utilization. The individual application tests show all response times to be within an acceptable range.
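A workload table like Table 10 implies an aggregate transaction volume: each scenario contributes (percentage of virtual users) x (per-user hourly rate). The sketch below computes this for the stand-alone BMC Remedy IT Service Management workload; it is a back-of-the-envelope check, not part of the test harness:

```python
# Aggregate hourly transaction volume implied by Table 10: each scenario
# contributes (% of users) * (transactions per hour per user).
workload = [  # (scenario, % of virtual users, transactions/hour/user)
    ("Search incident by ID",                       7, 10),
    ("Search incident by customer",                 7, 10),
    ("Create incident with CI no action",          12,  4),
    ("Create incident with CI redisplay current",  12,  4),
    ("Modify incident to resolve",                 15,  6),
    ("Create change with service CI and task",      6,  2),
    ("Search change by ID",                         7,  3),
    ("Change approval",                             4,  1),
    ("Create ad hoc Web report",                   10,  2),
    ("Run incident count Web report",              10,  1),
    ("Run Asset Print Web report",                 10,  1),
]

assert sum(pct for _, pct, _ in workload) == 100  # distribution covers all users

users = 2000  # fixed concurrent-user count used in the stand-alone tests
per_user_rate = sum(pct / 100 * rate for _, pct, rate in workload)
print(f"{per_user_rate:.2f} transactions/hour/user")           # 4.03
print(f"{users * per_user_rate:.0f} transactions/hour total")  # 8060
```

The same calculation applies to Tables 11 and 12, whose user distributions also sum to 100 percent.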

Chart 11 and Chart 12 compare the CPU and memory utilization data from the stand-alone tests.

Chart 11. CPU utilization comparison for mid-tier server, AR System server, and DB server tiers for stand-alone tests

Chart 12. Memory utilization comparison for mid-tier server, AR System server, and DB server tiers for stand-alone tests

BMC Atrium CMDB batch processing tests and results

The BMC Atrium 7.6.04 SP1 scalability test consisted of BMC Atrium CMDB batch jobs that show the performance of creating, normalizing, and reconciling two million and five million CIs and relationships. It also showed the performance of BMC Atrium Discovery and Dependency Mapping (ADDM) in discovering and mapping approximately 250,000 CIs and relationships.

BMC Atrium CMDB batch jobs

The test cases conducted to assess the performance of the BMC Atrium CMDB batch functionality are as follows:

- Load CI (regular mode)
- Normalization (NE) batch mode
- Reconciliation (RE) batch mode

Load CI regular mode and result

Loading a CI in regular mode creates one instance at a time. A multi-threaded Java application developed internally by BMC was used to generate the load. An instance of this application was executed locally on the BMC Remedy AR System integration server.

Table 13 describes the data model that was created by this tool. The computer system is the root of the tree, and all other CIs connected to the computer system through a relationship are also shown.

Table 13. Class relationship distribution for creating CI batch tests

Class | Relationship | Number of CIs
BMC_ComputerSystem | BMC_Dependency | 1
BMC_Product | BMC_HostedSystemComponents | 77
BMC_Monitor | BMC_HostedSystemComponents | 1
BMC_IPEndpoint | BMC_HostedAccessPoint | 1
BMC_OperatingSystem | BMC_HostedSystemComponents | 1
BMC_Person | BMC_Dependency | 1
BMC_Printer | BMC_Dependency | 1
BMC_Processor | BMC_HostedSystemComponents | 1
BMC_DiskDrive | BMC_HostedSystemComponents | 4
BMC_Card | BMC_HostedSystemComponents | 1
BMC_BIOSElement | BMC_HostedSystemComponents | 1
BMC_NetworkPort | BMC_HostedSystemComponents | 1

A total of 91 class instances and 91 relationship instances were used per iteration. Each instance generator had 55 threads. A total of 4,977,500 instances were created for the five million CIs and relationships run, of which 27,500 were computer-system CIs. Each class instance was populated with over 17 attributes, including the following BMC Remedy Asset attributes and several other class-specific attributes: Name, Serial Number, Short Description, Owner Name, Owner Contact, Dataset Id, Reconciliation Identity, Category, Type, Item, Model, Manufacturer Name, Description, Version Number, Company, Department, Site, and IsVirtual.

The following tests were conducted to measure CI loading performance:

- Load regular CI for 1 million CIs (a total of two million CIs and relationships)
- Load regular CI for 2.5 million CIs (a total of five million CIs and relationships)

Chart 13 and Chart 14 show the throughput and resource usage during CI loading. BMC Remedy AR System server memory usage averaged 16 percent of the total memory, and database memory usage averaged 5 percent of the total memory.

Chart 13. Throughput for loading CIs

Chart 14. Resource utilization for loading CIs
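The figures above can be cross-checked. The Table 13 tree sums to 91 CIs, and dividing the quoted total of 4,977,500 instances by the 27,500 computer-system CIs gives 181 instances per tree, which implies that 90 relationship instances per tree were counted in that total (one fewer than the 91 quoted per iteration, consistent with the root having no parent relationship in the count):

```python
# Cross-check of the CI-load figures: CIs per computer-system tree from
# Table 13 against the totals quoted in the text.
cis_per_tree = 1 + 77 + 1 + 1 + 1 + 1 + 1 + 1 + 4 + 1 + 1 + 1
print(cis_per_tree)  # 91, matching "91 class instances ... per iteration"

computer_systems = 27_500
total_instances = 4_977_500
instances_per_tree = total_instances // computer_systems
print(instances_per_tree)  # 181: 91 CIs plus 90 relationship instances
```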

NE batch mode and results

The batch normalization mode normalized the two million and five million instances that were generated by the initial bulk load of CIs. Batch mode was used to represent a typical large customer environment going live in production. Table 14 summarizes the class distribution used in the normalization tests.

Table 14. Class distribution used for NE batch job

ClassName | Number of instances | Level | Relationship
BMC_ComputerSystem | 1 | 1 | BMC_Dependency
BMC_Product | 77 | 2 | BMC_HostedSystemComponents
BMC_Monitor | 1 | 2 | BMC_HostedSystemComponents
BMC_IPEndpoint | 1 | 2 | BMC_HostedAccessPoints
BMC_OperatingSystem | 1 | 2 | BMC_HostedSystemComponents
BMC_Person | 1 | 2 | BMC_Dependency
BMC_Printer | 1 | 2 | BMC_Dependency
BMC_Processor | 1 | 2 | BMC_HostedSystemComponents
BMC_DiskDrive | 4 | 2 | BMC_HostedSystemComponents
BMC_Card | 1 | 2 | BMC_HostedSystemComponents
BMC_BIOSElement | 1 | 2 | BMC_HostedSystemComponents
BMC_NetworkPort | 1 | 2 | BMC_HostedSystemComponents

The following attributes were populated for each product: Name, Serial Number, Short Description, Owner Name, Owner Contact, Dataset Id, Reconciliation Identity, Company, Category, Type, Item, Model, Manufacturer Name, Description, Version Number, Department, Site, IsVirtual, Patch Number, and Token Id.

Chart 15 and Chart 16 show the normalization throughput and resource utilization. BMC Remedy AR System server memory usage averaged 16 percent of the total memory, and database memory usage averaged 5 percent of the total memory.

Chart 15. Throughput for normalization

Chart 16. Resource utilization for normalization

RE batch mode and results

A standard reconciliation job was set up for this test case. The following two most common reconciliation activities were tested:

- Identifying class instances that represent the same entity in two or more datasets
- Merging class instances from one dataset (such as Discovery) into another dataset (by default, the production BMC.ASSET dataset)

All of the identification and merge settings use standard rules. These standard rules work with all classes in the Common Data Model (CDM) and the BMC extensions. They identify each class using attributes that typically have unique values, and they merge based on rules of precedence set for BMC datasets. The standard reconciliation job was configured in noncontinuous mode to identify and merge five million CIs and relationships. All CIs were created with the reconciliation identity set to 0, indicating that these newly created CIs had not yet been identified. The distribution of CIs across the classes used to create data for the RE batch jobs is summarized in Table 15.

Table 15. Class distribution used for RE batch job

ClassName | Number of instances | Level | Relationship
BMC_ComputerSystem | 1 | 1 | BMC_Dependency
BMC_Product | 77 | 2 | BMC_HostedSystemComponents
BMC_Monitor | 1 | 2 | BMC_HostedSystemComponents
BMC_IPEndpoint | 1 | 2 | BMC_HostedAccessPoints
BMC_OperatingSystem | 1 | 2 | BMC_HostedSystemComponents
BMC_Person | 1 | 2 | BMC_Dependency
BMC_Printer | 1 | 2 | BMC_Dependency
BMC_Processor | 1 | 2 | BMC_HostedSystemComponents
BMC_DiskDrive | 4 | 2 | BMC_HostedSystemComponents
BMC_Card | 1 | 2 | BMC_HostedSystemComponents
BMC_BIOSElement | 1 | 2 | BMC_HostedSystemComponents
BMC_NetworkPort | 1 | 2 | BMC_HostedSystemComponents

Chart 17 and Chart 18 show the reconciliation throughput and resource utilization. BMC Remedy AR System server memory usage averaged 15 percent of the total memory, and database memory usage averaged 5 percent of the total memory.
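The two reconciliation activities can be illustrated with a small sketch. This is not BMC's implementation or API; it assumes a hypothetical unique attribute (SerialNumber) for identification and a simple dataset-precedence rule for merging:

```python
# Illustrative sketch of the two reconciliation activities described above:
# identification (find instances that represent the same entity across
# datasets) and merge (copy into the target dataset by precedence).
def reconcile(source, target, identity_attr="SerialNumber",
              precedence=("BMC.ASSET", "Discovery")):
    merged = {ci[identity_attr]: ci for ci in target}
    for ci in source:
        key = ci[identity_attr]
        if key in merged:  # identification: same entity in both datasets
            # Merge: attribute values from the higher-precedence dataset win;
            # attributes missing there are filled from the other dataset.
            winner, loser = sorted((merged[key], ci),
                                   key=lambda c: precedence.index(c["Dataset"]))
            merged[key] = {**loser, **winner}
        else:              # new entity: bring it into the target dataset
            merged[key] = {**ci, "Dataset": "BMC.ASSET"}
    return list(merged.values())

target = [{"Dataset": "BMC.ASSET", "SerialNumber": "S1", "Owner": "IT"}]
source = [{"Dataset": "Discovery", "SerialNumber": "S1", "Model": "R710"},
          {"Dataset": "Discovery", "SerialNumber": "S2", "Model": "R720"}]
result = reconcile(source, target)
print(len(result))  # 2
```

In the example, the S1 instance from both datasets is identified as one entity and merged (the production dataset's attributes take precedence, the discovered Model fills a gap), while S2 is merged in as a new CI.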

Chart 17. Throughput for reconciliation

Chart 18. Resource utilization for reconciliation

BMC Atrium Discovery and Dependency Mapping tests and results

The BMC Atrium Discovery and Dependency Mapping 8.3 performance tests were conducted to show the discovery performance for approximately 5,707 servers, which translated to about 250,000 CIs and relationships. The BMC Atrium Discovery and Dependency Mapping appliance was hosted on a virtual machine (VM) running Linux.

Two methods of syncing were used: a full sync represents a first-time production roll-out scenario, while a delta sync represents checking for new servers in the environment. A full sync and a delta sync were conducted to measure throughput, which is summarized in Chart 19.

Chart 19. Throughput for BMC Atrium Discovery and Dependency Mapping full and delta syncs

Chart 20 summarizes CPU resource utilization for BMC Atrium Discovery and Dependency Mapping.
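The difference between the two sync modes can be sketched as follows. The function names and data shapes are hypothetical, not the ADDM API; the point is that a full sync pushes every discovered CI, while a delta sync pushes only what is new or changed since the last sync:

```python
# Illustrative sketch of the two sync modes (names are assumptions, not the
# ADDM API): full sync sends everything; delta sync sends only CIs that are
# new or changed relative to the last synced state.
def full_sync(discovered):
    return list(discovered.items())

def delta_sync(discovered, last_synced):
    return [(host, ci) for host, ci in discovered.items()
            if last_synced.get(host) != ci]

discovered = {"host-a": {"os": "linux"}, "host-b": {"os": "windows"}}
snapshot = dict(discovered)                 # state captured by the full sync
discovered["host-c"] = {"os": "linux"}      # a new server appears later

print(len(full_sync(discovered)))             # 3: first-time roll-out sends all
print(len(delta_sync(discovered, snapshot)))  # 1: only the new server
```

This is why the delta sync's effective throughput profile differs from the full sync's: the work is proportional to change, not to the whole estate.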

Chart 20. CPU resource utilization for BMC Atrium Discovery and Dependency Mapping

Mixed workload test and result

The mixed workload ran a typical day-in-the-life scenario: BMC Remedy IT Service Management, BMC Service Request Management, BMC Knowledge Management, and BMC Atrium CMDB were run simultaneously for one hour. The workload consisted of 3000 online users running the nominal workload, creating 10,000 new CIs and relationships, and updating 90,000 existing CIs and relationships, with normalization and reconciliation running in continuous mode.

Running the normalization and reconciliation processes in continuous mode provided near-real-time reconciliation of configuration items. In this mode, the reconciliation and normalization engines run continuously, reconciling and normalizing CIs in small batches based on either time-interval or record-count configuration settings. CIs and relationships are reconciled into the BMC.ASSET dataset, which contains approximately five million CIs and relationships. The source dataset, BMC.Dell, also contains approximately five million CIs and relationships.

The BMC Atrium CMDB tests for creating and updating CIs and relationships were executed on the BMC Remedy AR System server hosted on the dedicated integration server. This simulates a typical large customer deployment in which both BMC Remedy IT Service Management and BMC Atrium CMDB components are in production and hosted on separate nodes for better scalability and reliability. All BMC Remedy AR System servers participated in a server group environment in which the integration server was the primary server for all CMDB-related activities.
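Continuous mode, as described above, flushes small batches on either a time interval or a record count, whichever comes first. The following is an illustrative sketch of that scheduling behavior, not BMC's implementation; the parameter names are assumptions:

```python
# Illustrative sketch of continuous-mode processing: the engine accumulates
# incoming CIs and flushes a small batch when either the record-count
# threshold is reached or the time interval elapses.
import time

def continuous_engine(queue, process, interval_s=5.0, max_records=100):
    batch, last_flush = [], time.monotonic()
    while queue or batch:
        while queue and len(batch) < max_records:
            batch.append(queue.pop(0))          # pull newly arrived CIs
        interval_due = (time.monotonic() - last_flush) >= interval_s
        if batch and (interval_due or len(batch) >= max_records):
            process(batch)                      # normalize/reconcile the batch
            batch, last_flush = [], time.monotonic()
        elif batch:
            time.sleep(0.001)                   # wait for the interval to elapse

batches = []
queue = [f"ci-{i}" for i in range(250)]
continuous_engine(queue, lambda b: batches.append(len(b)),
                  interval_s=0.01, max_records=100)
print(batches)  # [100, 100, 50]
```

In the example, 250 queued CIs are flushed as two full 100-record batches immediately, and the 50-record remainder is flushed when the interval expires, which is the near-real-time behavior the test relied on.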

Mixed workload test results

All mixed-mode runs resulted in minimal impact on the BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management load tests while the BMC Atrium CMDB tests were simultaneously creating, updating, normalizing, and reconciling CIs in the system. Charts 21-23 show the response times for this mixed-mode test, organized by application.

Chart 21. BMC Remedy IT Service Management response time, 3000 users, nominal mixed workload

Chart 22. BMC Service Request Management response time, 3000 users, nominal mixed workload

Chart 23. BMC Knowledge Management response time, 3000 users, nominal mixed workload

Chart 24 and Chart 25 show the CPU consumption and the percentage of total system memory that each process consumes. Other AR System processes showed minimal usage, so only the BMC Remedy AR System server is presented here.

Chart 24. CPU consumption for 3000 users, mixed mode

Chart 25. Memory consumption for 3000 users, mixed mode

All transaction data created or modified in the system for the mixed-mode test runs is summarized in Table 16.

Table 16. Transactions created during 3000-user mixed workload

Entry type | 3000-user mixed workload
Incidents created | 881
Emails sent | 20,647
Changes created | 31
Service requests created | 2,426
Web reports created | 324
CI + relationship created, normalized, reconciled | 10,018
CI updated, normalized, reconciled | 89,988

To help establish a performance baseline for the BMC applications deployed in the Dell Solution Center environment, a few BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management use cases were tested manually to record response times using the HTTP Watch tool. HTTP Watch gives close to end-user response times and is more accurate than the stopwatch method. Chart 26 summarizes the manual end-user response times.

Chart 26. Manual end-user response times over LAN connection for 3000 users, mixed-mode workload

TCP ping network latency was 0 milliseconds. HTTP ping latency was 10 milliseconds.

Performance tuning and recommendations

Tuning parameters were set in the different tiers to achieve this highly scalable architecture in the benchmarking environment.

Mid tier settings

Tables 17-19 summarize the configuration settings used for all mid tier server instances.

Table 17. Mid tier server settings

Parameter | Setting
arsystem.cache_update_interval | 86400
arsystem.pooling_max_connections_per_server | 1500
arsystem.ehcache.overflowtodisk | TRUE
arsystem.ehcache.diskpersistent | TRUE
arsystem.ehcache.overflowtodisktemp | FALSE
arsystem.formhtmljs_expiry_interval | 86400
arsystem.resource_expiry_interval | 86400
arsystem.log_level | Severe
arsystem.log_category | INTERNAL
arsystem.ehcache.maxelementsinmemory | 30000
arsystem.ehcache.referencemaxelementsinmemory | 1800
Mid-tier Life cycle mgmt & Connection Life span option | Enabled; Life cycle management & Connection Life span set to 15 minutes

Table 18. Tomcat server settings

Parameter | Setting
acceptcount | 500
connectiontimeout | 60000
maxsparethreads | 100
maxthreads | 3000
minsparethreads | 50
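For reference, the arsystem.* parameters in Table 17 are set in the mid tier's config.properties file as plain key=value pairs (the exact file location depends on the installation; the life-cycle management option in the last row of Table 17 is configured separately). The Table 17 values would appear there as:

```properties
arsystem.cache_update_interval=86400
arsystem.pooling_max_connections_per_server=1500
arsystem.ehcache.overflowtodisk=TRUE
arsystem.ehcache.diskpersistent=TRUE
arsystem.ehcache.overflowtodisktemp=FALSE
arsystem.formhtmljs_expiry_interval=86400
arsystem.resource_expiry_interval=86400
arsystem.log_level=Severe
arsystem.log_category=INTERNAL
arsystem.ehcache.maxelementsinmemory=30000
arsystem.ehcache.referencemaxelementsinmemory=1800
```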

Table 19. JVM settings

Parameter | Setting
-Dcatalina.base= | D:\BMCPrereqs\Apache\Tomcat
-Dcatalina.home= | D:\BMCPrereqs\Apache\Tomcat
-Djava.endorsed.dirs= | D:\BMCPrereqs\Apache\Tomcat\endorsed
-Djava.io.tmpdir= | D:\BMCPrereqs\Apache\Tomcat\temp
-Djava.util.logging.manager= | org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file= | D:\BMCPrereqs\Apache\Tomcat\conf\logging.properties
-XX:+UseCompressedOops |
-XX:+UseConcMarkSweepGC |
-XX:+UseParNewGC |
-XX:ErrorFile= | D:\BMCPrereqs\Apache\Tomcat\logs\hs_err_pid<pid>.log
-XX:PermSize= | 256m
-Djava.library.path= | D:\BMCPrereqs\Apache\Tomcat\shared\lib
-XX:+HeapDumpOnOutOfMemoryError |
-Dcom.sun.management.jmxremote |
-Dcom.sun.management.jmxremote.port= | 8086
-Dcom.sun.management.jmxremote.ssl= | FALSE
-Dcom.sun.management.jmxremote.authenticate= | FALSE
JVM min heap size | 2000 MB
JVM max heap size | 2500 MB

F5 load balancer settings

Table 20 summarizes the configuration settings used for load balancing.

Table 20. F5 Load Balancer settings

Parameter | Setting
BIG-IP code version deployed | v10.2.3
Sticky bit on Mid-tier Virtual server List | Enabled
Sticky bit on AR Virtual server List | Disabled
TCP Idle Timeout (AR Virtual Servers) | 7260 seconds
TCP Idle Timeout (Mid-tier Virtual Servers) | 7260 seconds

Parameter | Setting
OneConnect profile (server-side TCP multiplexing) | Enabled
HTTPS virtual servers irule that does a LB::detach for every new HTTP_REQUEST | Added
Two new SNAT pools, each with 4 IP addresses (8 total additional addresses); one SNAT pool goes to the HTTP virtual servers and the other goes to the RPC virtual server | Added

F5 BIG-IP modifications for BMC Remedy

1. Create the irule to handle the load balance detach function:
   a) Navigate to Local Traffic > iRules > iRule List.
   b) Click Create and define the irule as shown in the following screenshot.
   c) Click Finished.
2. Create a OneConnect profile:
   a) Navigate to Local Traffic > Profiles > Other > OneConnect.
   b) Click Create and define the profile as shown in the following screenshot.
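The irule screenshot is not reproduced here. Based on the description in Table 20 (an irule that performs an LB::detach for every new HTTP_REQUEST), a minimal form of the step 1 irule would be:

```tcl
when HTTP_REQUEST {
    # Detach the server-side connection for every new HTTP request so the
    # OneConnect profile can re-use or re-balance it on the next request.
    LB::detach
}
```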

3. Create a SNAT pool for additional TCP port capacity:
   a) Navigate to Local Traffic > SNATs > SNAT Pool.
   b) Click Create and create the SNAT pool as shown in the following screenshot.

4. Modify the BMC virtual server to add the new profile and SNAT pool:
   a) Navigate to Local Traffic > Virtual Servers > Virtual Server List.
   b) Click the BMC virtual server and configure it as shown in the following screenshot: select the BMC OneConnect profile and the BMC SNAT pool, and click Update.
5. Add the new irule as shown in the following screenshots:

Click Resources, click Manage, select the BMC irule, and click Finished.

BMC Remedy AR System server settings

Table 21 summarizes the configuration settings used for BMC Remedy AR System server instances.

Table 21. BMC Remedy AR System server settings

Parameter | Setting
Delay-Recache-Time | 300
Max-Entries-Per-Query | 2000
Next-ID-Block-Size | 100
Server-Side-Table-Chunk-Size | 1000
Allow-Unqual-Queries | F
Cache-Mode | 0
Debug-mode | 0
Submitter-Mode | 1
Authentication-Chaining-Mode | 0
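On the server, these options are plain Option: value lines in the AR System configuration file (ar.cfg on Windows). A sketch of the corresponding fragment, using the values as given in Table 21 (the Allow-Unqual-Queries value is read here as F, which the source table suggests but does not show clearly):

```
Delay-Recache-Time: 300
Max-Entries-Per-Query: 2000
Next-ID-Block-Size: 100
Server-Side-Table-Chunk-Size: 1000
Allow-Unqual-Queries: F
Cache-Mode: 0
Debug-mode: 0
Submitter-Mode: 1
Authentication-Chaining-Mode: 0
```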