Red Hat Reference Architecture Series


Performance & Scaling of the PROFILE Core Banking Platform
(based on Fidelity National Information Services PROFILE)

Fidelity Information Services Profile 7
GT.M 5.2
Red Hat Enterprise Linux 5
HP ProLiant DL580 G5

Version 1.0
November 2007

Performance & Scaling of the PROFILE Core Banking Platform (based on Fidelity National Information Services PROFILE)

Red Hat, Inc.
1801 Varsity Drive
Raleigh NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
PO Box 13588
Research Triangle Park NC 27709 USA

"Red Hat," Red Hat Linux, the Red Hat "Shadowman" logo, and the products listed are trademarks or registered trademarks of Red Hat, Inc. in the United States and other countries. Linux is a registered trademark of Linus Torvalds. All other trademarks referenced herein are the property of their respective owners.

© 2007 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, V1.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/). The information contained herein is subject to change without notice. Red Hat, Inc. shall not be liable for technical or editorial errors or omissions contained herein. Distribution of modified versions of this document is prohibited without the explicit permission of Red Hat, Inc. and Fidelity National Information Services, Inc. Distribution of the work or derivative of the work in any standard (paper) book form for commercial purposes is prohibited unless prior permission is obtained from Red Hat, Inc. and Fidelity National Information Services, Inc.

The GPG fingerprint of the security@redhat.com key is: CA 20 86 86 2B D6 9D FC 65 F6 EC C4 21 91 80 CD DB 42 A6 0E

www.fidelityinfoservices.com 2 www.redhat.com

Table of Contents

1. Executive Summary
2. FIS PROFILE - Architecture
   2.1 Multi-Threaded, Message-Based Online Processing
   2.2 Multi-Threaded Scheduled Processing
3. FIS PROFILE Benchmarking Methodology
   3.1 Simulating Online Processing
   3.2 Simulating Scheduled Processing
       3.2.1 Scheduled Transaction Posting
       3.2.2 Interest Posting
       3.2.3 Accruals and Account Housekeeping
4. FIS PROFILE - Benchmark Description and Results
   4.1 System Under Test (SUT)
       4.1.1 Server
       4.1.2 Storage
       4.1.3 OS & System Software
       4.1.4 Application, DB & Other Layered Software
   4.2 Benchmark Database
       4.2.1 Database Configuration
       4.2.2 Database and Database Journal File Systems
       4.2.3 Database File Size (mounted in /fis14) - 1M Account DB
       4.2.4 Database File Size (mounted in /fis14) - 25M Account DB
       4.2.5 Database Product Distribution
       4.2.6 Database Transaction Volumes
   4.3 Test Results
       4.3.1 Online & Scheduled Process TPS
       4.3.2 Scheduled Process Runtimes
       4.3.3 Average CPU Utilization (Steady State)
   4.4 Resource Utilization Plots
       4.4.1 Resource Utilization (CPU and Disk) - 1 Million Account DB
       4.4.2 Resource Utilization (CPU and Disk) - 25 Million Account DB
5. Conclusions

1. Executive Summary

Fidelity National Information Services, Inc. (FIS), a worldwide leader in processing and technology solutions for financial institutions, recently announced new performance benchmarks for the FIS PROFILE core banking application, operating on one of the market's newest technology platforms created through its go-to-market alliance with leading technology providers Red Hat, Intel and HP.

On this market-leading technology stack, consisting of the latest HP ProLiant DL580 G5 server platform featuring Intel 7300 series quad-core Xeon processors, the Red Hat Enterprise Linux 5 (RHEL 5) operating system and Cisco Systems networking infrastructure, PROFILE's performance was benchmarked at more than 3,200 online banking transactions per second on a 25 million account database. The real-time core banking solution also processed 25 million interest accruals and balance accumulation updates at a rate exceeding 25,800 transactions per second, or more than 93 million transactions per hour.

Transactions Per Second (TPS) Rates

                        Online TPS    Online Response Time (seconds)    Accruals Processing Accounts/Sec
  1 Million Accounts    3,682.31                                        31,560
  25 Million Accounts   3,269.36                                        25,853

"As demonstrated in these benchmark results, PROFILE continues to surpass performance and scalability standards throughout the financial services market, exceeding the capacity requirements of 99 percent of the banking market," said Frank Sanchez, president of FIS' Enterprise Solutions division. "It also showcases one of the industry's lowest total-cost-of-ownership systems in the market today."

The tightly engineered market alliance among FIS, Red Hat, Intel and HP and its resulting technology platform is distinctive not only because it creates one of the best business-continuity models in the industry, but because it also creates a packaged, validated and certified configuration that is generally available across all components.
"By bringing together these powerful components of the overall solution, this alliance highlights the performance, the availability and the operational completeness of the solution," said Sanchez. "This unique alliance and the impressive total cost of ownership advantages it brings to the financial services markets directly support our ongoing thought leadership toward promoting new technology solutions to meet emerging market needs, and our joint commitment to providing the industry with the lowest total cost of ownership of any banking platform system."

Robert Hunt, research director in the Retail Banking Practice at TowerGroup, believes that a compelling business case exists for mid-tier banks to implement a single, integrated core banking system that utilizes a low-cost operating platform. "Most mid-tier banks continue to process their core systems using best-of-breed mainframe-based systems developed in the 1980s," said Hunt. "The hardware and software savings achievable by modernizing the core processing environment can provide banks with a significant cost advantage."

PROFILE is FIS' premier real-time, multicurrency core banking system that supports hundreds of institutions around the world, ranging from de novo startups to top-tier global banks. Companies using PROFILE as their core banking system experience industry-leading total cost of ownership benefits based on lower infrastructure costs, lower ongoing operating costs and increased productivity.
PROFILE running on Red Hat Enterprise Linux and Intel Xeon based servers provides mainframe-class performance at a fraction of the cost, with greater than 10x savings in cost per account, per month compared to legacy systems, because of a combination of factors including:

  Savings on Hardware = commodity Intel Xeon processor based servers compared to proprietary servers
  Savings on Open Source Operating System = Red Hat Enterprise Linux 5 (RHEL 5)
  Savings on Open Source Database = FIS GT.M 5.2

As testing and development continue, additional savings are expected in PROFILE and in other components of the FIS banking application environment which surrounds PROFILE:

  Savings on Logical Volume Manager (LVM), Multi-path I/O Software (MPIO), Global File System (GFS) = included with RHEL 5
  Savings on Open Source Java Enterprise Edition, Enterprise Portal, SOA = JBoss

2. FIS PROFILE - Architecture

2.1 Multi-Threaded, Message-Based Online Processing

The PROFILE Application Server is a long-running process designed to receive a message from an external touchpoint; format, interpret, process, and commit a transaction to the database; and then return a response to the originating client. This server architecture is available to all types of clients, including front-end applications as well as middle-tier solutions (e.g., the FIS Xpress application server), by exposing APIs and specific formats and standards (such as IFX). Examples of client applications include FIS Touchpoint solutions, FIS Profile Direct, 3rd party teller systems, online debit networks and ODBC/JDBC clients.

Figure 1

The Profile Application Server depends on a mechanism to deliver messages across LANs/WANs and into the Profile system for processing, and for subsequent delivery of responses back to the originating client. The proprietary FIS mechanism used here is a communications monitor called the Message Transport Monitor (MTM). The MTM receives messages sent to the Profile Application Server over TCP/IP. The MTM routes client messages to an available Profile server from a pool of processes that it manages, and also routes the corresponding reply message back to the originating client. The MTM uses queues to send and receive messages to/from the FIS Profile server processes.

FIS Profile server processes are multi-purpose, homogeneous instances of the application, each fully capable of responding to all supported message types, including remote procedure calls (RPCs) to the PROFILE application, SQL transactions, industry-standard and proprietary financial transactions, and customized message interfaces. Since any application server process can respond to any and all messages, separate function-oriented servers are not required and server-to-server inter-process communication is eliminated. This architecture simplifies configuration and tuning requirements and optimizes server availability. Multiple server processes provide scalability in an SMP environment, improve resource overlap and utilization, and also improve reliability and response-time consistency. In the event that any one server process encounters a long-running transaction, the effect is limited to that single transaction.

The FIS Profile banking server architecture supports multiple MTM processes, each listening on its own socket and with its own pool of application server processes. In addition to providing horizontal scalability, utilizing this capability can provide application granularity by establishing classes of service. For example, client service representatives providing private-bank services to high net worth customers could be associated with separate MTM and banking server processes. As a practical matter, the level of performance achieved already means that few, if any, customer sites choose to run multiple MTM processes.
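The monitor-plus-homogeneous-pool pattern is easy to picture in code. The following is an illustrative Python sketch, not FIS code: names such as profile_server and run_mtm are invented for the example, and the MTM's TCP/IP front end is reduced to in-process queues. The essential properties survive: one request queue feeds a pool of identical worker processes, any worker can answer any message type, and replies carry the client id so the monitor can route them back.

```python
import multiprocessing as mp

def profile_server(requests: mp.Queue, replies: mp.Queue) -> None:
    """A homogeneous server process: any instance handles any message type."""
    handlers = {
        "RPC": lambda p: f"rpc-result:{p}",
        "SQL": lambda p: f"rows:{p}",
        "FIN": lambda p: f"posted:{p}",
    }
    while True:
        msg = requests.get()
        if msg is None:                      # shutdown sentinel from the monitor
            break
        client_id, msg_type, payload = msg
        replies.put((client_id, handlers[msg_type](payload)))

def run_mtm(messages, n_servers=4):
    """Toy monitor: queue inbound messages to a pool of interchangeable
    server processes and collect each reply for routing back to its client."""
    requests, replies = mp.Queue(), mp.Queue()
    pool = [mp.Process(target=profile_server, args=(requests, replies))
            for _ in range(n_servers)]
    for p in pool:
        p.start()
    for m in messages:
        requests.put(m)
    results = dict(replies.get() for _ in messages)  # one reply per message
    for _ in pool:
        requests.put(None)
    for p in pool:
        p.join()
    return results

if __name__ == "__main__":
    out = run_mtm([(1, "FIN", "debit/acct42"), (2, "SQL", "acct42")])
    print(out[1])   # posted:debit/acct42
```

Because every worker is interchangeable, a slow transaction ties up only the one process handling it, which is exactly the availability argument made above.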
Rates exceeding 3,200 online financial transactions per second have been achieved in benchmarking.

2.2 Multi-Threaded Scheduled Processing

The accounting and/or operational requirements of a financial institution may dictate that certain processes be performed on a periodic, or scheduled, basis (e.g., daily, monthly or annually). Many of these processes, such as account interest accrual and posting, involve a high percentage, and sometimes all, of an institution's accounts. Traditional batch core processing systems perform these activities as jobs that move data from one flat file to another, and can operate rapidly because the flat files are optimized for sequential access. Real-time processing demands, however, require transaction consistency across the entire database 100% of the time.

FIS Profile is a real-time system that processes client financial transactions on a 24x7 basis. Within its architecture, scheduled processes must interact with the same database as the online transactions and must serialize with other transactions occurring concurrently. Typically, scheduled processes must complete within a predetermined processing window (for example, there may be a deadline to provide a feed to a general ledger system), resulting in throughput requirements that are often significantly greater than those of online transaction processing. For example, the peak online transaction rates of even the world's largest banks are typically below one thousand transactions per second, a number that FIS Profile can handle with capacity to spare. On the other hand, the overnight processing of a bank with tens of millions of accounts requires scheduled transaction rates on the order of many thousands of accounts per second.

FIS Profile real-time scheduled processing is designed to achieve maximum performance on today's multiprocessor/multi-core computer systems. All standard scheduled processes that involve a large number of records are multi-threaded. In multi-threaded scheduled processing, multiple server processes are started, each of which works separately and independently on a portion of the total workload. A process called the Heavy Thread Manager (HTM) parcels out groups of records to the server processes. Since the work is distributed among many server processes, the overall throughput is multiplied and the elapsed time to complete a scheduled process is reduced. The actual time to complete a scheduled process is determined by a number of factors, including the hardware configuration, the logic complexity and I/O requirements of the scheduled process, and overall system activity. Rates of over 25,000 scheduled transactions per second have been achieved during benchmarking.

FIS Profile processing and the DBMS have been designed to work with each other. During scheduled processing, the updating of each account is treated as a transaction, and each account is therefore a checkpoint.
If the system crashes during a scheduled process and then recovers, the restarted scheduled process determines the last account processed and then continues with the next account. FIS Profile real-time scheduled processes use a relaxed algorithm to commit transactions to the database: in the event that the system crashes during a scheduled process, the work is reprocessed from the point where the last transaction was committed, or checkpointed. This feature substantially improves throughput in time-critical scheduled processing at no cost during normal processing. If an online transaction is received and processed during a scheduled process, any preceding scheduled transactions are immediately checkpointed, thereby maintaining transaction serialization.

The combination of multi-threaded real-time scheduled processing and checkpointing yields scheduled-processing rates on the order of thousands of transactions per second. This throughput is required to meet the needs of financial institutions with millions to tens of millions of accounts. FIS Profile is unique in its ability to synchronize scheduled and online transactions, supporting the requirements of a true 24x7 real-time system.
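The interplay of HTM-style work parceling and per-account commits can be sketched in a few lines. This is an illustrative Python reduction, not FIS code: the HTM, the GT.M commits and the parallel worker processes are collapsed into plain data structures, with a set standing in for the durable commit log.

```python
def accrue(account):
    """Stand-in for one account's accrue-and-housekeep transaction."""
    return account

def run_scheduled(accounts, n_threads, committed, fail_at=None):
    """Visit every account once. `committed` stands in for the durable commit
    log: each committed account is a checkpoint, so a restarted run simply
    skips accounts that were committed before the crash."""
    chunks = [accounts[i::n_threads] for i in range(n_threads)]  # HTM-style parceling
    for chunk in chunks:            # the real system runs these in parallel processes
        for acct in chunk:
            if acct in committed:
                continue            # already checkpointed before the crash
            if fail_at is not None and len(committed) == fail_at:
                raise RuntimeError("simulated crash mid-run")
            accrue(acct)
            committed.add(acct)     # commit == checkpoint for this account

committed = set()
accounts = list(range(1_000))
try:
    run_scheduled(accounts, n_threads=16, committed=committed, fail_at=400)
except RuntimeError:
    pass                            # crash: 400 accounts were already durable
run_scheduled(accounts, n_threads=16, committed=committed)  # restart resumes
assert committed == set(accounts)   # every account processed exactly once
```

The sketch shows why checkpointing is cheap during normal operation: the commit that makes each account durable is work the system does anyway, and recovery is just "skip what is already committed."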

3. FIS PROFILE Benchmarking Methodology

Depending on the specific platform solution being benchmarked, databases with several million accounts are routinely tested. These databases have a distribution of accounts based on a matrix of products typical to the target market (very large banks within a specific global region). In fact, the standard methodology evolved from a benchmark conducted in 1997 for a multi-national bank that required the Profile database to scale to 40 million accounts.

The benchmark consists of two parts: real-time scheduled processing and online transaction processing. The objective of the real-time scheduled processing tests is to measure how efficiently a set of standard Profile scheduled processes, representing a typical set of tasks required in daily overnight processing, can complete. The objective of the online transaction processing tests is to create a maximum load of transactions using workload generators, to determine how much transaction throughput the system can sustain.

The benchmark test suite represents key business functions as well as significant underlying processing against the Profile database. Online transactions are measured in terms of response times of complete, atomic messages processed by the Profile Application Server. The financial transaction messages process two-sided transactions (debits and credits to the account as well as to the general ledger account). The real-time scheduled process transactions selected for the tests serve as key indicators of overall processing performance, both because of their underlying impact on the Profile database and because they represent the real-world daily processing requirements of any sized banking application.

3.1 Simulating Online Processing

For the Profile online transaction processing benchmark, a transaction generator is used to create complete network messages that simulate online transaction processing requests from clients.
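A client-side generator of this kind can be approximated in a few lines. This is an illustrative Python sketch, not the Fidelity-developed Java load tool used in the tests; function names and the amount range are invented, while the 60% monetary / 40% inquiry mix and the random account selection mirror the online test described in this section.

```python
import random

def make_message(rng, n_accounts):
    """One simulated client request: a monetary transaction 60% of the time,
    an account inquiry 40%, against a uniformly random account."""
    acct = rng.randrange(n_accounts)
    if rng.random() < 0.60:
        amount = round(rng.uniform(1.00, 500.00), 2)  # illustrative debit/credit amount
        return ("FIN", acct, amount)
    return ("INQ", acct, None)

def generate_load(n_messages, n_accounts=1_000_000, seed=7):
    """Build one batch of messages. Real generators run many such clients in
    parallel, throttled on the client side, and scale out horizontally."""
    rng = random.Random(seed)
    return [make_message(rng, n_accounts) for _ in range(n_messages)]

msgs = generate_load(10_000)
monetary = sum(1 for kind, _, _ in msgs if kind == "FIN")
print(f"monetary share: {monetary / len(msgs):.1%}")   # close to 60%
```

Because each generated message is self-contained, adding load is just a matter of starting more generator processes, which is what lets the benchmark keep increasing pressure until the server, not the network or the clients, is the limit.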
Since the Profile Application Server receives and responds to messages it receives from the message transport, it neither knows nor cares where the messages actually originate. The client transaction simulators generate messages, throttled on the client side, and can continue to increase the load on the Profile Application Server side. The load generators scale horizontally, allowing an unlimited load to be created and sent to the Profile Application Servers. Typically, two sub-networks are used to connect the workstations to separate network cards on the Profile server, to ensure that the network does not limit transaction throughput.

The online test simulates online activity including monetary transactions (60%) and account inquiries (40%). This set of tests utilized a Fidelity-developed Java load tool to generate the online load. These transactions are passed to Profile Application Servers on the target machine through Profile's Message Transport Monitor (MTM). The MTM is connected to the clients using a TCP/IP connection. Transactions received by the server(s) are processed and answered with a reply message.

Figure 2

The client transaction generators create messages for random transactions against random accounts in the database according to a statistical distribution.

3.2 Simulating Scheduled Processing

The real-time scheduled processes are: Scheduled Transaction Posting, Interest Posting, and Accrual and Account Update Processing.

Scheduled transaction posting refers to a class of operations where a debit or credit transaction is received in a system file from an external source, such as a clearing house, or may even be generated within Profile, such as a scheduled transfer of an amount from one account to another. The Profile benchmark includes three processes that perform real-time scheduled processing. Scheduled Transaction Posting takes the transactions from a system file and processes them with multiple threads; an institution that receives and processes a daily feed with a large number of transactions from a clearinghouse would typically process those transactions using this functionality. In the standard benchmark, Accrual and Account Update Processing accesses and updates every account in the database to calculate and update the accrued interest, and performs other housekeeping functions for each account. Interest Posting credits accrued interest to those accounts that are scheduled to receive interest on the posting date.

3.2.1 Scheduled Transaction Posting

General real-time Scheduled Transaction Posting is a multi-threaded function that processes financial transaction records delivered in a standard input/source table. For the benchmark, an input file containing random accounts was used to generate the transaction set. This test measures transaction processing from simulated sources, such as ACH or batch ATM.

3.2.2 Interest Posting

Account Interest Posting is a multi-threaded real-time scheduled function that processes transactions to those accounts that are scheduled to post interest on the system date that the test is run. The account database was created with a significantly large number of accounts set to post interest transactions as part of this test.

3.2.3 Accruals and Account Housekeeping

Accrual and account update processing is a multi-threaded, real-time scheduled process that calculates one day's accrued interest and updates the applicable fields for every account record in the database. This information is summarized by currency code, product class, product group, product type, G/L set code and cost center. The summary information is one of the sources that feed the General Ledger.
Because every account record in the database is processed, this function serves as a significant indicator of overall performance.
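In outline, the per-account work is a small calculation plus an aggregation. The following Python sketch is illustrative only: the actual/365 day count, the rates and the exact grouping keys are assumptions standing in for PROFILE's product-specific rules, which this document does not specify.

```python
from collections import defaultdict
from decimal import Decimal

def daily_accrual(balance: Decimal, annual_rate: Decimal) -> Decimal:
    # Assumed actual/365 day count -- a stand-in for product-specific conventions.
    return (balance * annual_rate / Decimal(365)).quantize(Decimal("0.01"))

accounts = [
    # (currency, product class, product type, balance, annual rate) -- sample data
    ("USD", "Deposit", 40, Decimal("2500.00"), Decimal("0.03")),
    ("USD", "Deposit", 40, Decimal("1000.00"), Decimal("0.03")),
    ("USD", "Loan",    50, Decimal("9000.00"), Decimal("0.07")),
]

# Every account is visited; accruals are summarized by grouping key, and the
# summary becomes one of the feeds into the General Ledger.
gl_summary = defaultdict(Decimal)
for currency, pclass, ptype, balance, rate in accounts:
    gl_summary[(currency, pclass, ptype)] += daily_accrual(balance, rate)

print(gl_summary[("USD", "Deposit", 40)])   # 0.29
```

The point of the sketch is the shape of the workload: a full scan touching every account with a cheap computation each, which is why this process stresses aggregate database and I/O throughput rather than any single transaction.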

4. FIS PROFILE - Benchmark Description and Results

The 25 million account benchmark uses:

  1 load generator per Intel Xeon CPU core, with 10 client processes per load generator, for a total of 16 load generators and 160 client processes.
  8 MTMs with 8 PROFILE server processes each, for a total of 64 PROFILE server processes.
  Each scheduled job spawns 16 heavy-threaded processes dedicated to that job.

4.1 System Under Test (SUT)

4.1.1 Server

  Vendor/Model:  1 x HP ProLiant DL580 G5 (based on Intel Xeon Processor X7350, "Tigerton")
  Processors:    4 sockets, 4 cores/socket = 16 cores
  Clock Speed:   2.93 GHz
  Memory:        128 GB

4.1.2 Storage

  Vendor/Model:     4 x HP StorageWorks MSA70 Direct Attached Storage enclosures with 25 HDDs each
  Disk Controller:  4 x HP Smart Array P800 Controllers
  Disks:            100 x 146 GB 10K RPM Serial Attached SCSI (SAS) drives (14.4 TB raw)
  Fault Tolerance:  Support for various RAID levels (0 / 1 / 5 / 6):
    RAID 6 with ADG: allocates the equivalent of two parity drives across multiple drives and allows simultaneous write operations.
    Distributed Data Guarding (RAID 5): allocates parity data across multiple drives and allows simultaneous write operations.
    Drive Mirroring (RAID 1 and RAID 1+0 striped mirroring): allocates half of the drive array to data and the other half to mirrored data, providing two copies of every file.
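The capacity cost of these RAID levels is simple arithmetic. A quick sketch, assuming equal-size drives and ignoring filesystem and formatting overhead:

```python
def usable_gb(n_drives: int, drive_gb: int, raid: str) -> int:
    """Usable capacity under the RAID levels listed above: RAID 0 uses every
    drive for data, RAID 1+0 mirrors (half the array holds copies), RAID 5
    gives up one drive's worth of parity, and RAID 6/ADG gives up two."""
    if raid == "1+0":
        return (n_drives // 2) * drive_gb
    parity = {"0": 0, "5": 1, "6": 2}[raid]
    return (n_drives - parity) * drive_gb

# e.g. a 12-spindle group of 146 GB drives, the size used for the database
# file systems in this benchmark:
for level in ("0", "1+0", "5", "6"):
    print(f"RAID {level}: {usable_gb(12, 146, level)} GB")
# RAID 0: 1752 GB, RAID 1+0: 876 GB, RAID 5: 1606 GB, RAID 6: 1460 GB
```

RAID 1+0 pays the largest capacity penalty but avoids the parity read-modify-write cost on small random writes, which is why it is the common choice for database and journal volumes like those in section 4.2.2.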

Figure 3

4.1.3 OS & System Software

  Vendor:          Red Hat
  Version:         Red Hat Enterprise Linux (RHEL) 5, 64-bit
  Kernel Version:  2.6.18-8.1.15.el5 SMP
  File System:     EXT3
  Other:           The RHEL High Availability software solution is not configured

4.1.4 Application, DB & Other Layered Software

  Application:  PROFILE v7
  Database:     GT.M v5.2-b; single database instance; database journaling enabled

4.2 Benchmark Database

4.2.1 Database Configuration

  Database Configuration   # of Customer Records   # of Account Records   # of History Records
  1 Million Accounts       1,000,000               1,000,000              5,000,000
  25 Million Accounts      25,000,000              25,000,000             125,000,000

4.2.2 Database and Database Journal File Systems

  Mount Point   RAID Level   Number of Spindles   Estimated Size   Enclosure Number
  /fis14        1+0          12 x 146 GB          880 GB           4
  /fis12        1+0          12 x 146 GB          880 GB           3

4.2.3 Database File Size (mounted in /fis14) - 1M Account DB

  Database Name   Bytes
  profile.acn     2,52,145,48
  profile.cif     4,924,887,552
  profile.dtx     4,14,81,92
  profile.hist    12,312,113,664
  profile.tmp     41,468,864
  profile.ttx     82,871,68
  profile.ubg     2,52,76,32
  profile.xref    4,14,81,92
  Total           49,248,727,4

4.2.4 Database File Size (mounted in /fis14) - 25M Account DB

  Database Name   Bytes
  profile.acn     51,3,262,4
  profile.cif     12,722,516,48
  profile.dtx     8,28,97,792
  profile.hist    41,4,224,768
  profile.tmp     1,26,73,88
  profile.ttx     2,52,76,32
  profile.ubg     5,13,88,96
  profile.xref    1,26,17,776
  Total           131,739,447,296

4.2.5 Database Product Distribution

  Product Class   Product Group                 Product Type   % Distribution   1M DB Accounts   25M DB Accounts
  Deposit         Certificate of Deposit (CD)   35             10%              100,000          2,500,000
  Deposit         Regular Checking (DDA)        4              40%              400,000          10,000,000
  Deposit         Regular Savings (SAV)         3              20%              200,000          5,000,000
  Loan            Installment LN                5              10%              100,000          2,500,000
  Loan            Fixed MTG LN                  7              10%              100,000          2,500,000
  Loan            Revolving Credit (RC)         6              10%              100,000          2,500,000
  Total DB Size                                                                 1,000,000        25,000,000

4.2.6 Database Transaction Volumes

Database Transaction Volumes  Online Transactions  Transaction Inclearing  Interest Posting  Accrual Processing
1 Million Accounts            800,000              200,000                 3,000,000         10,000,000
25 Million Accounts           800,000              300,000                 7,500,000         25,000,000

4.3 Test Results

4.3.1 Online & Scheduled Process TPS

TPS Rates                         1 Million Accounts  25 Million Accounts
Online TPS                        3,682               3,269
Online Response Time (seconds)    0.31                0.36
Transaction Posting TPS           4,545               1,068
Interest Posting TPS              7,557               7,591
Accruals Processing Accounts/Sec  31,056              25,853

4.3.2 Scheduled Process Runtimes

Scheduled Process Runtimes                  1 Million Accounts  25 Million Accounts
Avg. Transaction Posting Runtime (Seconds)  44                  281
Avg. Interest Posting Runtime (Seconds)     397                 988
Avg. Accruals Processing Runtime (Seconds)  322                 967

4.3.3 Average CPU Utilization (Steady State)

CPU Utilization      Online Simulation  Transaction Posting  Interest Posting  Accruals Processing
1 Million Accounts   54%                40%                  75%               97%
25 Million Accounts  53%                19%                  77%               70%
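The scheduled-process rates can be cross-checked against the transaction volumes (section 4.2.6) and average runtimes (section 4.3.2). Assuming volumes of 200,000 and 300,000 inclearing transactions, 3.0 and 7.5 million interest postings, and 10 and 25 million accrual accounts for the 1M and 25M databases respectively, the relation rate = volume / runtime reproduces every reported figure to within about 0.1%:

```python
# Cross-check reported batch rates: rate ~= transaction volume / avg runtime (s).
runs = {
    #               (volume,     runtime_s, reported_rate)
    "posting_1M":   (200_000,     44,        4_545),
    "posting_25M":  (300_000,    281,        1_068),
    "interest_1M":  (3_000_000,  397,        7_557),
    "interest_25M": (7_500_000,  988,        7_591),
    "accruals_1M":  (10_000_000, 322,       31_056),
    "accruals_25M": (25_000_000, 967,       25_853),
}
for name, (volume, runtime, reported) in runs.items():
    derived = volume / runtime
    pct = 100 * abs(derived - reported) / reported
    print(f"{name:13s} derived {derived:9,.0f}  reported {reported:6,}  ({pct:.3f}% off)")
```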

4.4 Resource Utilization Plots

4.4.1 Resource Utilization (CPU and Disk) - 1 Million Account DB

On Line Simulation
[Figure: System Summary, ProfileSvr1, 11/6/2007 - CPU utilization (usr%+sys%) with disk transfers/sec, and disk read/write throughput (KB/s), 12:14-12:21]

Scheduled Transaction Posting
[Figure: System Summary, ProfileSvr1, 11/6/2007 - CPU utilization (usr%+sys%) with disk transfers/sec, and disk read/write throughput (KB/s)]

Scheduled Interest Posting
[Figure: System Summary, ProfileSvr1, 11/6/2007 - CPU utilization (usr%+sys%) with disk transfers/sec, and disk read/write throughput (KB/s), 12:26-12:33]

Scheduled Accruals Processing
[Figure: System Summary, ProfileSvr1, 11/6/2007 - CPU utilization (usr%+sys%) with disk transfers/sec, and disk read/write throughput (KB/s), 12:34-12:40]

4.4.2 Resource Utilization (CPU and Disk) - 25 Million Account DB

On Line Simulation
[Figure: System Summary, ProfileSvr1, 11/7/2007 - CPU utilization (usr%+sys%) with disk transfers/sec, and disk read/write throughput (KB/s), 8:49-8:56]

Scheduled Transaction Posting
[Figure: System Summary, ProfileSvr1, 11/7/2007 - CPU utilization (usr%+sys%) with disk transfers/sec, and disk read/write throughput (KB/s), 9:04-9:10]

Scheduled Interest Posting
[Figure: System Summary, ProfileSvr1, 11/2/2007 - CPU utilization (usr%+sys%) with disk transfers/sec, and disk read/write throughput (KB/s), 14:59-15:16]

Scheduled Accruals Processing
[Figure: System Summary, ProfileSvr1, 11/7/2007 - CPU utilization (usr%+sys%) with disk transfers/sec, and disk read/write throughput (KB/s), 9:30-9:47]

5. Conclusions

The objectives of these benchmark tests were to evaluate the FIS Profile application on Red Hat Enterprise Linux (RHEL) running on commodity x86-64 processors; to use the information gained through measurement and observation to apply tuning and optimization; and to validate this technology stack as a viable, competitive solution.

The results indicate that, despite very limited tuning and a less than optimal storage subsystem for the database, FIS Profile was still able to achieve exceptional throughput for both online and scheduled transactions. The scheduled transaction posting tests, however, produced spurious results, indicating that some anomalous activity may have been occurring concurrently with the test. Further analysis and testing are warranted and planned as part of the continued scaling up of the account database.

Following this test, our short-term and intermediate-term objectives are to further optimize the existing components of the configuration, to scale the account volume and the real-time online and scheduled load beyond the levels of the current tests, and to apply additional and/or different tuning parameters in order to maintain the exceptional price/performance achieved in this set of tests. Looking beyond the current 25 million accounts, we anticipate much larger database configurations, with an ultimate goal of attaining optimal price/performance for 100 million accounts. We also anticipate that the optimized configuration will incorporate logical multi-site replication, with a highly available primary system replicating to a simulated disaster recovery system.

Finally, the out-of-the-box results attained in this set of tests, with 25 million accounts processed on a very modest system configuration, provide a very promising outlook for much larger scale tests.