Informatica Master Data Management Multi Domain Hub API: Performance and Scalability Diagnostics Checklist




© 2012 Informatica Corporation. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica Corporation.

Abstract

The number of users and nodes affects the performance of Informatica Master Data Management Multi Domain Hub. You can measure performance and scalability to ensure that the Hub is working efficiently. This article describes how to measure the performance and scalability of Informatica Master Data Management Multi Domain Hub, including Informatica Data Director (IDD) and the Services Integration Framework (SIF).

Supported Versions

Informatica Master Data Management XU SP2

Table of Contents

Abstract
Overview
Configuring Informatica Master Data Management Hub
Performance / Scalability Sample Test Process
Scalability Diagnostics Checklist
    Identify CPU Performance Bottlenecks
    Identify Network and Connectivity Issues
    Identify Database-Related Issues
    Perform Spot Checks

Overview

You can measure the performance and scalability of Informatica Master Data Management Multi Domain Hub, including Informatica Data Director (IDD) and the Services Integration Framework (SIF). Before you run the tests to measure performance and scalability, you must configure Informatica Master Data Management Hub. If you encounter issues during the tests, review the scalability diagnostics checklist.

Configuring Informatica Master Data Management Hub

The following parameters and configuration options are recommended when you perform performance and scalability tests on the Informatica Master Data Management Multi Domain Hub:

Parameter: Hub Server Logging
Recommended value: Set the level of all Hub Server logging to WARN or higher.
Description: Configure the level of Hub Server logging in the Log4J configuration file. For information about the location of the Log4J configuration file, see the Hub documentation.

Parameter: Hub ORS DB Logging
Recommended value: Turn off all logging.
Description: Configure Hub database logging in the Database Log Configuration tool. To access it, go to Enterprise Manager > ORS Databases > Database Log Configuration tool.

Parameter: ORS DB Connection Pool Configuration
Recommended value: Set the maximum pool size to 2 times the expected maximum number of concurrent users.
Description: For WebSphere and WebLogic, use the administration tool of the application server to configure the size. For JBoss, edit the data source XML file in the deploy or farm directory. The file name has the following syntax: siperian-<dbinstance>-<schema>-ds.xml. For example, if the maximum number of concurrent users is 100, set the following value in the siperian-orcl-cmx_ors-ds.xml file: <max-pool-size>200</max-pool-size>

Parameter: CMX_SYSTEM DB Connection Pool Configuration
Recommended value: Set the maximum pool size to 1.5 times the expected maximum number of concurrent users.
Description: For WebSphere and WebLogic, use the administration tool of the application server to configure the size. For JBoss, edit the siperian-mrm-ds.xml file in the deploy or farm directory. For example, if the maximum number of concurrent users is 100, set the following value: <max-pool-size>150</max-pool-size>

Parameter: App Server Java Memory Settings
Recommended value: Configure the following settings: -Xms1024m -Xmx1536m -XX:PermSize=256m -XX:MaxPermSize=512m
Description: For WebLogic or WebSphere, use the administration tool of the application server to configure the memory settings. For JBoss, configure the property in the run.conf file in the JBoss bin directory. Specific recommendations may vary based on the operating system and application server configuration. You can review the available memory size with the application server tools to ensure that the server has enough heap space. Excessive garbage collection and a small amount of available memory can lead to decreased performance. In modern JVMs, the JVM can use heuristics to determine the optimal values for these settings. For Java 6 VMs, do not modify these values; modify them only if you encounter an issue while monitoring performance.

Parameter: Application Server Logging and Tracing
Recommended value: Turn off any diagnostic logging or tracing.
Description: The tracing and logging configuration tools are specific to each J2EE application server.

Parameter: Hub Schema Write Locks
Recommended value: Verify that there are no active schema write locks issued from the Hub Console against the test ORS.
Description: Verify that the C_REPOS_SCHEMA_WRITE_LOCK table is empty. Schema write locks disable metadata caching in the Hub APIs, which significantly affects system performance.

Parameter: API Auditing
Recommended value: Disable any auditing that has been enabled.
Description: Configure Hub API auditing in the Audit Manager tool in the Hub Console.

Parameter: BDD Traffic Compression
Recommended value: Set this value to true to reduce the data volume passed between the client and server. The default is true.
Description: Set the following property in the cmxserver.properties file: cmx.bdd.server.traffic.compression_enabled=true

Parameter: BDD Lookup Data Cache
Recommended value: Set the value to the largest acceptable value. If lookup data is static, this number can be large.
Description: For information about setting the lookupcacheupdateperiod parameter, see the BDD Implementation Guide.

Parameter: BDD SAM Data Cache
Recommended value: Set the value to the largest acceptable value. If SAM data is static, this number can be large.
Description: For information about setting the samcacheupdateperiod parameter, see the BDD Implementation Guide.

Parameter: Oracle Instance Tuning
Recommended value: Verify that the init.ora parameters are in line with the MDM Hub recommendations.
Description: Contact Informatica Global Customer Support or professional services to get the recommended values for the Oracle configuration parameters.

Parameter: JGroups Configuration
Recommended value: For any release after 9.1, confirm the configuration of JGroups. This is especially important in clustered environments, where JGroups is used as a distributed cache to share information between cluster nodes.
Description: The configuration is stored in the following file: <INFAHOME>\hub\server\resources\jbossCache.xml. For more information, see the JBoss cache configuration documentation.

Performance / Scalability Sample Test Process

1. Run a single-thread (single-user) test scenario to establish baseline performance numbers for the test.
2. Run a slow ramp-up test, in which you increase the number of concurrent users over a period of time. Allow the entire test to execute for at least 30 minutes. Record the latency and the throughput for each time period.
3. Monitor the CPU usage on the machine that runs the Hub Server and on the client machine that generates the load. SilkPerformer, JMeter, SoapUI, and LoadRunner are examples of load-generating clients. If either machine approaches 80-100% CPU utilization, you may need to add machines to achieve the desired targets. There is no benefit in increasing concurrency if the target machines are oversubscribed.
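In practice you would run the ramp-up test with one of the load tools named above. As an illustration only, the following Python sketch shows the shape of such a harness: it raises the number of concurrent worker threads step by step and records per-call latency at each level. The fake_request function is a hypothetical stand-in for a real SIF or IDD call; a real test would issue the request over HTTP instead.

```python
import statistics
import threading
import time

def ramp_up_test(request_fn, max_users, requests_per_user, pause_between_steps=0.0):
    """Run request_fn from 1..max_users concurrent threads and record
    per-call latencies (in seconds) at each concurrency level."""
    results = {}
    for users in range(1, max_users + 1):
        latencies = []
        lock = threading.Lock()

        def worker():
            for _ in range(requests_per_user):
                start = time.perf_counter()
                request_fn()
                elapsed = time.perf_counter() - start
                with lock:
                    latencies.append(elapsed)

        # Start one thread per simulated user, then wait for all of them.
        threads = [threading.Thread(target=worker) for _ in range(users)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        results[users] = latencies
        time.sleep(pause_between_steps)
    return results

# Hypothetical stand-in for a real SIF/IDD request: sleep ~1 ms
# instead of calling the Hub Server.
def fake_request():
    time.sleep(0.001)

results = ramp_up_test(fake_request, max_users=3, requests_per_user=5)
for users in sorted(results):
    lat = results[users]
    print(users, "users:", round(statistics.mean(lat) * 1000, 2), "ms avg")
```

Comparing the average latency across concurrency levels gives the baseline-versus-load comparison that steps 1 and 2 above call for.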
Scalability Diagnostics Checklist

Use the following checklist to resolve issues during the performance and scalability test:

1. Identify CPU performance bottlenecks.
2. Identify network and connectivity issues.
3. Identify database-related issues.
4. Perform spot checks.

Identify CPU Performance Bottlenecks

1. Observe the CPU usage of the test driver during the test. (The test driver is the machine that simulates the user load.) Validate that the CPU usage is below 50%. If CPU usage is above 50%, you may need to drive the test from multiple machines.
2. Observe the CPU usage of the Hub Server during the test. Excessive CPU utilization (>80%) affects system performance.
3. Observe the CPU usage of the database server during the test. Excessive CPU utilization (>80%) affects system performance.
4. Share the observations with Informatica Global Customer Support to help diagnose the issues.
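On Linux hosts, the CPU observations in steps 1 through 3 can be automated by sampling /proc/stat at two points in time. The following sketch assumes the standard /proc/stat "cpu" line layout (user, nice, system, idle, iowait, ...); the two sample strings are synthetic values for illustration, not real measurements.

```python
def cpu_utilization(sample1, sample2):
    """Aggregate CPU utilization (%) between two /proc/stat 'cpu' lines.
    Fields after 'cpu' are: user nice system idle iowait irq softirq steal ...
    Time in idle + iowait counts as idle; everything else counts as busy."""
    def parse(line):
        fields = [int(x) for x in line.split()[1:]]
        idle = fields[3] + (fields[4] if len(fields) > 4 else 0)
        return idle, sum(fields)

    idle1, total1 = parse(sample1)
    idle2, total2 = parse(sample2)
    delta_total = total2 - total1
    if delta_total == 0:
        return 0.0
    return 100.0 * (1.0 - (idle2 - idle1) / delta_total)

# Synthetic samples: 1000 ticks elapsed, 100 of them idle -> 90% busy.
before = "cpu 1000 0 500 2000 100 0 0 0"
after  = "cpu 1500 0 900 2090 110 0 0 0"
print(round(cpu_utilization(before, after), 1))  # 90.0
```

On a real host you would read the first line of /proc/stat once, wait a few seconds, read it again, and flag sustained results above the 80% threshold from the checklist.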

Identify Network and Connectivity Issues

1. Ping the Hub Server from the test driver machine in an idle environment. A response latency greater than 10 ms in a local environment may indicate a network configuration issue.
2. Ping the Hub Server from the test driver machine in a non-idle environment. A response latency greater than 10 ms in a local environment may indicate a network configuration issue.
3. Ping the Hub Server while the performance tests are running. Deviations greater than 30% compared to an idle environment may indicate a network issue.
4. Share the observations with Informatica Global Customer Support to help diagnose the issues.

Identify Database-Related Issues

1. Enable Hub Server DB performance logging to identify possible connectivity and database server configuration issues. To enable logging, set the priority value in the Log4J configuration file as follows:

<category name="siperian.performance">
    <priority value="debug"/>
</category>

The default is OFF. Depending on the application server, the Log4J configuration file may be located in <INFAHOME>\server\resources or in the JBoss \config directory.

2. Run a single-thread (single-user) test. After the test runs, the log statements look similar to the following:

2011-11-29 23:09:39,054 DEBUG (Thread-21) connconn 1 25057589 - getconnection; datasourceid=main-cmx_ors, isusedbtx=true;
2011-11-29 23:09:39,054 DEBUG (Thread-21) connsacm 0 25057589 - setautocommit(false);
2011-11-29 23:09:39,056 DEBUG (Thread-21) stmtexec 1 25057589 - preparestatement( delete from c_...
2011-11-29 23:09:39,056 DEBUG (Thread-21) conncomm 0 25057589 - commit();
2011-11-29 23:09:39,056 DEBUG (Thread-21) stmtclos 2 25057589 - close(); total statement time
(Thread-21) stmtexec 1 25057589 - preparestatement( select rowid_search_result_state, table_name from c_repos_search_...
(Thread-21) stmtclos 1 25057589 - close(); total statement time
(Thread-21) conncomm 0 25057589 - commit();
(Thread-21) connclos 4 25057589 - close(); total connection time

The first number after stmtexec, connconn, or connclos indicates the elapsed time (in milliseconds) of the operation. For example, the elapsed time is 4 ms in the following connclos statement:

(Thread-21) connclos 4 25057589 - close(); total connection time

Average open and close connection times greater than 2-3 ms indicate connection pooling or database connectivity problems.

Note: On Wintel machines, the minimum clock resolution is about 15 ms. As a result, it is acceptable to occasionally see 15 ms for average open and close connection times.

A database configuration problem exists if an individually executed simple statement takes longer than 10-20 ms.

3. Collect the logs to share with Informatica Global Customer Support.
4. Run the Oracle performance (AWR) reports and review the results with the database administrator to identify and remediate Oracle performance bottlenecks.
5. Share the reports with Informatica Global Customer Support to help diagnose the issues.

Perform Spot Checks

1. Disk I/O on the application server. In a normal configuration, the Hub Server should not perform any significant disk I/O on the application server. To ensure that there is no excessive I/O, use operating system tools to observe disk I/O while the test is executing.
2. Database CPU utilization greater than 80% can cause very unpredictable and fluctuating performance results.

Authors

Dmitri Korablev
Vice President, Strategy and Planning

Michael Thomson
Senior Architect, Research and Development
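As a worked example of reading the DB performance log described in the database checklist above, the following Python sketch averages the elapsed-time field for connection open (connconn) and close (connclos) operations; averages above roughly 3 ms would suggest the pooling or connectivity problems the checklist warns about. The regular expression is an assumption inferred from the sample lines shown in this article, not an official log format.

```python
import re

# Assumed line shape, inferred from the sample log in this article:
#   ... (Thread-21) connconn 1 25057589 - getconnection; ...
#   ... (Thread-21) connclos 4 25057589 - close(); total connection time
LOG_RE = re.compile(r"\((Thread-\d+)\)\s+(conn\w+|stmt\w+)\s+(\d+)\s+(\d+)")

def average_times(log_lines, ops=("connconn", "connclos")):
    """Average the elapsed-time field (ms) for the given operation codes."""
    samples = {op: [] for op in ops}
    for line in log_lines:
        m = LOG_RE.search(line)
        if m and m.group(2) in samples:
            samples[m.group(2)].append(int(m.group(3)))
    return {op: (sum(v) / len(v) if v else None) for op, v in samples.items()}

sample = [
    "(Thread-21) connconn 1 25057589 - getconnection; datasourceid=main-cmx_ors",
    "(Thread-21) stmtexec 1 25057589 - preparestatement( delete from c_",
    "(Thread-21) connclos 4 25057589 - close(); total connection time",
]
print(average_times(sample))  # averages above ~3 ms suggest pooling problems
```

The same approach extends to the stmtexec times, where the article's 10-20 ms threshold for a simple statement applies.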