An Oracle White Paper, June 2010
Consolidating Oracle Siebel CRM Environments with High Availability on Sun SPARC Enterprise Servers
Contents

Executive Overview ... 1
Introduction ... 2
Key Solution Technologies ... 4
An Overview of Oracle's Siebel CRM Application Architecture ... 5
Workload Description ... 6
Business Transaction Types ... 8
Test Environments ... 9
  Phase 1 Test Environment ... 9
  Phase 2 Test Environment ... 10
Phase 1 Testing: Consolidating Tiers Using Containers and Domains ... 11
  Performance and Scalability Results with Oracle Solaris Containers ... 12
  Performance and Scalability Results with Oracle VM Server for SPARC ... 18
Phase 2 Testing: Implementing HA ... 22
  Configuring for HA Using Oracle Solaris Cluster Software ... 23
  Phase 2 Testing Scenarios ... 26
  Performance and Scalability Results with Oracle Solaris Cluster ... 26
  Failover Testing with Oracle Solaris Cluster ... 32
Best Practices and Recommendations ... 34
  Server/Operating System Optimizations ... 34
  I/O Best Practices ... 35
  Web Tier Best Practices ... 36
  Siebel Application Tier Best Practices ... 37
  Oracle Database Tier Best Practices ... 37
  Best Practices for High Availability Configurations ... 38
Sizing Guidelines ... 39
Baseline Configurations ... 41
  Small HA Configuration (up to 3,500 users) ... 41
  Medium HA Configuration (up to 7,000 users) ... 41
  Large HA Configuration (up to 14,000 users) ... 41
Conclusion ... 42
Appendix A: Phase 1 Configuration of Containers ... 43
  Web Server ... 43
  Application Server ... 44
  Database Server ... 44
Appendix B: Phase 1 Configuration of Oracle VM Server for SPARC ... 45
  Primary Domain ... 45
  Siebel Application Server Domain ... 45
  Siebel Web Server Domain ... 46
Appendix C: Phase 2 Configuration of Zone Clusters ... 46
  Web Server ... 47
  Gateway Server ... 49
  Application Server ... 51
  Database Server ... 52
About the Authors ... 55
Acknowledgements ... 55
References ... 56
Executive Overview

Founded on a service-oriented architecture, Oracle Siebel Customer Relationship Management (CRM) software allows businesses to build scalable, standards-based applications that can help to attract new business, increase customer loyalty, and improve profitability. As companies deliver richer, more comprehensive customer experiences through CRM tools, demand can scale rapidly, forcing datacenters to expand system resources quickly to meet increasing workloads. Datacenter resources can be scaled horizontally (with more servers added at each tier), vertically (by adding more powerful servers), or both. As servers are added at the Siebel Web, Gateway, Application, and Database tiers, a frequent result is server sprawl. Over time, this can have negative consequences: greater complexity, poor utilization, increased maintenance fees, and skyrocketing power and cooling costs. Consolidating tiers is one approach that can help to contain server sprawl and reduce costs.

Recognizing the need to grow efficiently while scaling Oracle Siebel CRM capabilities, Oracle created a proof-of-concept solution that consolidates the Web, Gateway, Application, and Database tiers on a single Sun SPARC Enterprise server from Oracle, limiting the number of physical machines needed to effectively deploy applications and improving the bottom line. As shown in testing exercises using a well-known Siebel CRM workload and virtualization technologies built into Sun SPARC Enterprise servers, the solution scales easily to accommodate user load, even with workloads of up to 14,000 users.

Because Oracle Siebel CRM applications support business profit centers, they often operate under stringent availability requirements and demanding service levels. For this reason, Oracle engineers conducted a second phase of proof-of-concept testing. In the second phase, software tiers
were again consolidated using built-in virtualization technologies, but this time in a clustered server configuration that provided high availability (HA). The HA tests demonstrated near-linear scalability while providing mission-critical levels of application availability.

Introduction

To safely and securely consolidate Siebel CRM application tiers, Sun SPARC Enterprise servers offer a choice of built-in, no-cost virtualization technologies:

- Oracle Solaris Containers. Containers are an integrated virtualization mechanism that can isolate application services within a single Oracle Solaris instance. Faults in one container have no impact on applications or service instances running in other containers.
- Oracle VM Server for SPARC (formerly known as Sun Logical Domains). Native to Sun CMT processors (such as UltraSPARC T2 Plus processors), this technology allows multiple tiers to be consolidated within isolated domains at no additional cost. Each domain runs an independent copy of Oracle Solaris, and there are no licensing fees for additional OS copies.

Using one or both of these virtualization technologies, Siebel CRM services in each tier can run in isolation, without impacting service execution in other tiers. System resources can be allocated and reassigned to each tier as needed. Compared to competitive, proprietary virtualization technologies, using Oracle Solaris Containers and/or Oracle VM Server for SPARC can provide significant cost savings when consolidating a Siebel CRM infrastructure. In addition, Oracle guarantees binary compatibility for applications running under Oracle Solaris, whether the OS runs natively as the host OS or as a guest OS in a virtualized environment.
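As an illustration of the container approach, a Solaris zone for one Siebel tier can be defined with zonecfg and booted with zoneadm. The sketch below is hypothetical (the zone name, zonepath, interface, and address are invented, not taken from the tested configuration, which is documented in Appendix A):

```shell
# Hypothetical zone definition for an isolated Siebel Web tier
# (zone name, zonepath, interface, and address are examples only).
zonecfg -z siebweb <<'EOF'
create
set zonepath=/zones/siebweb
set autoboot=true
add net
set physical=e1000g0
set address=192.168.1.10/24
end
EOF

zoneadm -z siebweb install   # install the zone from the global zone
zoneadm -z siebweb boot      # boot it; faults stay contained in the zone
```

Because each zone runs within the single Oracle Solaris instance, this isolation comes without the overhead of a separate OS copy per tier.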
In two phases of scalability testing, Oracle engineers configured different Siebel CRM tiers in virtualized environments on Sun SPARC Enterprise servers. In the first phase, engineers consolidated tiers on a single server, configuring each Siebel CRM tier in a separate container or domain. This initial phase compared the scalability of the two virtualization technologies. In the second phase, engineers implemented Oracle Solaris Cluster (which supports both containers and domains) on two Sun SPARC Enterprise servers to simulate mission-critical Siebel CRM application workloads in a consolidated yet resilient virtualized environment.

For both phases of testing, the test workload was extracted from the well-established Siebel Platform Sizing and Performance Program (PSPP) benchmark, which simulates real-world environments using some of the most popular Siebel CRM modules. Engineers examined system resource utilization, response time, and throughput metrics as they scaled the number of users under typical application workloads. This paper presents the test results and documents best practices that can help system architects more effectively size and optimize the Siebel CRM application on Sun SPARC Enterprise servers. The test results demonstrate how no-cost virtualization technologies in Sun SPARC Enterprise servers, combined with Oracle Solaris Cluster software, can optimize scalability while reducing datacenter complexity, lowering operating costs, and delivering high availability for business-critical CRM services.
Key Solution Technologies

The tested solution was based on Oracle's massively scalable Sun SPARC Enterprise servers, the Oracle Solaris 10 operating system, and Oracle's open storage technologies, as shown in Figure 1. Built-in, no-cost virtualization technologies (Oracle Solaris Containers or Oracle VM Server for SPARC) reside at the heart of the solution architecture and enable a flexible infrastructure for consolidation. Oracle Solaris Cluster (and often third-party management tools) are typically added to enhance business continuity and simplify resource allocation tasks for virtualized environments.

Figure 1. Using no-cost virtualization technologies, the proof-of-concept combined Siebel CRM tiers on a Sun SPARC Enterprise T5440 server.

In the first phase of testing, Oracle engineers constructed a proof-of-concept solution based on a single Sun SPARC Enterprise T5440 server (see Figure 1), which features up to four UltraSPARC T2 Plus processors with up to 32 cores and up to 256 concurrently executing threads. With such advanced thread density, a single Sun SPARC Enterprise T5440 server is a powerhouse for consolidating a Siebel infrastructure. To demonstrate this point, Oracle engineers ran a series of scalability tests using both container and domain virtualization technologies. As the test results show, the consolidated solution on a single Sun SPARC Enterprise T5440 server exhibited good scalability, providing reasonable response times and high throughput rates for simulated populations of up to 14,000 users.

In Sun SPARC Enterprise servers, Chip Multi-Threading (CMT) technology in UltraSPARC T2 Plus processors enables effective scalability. CMT technology applies the available transistor budget to achieve up to eight cores within a single processor. Each core can switch between threads on a clock cycle, helping to keep the processor pipeline active while lowering power consumption and heat dissipation.
Because of this advanced thread density, the Sun SPARC Enterprise T5440 server scales well, providing headroom to support growth while minimizing power use. In the second phase of testing (see Figure 2), Oracle engineers used a clustered configuration of two Sun SPARC Enterprise T5240 servers. Each Sun SPARC Enterprise T5240 server houses two UltraSPARC T2 Plus processors for a maximum of 128 threads per server. In an economical clustered configuration like that used in the Phase 2 testing, the two servers support a total of 256 threads. The clustered
configuration also demonstrated good scalability, reasonable response times, and high levels of throughput, while enabling highly available Siebel CRM application services.

Figure 2. The second phase of testing implemented Oracle Solaris Cluster on two Sun SPARC Enterprise T5240 servers in a consolidated, clustered HA configuration.

An Overview of Oracle's Siebel CRM Application Architecture

The Oracle Siebel CRM application suite includes the following tiers (see Figure 3):

- Web Clients. Web Clients provide user interface functionality and can encompass a variety of types (Siebel Web Client, Siebel Wireless Client, Siebel Mobile Web Client, Siebel Handheld Client, etc.). In both phases of testing, Mercury LoadRunner version 8.1 simulated the load generated by the different-sized end-user populations.
- Web Server. This tier processes requests from Web Clients and interfaces to the Gateway/Application layer. In the scalability testing performed, Sun engineers installed the Siebel Web Server Extension and configured the Oracle iPlanet Web Server (formerly Sun Java System Web Server) at this tier.
- Gateway/Application Server. This tier provides services on behalf of Siebel Web Clients. It consists of two sub-layers: the Siebel Enterprise Server and the Siebel Gateway Server.
- Database Server. While the Siebel File System stores data and physical files used by Siebel Clients and the Siebel Enterprise Server, the Siebel Database Server stores Siebel CRM database tables, indexes, and seed data.
In a multiple-server deployment, the Siebel Enterprise Server includes a logical grouping of Siebel Servers. (In a small configuration, however, the Siebel Enterprise Server might contain a single Siebel Server.) The Siebel Gateway coordinates the Siebel Enterprise Server and its set of Siebel Servers. It also provides a persistent backing store of Siebel Enterprise Server configuration information. Each Siebel Server is a flexible and scalable application server that supports a variety of services, such as data integration, workflow, data replication, and synchronization services for mobile clients. The Siebel Server also includes logic and infrastructure for running different CRM modules, as well as providing connectivity to the Database Server. The Siebel Server consists of several multithreaded processes commonly known as Siebel Object Managers.

Figure 3. This high-level overview of the Oracle Siebel CRM application architecture shows the tiered software architecture.

To provide high availability to all three tiers of Oracle Siebel CRM 8, Oracle Solaris Cluster software is deployed to support mission-critical application availability (see Configuring for HA Using Oracle Solaris Cluster Software, page 23). In the second phase of testing, engineers analyzed performance and scalability with Siebel CRM workloads in an HA configuration, using clustered zones to support each software tier.

Workload Description

CRM systems often require customization, typically more frequently than other business applications. Common changes include adding or removing certain application modules, modifying the function of existing modules, or integrating the CRM application with other business applications and processes. While application performance varies according to the particulars of any deployment, testing
a configuration's scalability with a well-defined workload helps to provide a useful starting point for defining appropriate configurations and sizing. For the purposes of scalability testing, engineers used a workload extracted from the well-known Siebel Platform Sizing and Performance Program (PSPP) workload. This workload is based on scenarios derived from large Siebel customers and replicates the real-world, concurrent, thin-client requirements of typical end users. The PSPP 8.0 workload is based on user populations who repeatedly perform the following types of tasks and functions:

- Siebel Financial Services Call Center. The Siebel Financial Services Call Center software provides a comprehensive solution for sales and service, helping customer service and telesales representatives to provide world-class customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling opportunities.
- Siebel Partner Relationship Management. Representing eChannel users in partner organizations, the Siebel Partner Relationship Management application enables organizations to effectively and strategically manage relationships with partners, distributors, resellers, agents, brokers, and dealers.
- Siebel Workflow. This business process management engine automates user interaction, business processes, and integration. A graphical drag-and-drop user interface allows simple administration and customization. Administrators can add custom or pre-defined business services and specify logical branching, updates, inserts, and subprocesses to create a workflow process tailored to specific business requirements.
- Siebel eScript. eScript is a programming language that application developers use to write simple scripts to extend Siebel applications. The JavaScript programming language (a popular scripting language used extensively to deploy Web sites) is the core language underlying Siebel eScript.
- Siebel Enterprise Application Integration (EAI).
EAI software allows organizations to integrate legacy applications with Siebel CRM applications and to integrate Web Service support. This capability enables organizations to extend the functionality of existing applications to provide up-to-the-minute information through standard Web portals and other Web Service-enabled environments.

In Phase 1 of the testing, the PSPP workload simulates the following task mix for the functions listed above:

- Financial Services Call Center: 30% of active concurrent users
- Partner Relationship Management, eScript, and Workflow: 10% of active concurrent users
- Enterprise Application Integration with Web Services: 60% of active concurrent users

In Phase 2, the test configuration used a later version of the Oracle Siebel CRM software instead of version 8.0. Because of changes to the PSPP workload (for example, Partner Relationship Management is not included in the later version of the PSPP), the task mix for Phase 2 changed as follows:

- Financial Services Call Center: 40% of active concurrent users
- Enterprise Application Integration with Web Services: 60% of active concurrent users
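As a quick sanity check, the Phase 1 task mix translates into the following per-function concurrent-user counts for each tested population (a simple sketch, not part of the original test harness):

```shell
# Per-function concurrent-user counts implied by the Phase 1 task mix
# (30% Call Center, 10% PRM/eScript/Workflow, 60% EAI).
for total in 3500 7000 14000; do
  awk -v t="$total" 'BEGIN {
    printf "%5d users: Call Center=%d, PRM/eScript/Workflow=%d, EAI=%d\n",
           t, t * 30 / 100, t * 10 / 100, t * 60 / 100
  }'
done
```

For example, at 14,000 users the mix works out to 4,200 Call Center users, 1,400 PRM/eScript/Workflow users, and 8,400 EAI users.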
Business Transaction Types

Based on the Siebel PSPP benchmark workload described above, Mercury LoadRunner 8.1 generated loads to simulate different user populations simultaneously executing complex business transactions. Between each user operation, think time (a synthetic delay simulating the typical pause between a user's actions) averaged approximately 15 seconds. The following paragraphs characterize the core business transaction types used in the testing.

Siebel Financial Services Call Center: Incoming Call Creates Opportunity, Quote, and Order

This transaction simulates the pattern of activity in a typical call center transaction:

- Create a new contact
- Create a new Opportunity for that contact
- Add two products to the Opportunity
- Navigate to the Opportunities Quote View
- Click the AutoQuote button to generate a quote
- Enter the Quote Name and Price List
- Drill down on the quote name to go to the Quote Line Items View and specify a discount
- Click the Reprice All button
- Update the Opportunity
- Navigate to the Quotes Order View
- Click the AutoOrder button to automatically generate the order
- Navigate back to the Opportunity

Siebel Partner Relationship Management, eScript, and Workflow: Sales and Service

This transaction simulates the steps that occur when entering a partner service request:

- The partner creates a new service request with appropriate detail
- The service request is automatically assigned
- Saving the service request invokes scripting that brings the user to the appropriate opportunity screen
- A new opportunity with detail is created and saved
- Saving the opportunity invokes scripting that brings the user back to the service request screen

Web Services: Find, Submit a New Service Request, and Update the Service Request

This transaction simulates a Web Service that interfaces to a hypothetical legacy application to find or create a service request. The Web Service acts as a delivery mechanism for integrating heterogeneous applications through Internet protocols.
A Web Service can be specified using Web Services Description Language (WSDL) and is then transported via Simple Object Access Protocol (SOAP), a transport protocol based on XML. Since the PSPP benchmark suite has no UI presentation layer, the load generator simulates a Java Platform, Enterprise Edition (Java EE) Web application to send a Web Service request to a Siebel Server (EAIObjMgr_enu) to invoke Siebel business services. The Siebel Web Services framework generates WSDL files to describe the Web Services hosted by the Siebel application. The framework can also call external Web Services by importing a WSDL document as an external Web Service (using the WSDL import wizard in Siebel Tools). Each Web Service exposes multiple methods, such as Query Service Request, Create Service Request, and Update Service Request.

Web Service authentication is done through a session token. The ServerDetermine session type is used, and a session token is maintained to avoid a login process for each request. To use the ServerDetermine session type, a login Web Service call (SessionAccessPing) retrieves the session token before other Web Services are called. At the end of the transaction, a logout call (SessionAccessPing) invalidates the session token.

Test Environments

As noted previously, there were two phases of testing: one to determine scalability on a single Sun SPARC Enterprise T5440 server and another using a clustered configuration of two Sun SPARC Enterprise T5240 servers. These test environments are not representative of typical production deployments but are simplified proof-of-concept configurations designed for test and development.

Phase 1 Test Environment

Figure 4 depicts the Phase 1 test environment.

Figure 4. The Phase 1 test environment consolidated Siebel CRM tiers on a single Sun SPARC Enterprise T5440 server.
The Phase 1 test environment consisted of the following hardware and software components:

Hardware

- One Sun SPARC Enterprise T5440 server with four 1.4 GHz UltraSPARC T2 Plus processors and 128 GB of RAM
- Two Oracle Sun Storage J4200 arrays (with SAS drives) or two Oracle StorageTek 2540 arrays
- Nine Sun Fire X4200 servers from Oracle for load generation

Software

- Oracle Solaris 10 5/08 (s10s_u5wos_10 SPARC) and Oracle Solaris 10 10/08 (s10s_u5wos_10 SPARC)
- Oracle Database 10g R2
- Siebel CRM Release 8.0 Industry Applications
- Oracle iPlanet Web Server (formerly Sun Java System Web Server) 6.1 SP10

Note that the testing was performed once with two StorageTek 2540 arrays and once with two Sun Storage J4200 arrays. The workload imposed such a low amount of I/O that the difference in results was negligible.

Phase 2 Test Environment

Figure 5 shows the Phase 2 test environment.

Figure 5. The HA test environment implemented Siebel CRM tiers on two clustered Sun SPARC Enterprise T5240 servers.
The second phase of testing used the following hardware and software components:

Hardware

- Two Sun SPARC Enterprise T5240 servers, each with two UltraSPARC T2 Plus processors and 128 GB of RAM
- Two Oracle Sun Storage 6140 arrays (with SAS drives)
- Four Sun Fire X2270 servers from Oracle for load generation

Software

- Oracle Solaris 10 u8 SPARC
- Oracle Database 11g R2
- Siebel CRM Industry Applications
- Oracle iPlanet Web Server (formerly Sun Java System Web Server) 7.0
- Oracle Solaris Cluster 3.2u3

Phase 1 Testing: Consolidating Tiers Using Containers and Domains

In the first phase of testing, engineers executed three test scenarios, with 3,500, 7,000, and 14,000 active users respectively. Table 1 shows the Siebel CRM server configuration for the three user population scenarios.

TABLE 1. CONFIGURATION OF SERVICES FOR EACH TEST SCENARIO
NUMBER OF CONCURRENT USERS   WEB SERVERS   SIEBEL SERVERS   SIEBEL OBJECT MANAGERS   ORACLE DATABASE INSTANCES
3,500                        …             …                …                        …
7,000                        …             …                …                        …
14,000                       …             …                …                        …

During the execution of each scenario, data was collected from the following sources:

- UNIX performance metrics
- LoadRunner (the workload generator software)
- Oracle Automatic Workload Repository (AWR)
- A power measurement tool
Engineers repeated the same testing scenarios on a single server using different built-in, no-cost virtualization technologies: Oracle Solaris Containers and Oracle VM Server for SPARC (previously known as Sun Logical Domains). The following pages include Phase 1 testing results for configurations using containers (see page 12) and domains (see page 18).

Performance and Scalability Results with Oracle Solaris Containers

In Phase 1 testing on the Sun SPARC Enterprise T5440 server, Oracle Solaris 10 was first configured with three containers (zones) in addition to the global zone. Each zone was used to isolate a different Siebel CRM tier: Web, Gateway/Application, or Database. System resources were dedicated to each tier as indicated in Table 2 (for more information on resource allocation for zones, see Sizing Guidelines, page 39, and Appendix A, Configuration of Containers, page 43).

TABLE 2. RESOURCES ALLOCATED TO EACH TIER AND CONTAINER
TIER AND CONTAINER           VCPUS [1]    MEMORY
Web tier                     22 vcpus     8 GB
Gateway/Application tier     196 vcpus    88 GB
Database tier                38 vcpus     32 GB

[1] A vcpu (virtual CPU) correlates to a processing thread. Since the Sun SPARC Enterprise T5440 server has four UltraSPARC T2 Plus processors with 8 cores per processor and 8 threads per core, there is a maximum of 256 vcpus per system.

The following pages summarize test results for user populations of 3,500, 7,000, and 14,000 active concurrent users with Oracle Solaris Containers, including these metrics:

- CPU utilization (as a percentage)
- Memory utilization (in GB)
- Business transaction throughput (in transactions per hour)
- Average transaction response time (in seconds)
- Power consumption
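The Table 2 allocations map naturally onto zone resource controls. A hypothetical fragment for the Web-tier zone is shown below (the zone name is an example; the configurations actually used appear in Appendix A):

```shell
# Hypothetical resource controls matching the Web-tier row of Table 2:
# 22 dedicated vCPUs and an 8 GB physical memory cap.
zonecfg -z siebweb <<'EOF'
add dedicated-cpu
set ncpus=22
end
add capped-memory
set physical=8g
end
EOF
```

With dedicated-cpu, the 22 threads are removed from the default pool and reserved for this zone, so a CPU-hungry tier cannot steal cycles from its neighbors.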
CPU Utilization (Containers)

Figure 6 shows CPU utilization for the Siebel CRM Web, Gateway/Application, and Database tiers in separate containers. Table 3 gives the CPU utilization percentage for each tier under each user population load. As shown, additional CPU processing capacity is available, especially in the 3,500- and 7,000-user scenarios. For these user populations in an actual deployment, a single Sun SPARC Enterprise T5440 server could potentially support additional applications using this excess processing capacity, or a smaller server (such as Oracle's Sun SPARC Enterprise T5240 server) could be used.

Figure 6. CPU utilization percentage is shown for each tested user population.

TABLE 3. CPU UTILIZATION (%)
SIEBEL CRM TIER              3,500 USERS   7,000 USERS   14,000 USERS
Web Server                   …             …             …
Gateway/Application Server   …             …             …
Database Server              …             …             …
Memory Utilization (Containers)

Figure 7 shows memory utilization for the Siebel CRM tiers running in different containers. Table 4 lists the corresponding utilization in gigabytes. As the data and graph illustrate, in all three population scenarios memory utilization remains low, which indicates that more than adequate memory resources are configured.

Figure 7. Percentage of memory utilization is shown for each tested user population.

TABLE 4. MEMORY UTILIZATION (GB)
SIEBEL CRM TIER              3,500 USERS   7,000 USERS   14,000 USERS
Web Server                   …             …             …
Gateway/Application Server   …             …             …
Database Server              …             …             …
Business Transaction Throughput (Containers)

Figure 8 shows the number of business transactions per hour for the three transaction types under each user population load. Table 5 lists the throughput rates. As the data indicates, as the user population doubles from 3,500 to 7,000 to 14,000 users, throughput increases almost linearly.

Figure 8. Business transaction throughput is shown for each user population.

TABLE 5. TRANSACTION THROUGHPUT (TRANSACTIONS/HOUR)
BUSINESS TRANSACTION TYPE         3,500 USERS   7,000 USERS   14,000 USERS
Financial Services Call Center    10,106        19,866        39,715
Partner Relationship Management   8,306         16,645        33,105
EAI Web Services                  31,478        62,845        …

Average Transaction Response Time (Containers)

Figure 9 depicts the average transaction response time for the three transaction types under each user population using containers. Table 6 lists the average response time in seconds for each transaction type. For purposes of the testing exercise, response times are measured at the Web server instead of at the end user. (Response times at the end user depend on a number of other variables, such as network latency, the bandwidth between Web server and browser, and the time for content rendering by the browser.)
Figure 9. Average transaction response time is shown for different workload types and each user population.

TABLE 6. AVERAGE TRANSACTION RESPONSE TIME (SECONDS)
BUSINESS TRANSACTION TYPE         3,500 USERS   7,000 USERS   14,000 USERS
Financial Services Call Center    …             …             …
Partner Relationship Management   …             …             …
EAI Web Services                  …             …             …

Transaction Throughput and Response Time (Containers)

Performance and scalability are inextricably linked, so it is important to examine throughput and response time metrics together when analyzing application performance and configuration scalability. As application load increases, response time must remain within acceptable bounds. As a rule of thumb, if throughput increases linearly as the number of concurrent users increases, the increase in response times should also remain within an acceptable limit. Figure 10 combines transaction throughput and response time for the three transaction types and user population loads. Table 7 lists the corresponding data values. As the data indicates, increases in throughput remain almost linear as user load increases, and response times remain within reasonable, sub-second bounds.
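The near-linear claim can be checked directly from the Financial Services Call Center throughput figures (10,106; 19,866; and 39,715 transactions per hour): each doubling of the user population should roughly double throughput.

```shell
# Scaling ratios for Financial Services Call Center throughput.
awk 'BEGIN {
  t35 = 10106; t70 = 19866; t140 = 39715
  printf "7,000 vs 3,500 users:  %.2fx\n", t70 / t35
  printf "14,000 vs 7,000 users: %.2fx\n", t140 / t70
}'
```

Ratios of roughly 1.97x and 2.00x at each doubling confirm the near-linear scaling described above.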
Figure 10. Transaction throughput and response times are shown for different workload types and each user population.

TABLE 7. TRANSACTION THROUGHPUT (TPH, TRANSACTIONS PER HOUR) AND RESPONSE TIME (RT, IN SECONDS)
BUSINESS TRANSACTION TYPE               3,500 USERS   7,000 USERS   14,000 USERS
Financial Services Call Center (TPH)    10,106        19,866        39,715
Partner Relationship Management (TPH)   8,306         16,645        33,105
EAI Web Services (TPH)                  31,478        62,845        …
Financial Services Call Center (RT)     …             …             …
Partner Relationship Management (RT)    …             …             …
EAI Web Services (RT)                   …             …             …

Power Consumption

During the 14,000-concurrent-user test at steady state, the Sun SPARC Enterprise T5440 server consumed an average of 1,276 watts. This translates to roughly one watt for every 11 users. Given that the Sun SPARC Enterprise T5440 server occupies a total of 4 rack units, the server supports about 3,500 users per rack unit.
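The per-user figures follow directly from the measured numbers; the sketch below reproduces the arithmetic:

```shell
# Power and density arithmetic from the 14,000-user steady-state test:
# 1,276 W average draw, 4 rack units for the T5440 chassis.
awk 'BEGIN {
  users = 14000; watts = 1276; rack_units = 4
  printf "users per watt:      %.1f\n", users / watts
  printf "users per rack unit: %d\n",   users / rack_units
}'
```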
Performance and Scalability Results with Oracle VM Server for SPARC

Like Oracle Solaris Containers, Oracle VM Server for SPARC (formerly known as Logical Domains) allows multiple physical servers to be consolidated into isolated domains on a single server. (For more information about this technology, see the paper "Oracle VM Server for SPARC: Enabling a Flexible, Efficient IT Infrastructure.") In the Phase 1 testing, engineers repeated the testing scenarios on a single server using domains instead of containers. Each domain was configured with the same system resources that had been configured for each container (resource allocations are summarized in Table 8 and presented in Appendix B, Configuration of Oracle VM Server for SPARC, page 45).

TABLE 8. DOMAIN CONFIGURATIONS
TIER AND DOMAIN              VCPUS        MEMORY
Web tier                     22 vcpus     8 GB
Gateway/Application tier     196 vcpus    87.5 GB
Database tier                38 vcpus     32 GB

While the Web and Gateway/Application tiers resided in separate guest domains, the Database tier resided in the primary domain. If the Database tier had instead been deployed in a guest domain, a minimum of one virtual CPU (vcpu) and 512 MB of memory should have been allocated to the primary domain. Results for testing Siebel CRM on a single server with domains and user populations of 3,500, 7,000, and 14,000 users are shown in Figure 11 through Figure 14. Table 9 through Table 12 list the corresponding data values. The results indicate that, depending on the workload, some additional resource utilization can be needed to run Siebel CRM applications in domains rather than in Oracle Solaris Containers.

CPU Utilization (Domains vs. Containers)

Figure 11 shows CPU utilization for the Siebel CRM Web, Gateway/Application, and Database tiers in separate domains. Table 9 gives the CPU utilization percentage for each tier under populations of 3,500, 7,000, and 14,000 users.
As the data indicates, additional CPU processing capacity is available and tracks closely with utilization under Oracle Solaris Containers, with the exception of the Web tier under 14,000 users. Excess processing capacity on a single Sun SPARC Enterprise T5440 server can potentially support other applications, or a smaller server such as the Sun SPARC Enterprise T5240 server could be used.
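The domain layout in Table 8 could be expressed with the Logical Domains manager along these lines; the domain name below is hypothetical, and the configuration actually used is documented in Appendix B:

```shell
# Hypothetical guest-domain definition matching the Web-tier row of Table 8.
ldm add-domain webldom
ldm set-vcpu 22 webldom      # 22 hardware threads
ldm set-memory 8G webldom    # 8 GB of RAM
ldm bind-domain webldom
ldm start-domain webldom
```

Because each domain boots its own copy of Oracle Solaris, this layout trades slightly higher overhead for stronger isolation than zones provide.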
Figure 11. CPU utilization (%) results compare domains and containers for each tier and user population.

TABLE 9. CPU UTILIZATION (%) COMPARING DOMAINS AND CONTAINERS
SIEBEL CRM TIER              3,500 (CONTAINER)   3,500 (DOMAIN)   7,000 (CONTAINER)   7,000 (DOMAIN)   14,000 (CONTAINER)   14,000 (DOMAIN)
Web Server                   …                   …                …                   …                …                    …
Gateway/Application Server   …                   …                …                   …                …                    …
Database Server              …                   …                …                   …                …                    …

Memory Utilization (Domains vs. Containers)

Figure 12 shows memory utilization with each tier running in a different domain, and Table 10 lists the amount of memory in gigabytes. As shown, memory utilization with domains is comparable to that with containers. Overall, the testing shows the system is configured with adequate memory resources.
Figure 12. Memory utilization (GB) results compare domains and containers for different tiers and user populations.

TABLE 10. MEMORY UTILIZATION (GB) COMPARING DOMAINS AND CONTAINERS
SIEBEL CRM TIER              3,500 (CONTAINER)   3,500 (DOMAIN)   7,000 (CONTAINER)   7,000 (DOMAIN)   14,000 (CONTAINER)   14,000 (DOMAIN)
Web Server                   …                   …                …                   …                …                    …
Gateway/Application Server   …                   …                …                   …                …                    …
Database Server              …                   …                …                   …                …                    …

Throughput (Domains vs. Containers)

Figure 13 illustrates throughput using domains in comparison to containers. Table 11 lists the number of business transactions per hour for the three transaction types under population loads of 3,500, 7,000, and 14,000 users using either containers or domains. With either built-in, no-cost virtualization technology, throughput increases almost linearly as the user population increases.
Figure 13. Throughput results compare domains and containers for different workloads and user populations.

TABLE 11. THROUGHPUT USING DOMAINS AND SOLARIS CONTAINERS (TRANSACTIONS/HOUR)
Columns: 3,500 USERS (CONTAINER), 3,500 USERS (DOMAIN), 7,000 USERS (CONTAINER), 7,000 USERS (DOMAIN), 14,000 USERS (CONTAINER), 14,000 USERS (DOMAIN)
Financial Call Center Services: 10,106; 10,091; 19,866; 20,131; 39,715; 39,004
Partner Relationship Management: 8,306; 8,325; 16,645; 16,633; 33,105; 33,076
EAI Web Services: 31,478; 31,538; 62,845; 62, , ,219

Response Times (Domains vs. Containers)

Figure 14 depicts the average transaction response time for the three transaction types and compares response times with domains and containers. Table 12 lists the average response time in seconds. With either built-in, no-cost virtualization technology, response times remained in the sub-second range.
Figure 14. Response time results compare domains and containers for various workloads and populations.

TABLE 12. RESPONSE TIMES USING DOMAINS AND CONTAINERS (SECONDS)
Columns: 3,500 USERS (CONTAINER), 3,500 USERS (DOMAIN), 7,000 USERS (CONTAINER), 7,000 USERS (DOMAIN), 14,000 USERS (CONTAINER), 14,000 USERS (DOMAIN)
Financial Call Center Services
Partner Relationship Management
EAI Web Services

Phase 2 Testing: Implementing HA

Highly available (HA) clusters provide nearly continuous access to data and applications by keeping systems running through failures that would normally bring down a single server. In mission-critical clustered systems, no single failure (whether hardware, software, or network) can cause the cluster to fail. Recognizing the need to keep business-critical Siebel CRM applications up and running (and to support disaster planning scenarios), Oracle conducted a second phase of testing using a clustered HA configuration for Siebel CRM workloads. Oracle's clustering products, in particular Oracle Solaris Cluster software, enable highly available solutions that can meet stringent business continuity requirements for Siebel CRM deployments.
Configuring for HA Using Oracle Solaris Cluster Software

A cluster is two or more servers (or nodes) that work together as a single, continuously available system to provide applications, system resources, and data to users. Each cluster node is a fully functional standalone system. In a clustered environment, however, an interconnect bridges the nodes, which work together as a single entity to provide increased availability and performance. The interconnect carries important cluster information (data as well as a heartbeat) that allows cluster nodes to monitor the health of the other nodes.

High availability using clustered systems is achieved through a combination of hardware and software. Oracle Solaris Cluster software enables business continuity and global disaster recovery solutions to meet evolving datacenter needs. In a nutshell, the clustering software:

Makes use of proven availability and virtualization features in Oracle Solaris 10 and in UltraSPARC processor-based systems, including those in Sun SPARC Enterprise servers

Supports an industry-leading portfolio of commercial applications, including the Oracle RDBMS, Oracle Siebel CRM, and Web server technologies

Is certified with a broad range of storage arrays and SPARC and x64/x86 platforms

The most recent release of Oracle Solaris Cluster software implements high availability for consolidated environments that use container or domain virtualization technologies, such as the Siebel CRM proof-of-concept solution described in this paper. Oracle Solaris Cluster software supports Oracle Solaris Containers for fault isolation, security isolation, and resource management. Oracle Solaris Cluster can also help to protect virtualized environments that use Oracle VM Server for SPARC domains, lowering risk for servers that provide multiple application services.
When consolidating Siebel CRM tiers in this way, Oracle Solaris Cluster provides high availability agents to monitor components running in the different virtualized environments (see Table 13). Available Oracle Solaris Cluster agents include software to support services such as the Oracle RDBMS, Siebel services, NFS, DNS, the Oracle iPlanet Web Server, the Apache Web Server, and so forth. Oracle Solaris Cluster software provides configuration files and management methods to start, stop, and monitor these application services.

TABLE 13. ORACLE SOLARIS CLUSTER AGENTS
SOLUTION COMPONENT / PROTECTED BY
Web Server: Oracle Solaris Cluster HA for Oracle iPlanet Web Server
Siebel Gateway: Oracle Solaris Cluster HA for Siebel (resource type: SUNW.sblgtwy)
Siebel Server: Oracle Solaris Cluster HA for Siebel (resource type: SUNW.sblsrvr)
Oracle Database: Oracle Solaris Cluster HA for Oracle Database
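As a sketch of how the Table 13 agents are put in place, the following commands register the Siebel resource types and create a failover resource group for the Siebel Gateway. The resource group, resource, logical hostname, and path names are placeholders, not values from the tested configuration:

```shell
# Register the Siebel agent resource types with the cluster
clresourcetype register SUNW.sblgtwy SUNW.sblsrvr

# Create a failover resource group and a logical hostname for the Gateway
clresourcegroup create siebel-gtwy-rg
clreslogicalhostname create -g siebel-gtwy-rg gtwy-lh

# Create the Gateway resource (Confdir_list points at the Gateway install)
clresource create -g siebel-gtwy-rg -t SUNW.sblgtwy \
    -p Confdir_list=/siebel/gtwysrvr gtwy-rs

# Bring the resource group online under cluster management
clresourcegroup online -M siebel-gtwy-rg
```

The Siebel Server, Web server, and database resources are created analogously with their respective resource types.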
Figure 15 depicts the HA proof-of-concept configuration used as the basis of the Phase 2 testing. The HA configuration uses Oracle Solaris Cluster's Zone Cluster feature to consolidate the entire solution stack on two physical machines by deploying the Web server, Gateway, Application, and Database tiers in four separate virtual clusters.

Figure 15. Oracle Solaris Cluster can help to deliver highly available Siebel CRM services.

The environment is designed for failover: the Web server and Database are deployed on one machine, and the Gateway and Siebel Servers are deployed on the other, distributing the workload across the two machines. If one machine fails, all services are hosted on the surviving machine. When the failed machine is restored, Oracle Solaris Cluster can automatically restore application distribution across the two machines, or an operator can do so manually. This HA configuration is intended to retain operational capability during any single failure, including hardware faults, with as little user impact as possible. As a result, server optimization is biased toward maximum concurrent user performance, with sufficient computing power kept in reserve to facilitate a smooth transition to a single server with full operational capability.

Using the GUI management tool shown in Figure 16, each virtual cluster is assigned appropriate system resources, and each environment operates independently of the others. Appendix C (see page 46) includes configuration information for the zone clusters. Note that the proof-of-concept
configuration, while useful for purposes of this testing, is not necessarily typical of a production Siebel CRM environment.

Figure 16. Oracle's Sun Cluster Manager is used to configure and monitor clustered resources for each zone cluster.

In conjunction with highly reliable solution components (such as Sun SPARC Enterprise servers, Sun Storage and StorageTek products, and Oracle Solaris), Oracle Solaris Cluster helps to construct HA solutions that can deliver reliable and resilient Siebel CRM application services. Figure 17 illustrates a large-scale deployment environment in which Gateway and Database services are clustered, and redundant Web and Siebel Servers are deployed, to achieve high levels of availability.
Figure 17. A typical large-scale deployment of clustered servers creates a reliable environment for Oracle Siebel CRM services.

Phase 2 Testing Scenarios

In the second phase of testing, engineers executed three test scenarios: one each with 2,500, 5,000, and 7,000 active users, using an HA configuration and clustered zones defined on the two Sun SPARC Enterprise T5240 servers. Table 14 shows the Siebel CRM server configurations for the three user population scenarios.

TABLE 14. CONFIGURATION OF SERVICES FOR HA TESTING
Columns: NUMBER OF CONCURRENT USERS, NUMBER OF WEB SERVERS, NUMBER OF SIEBEL SERVERS, TOTAL NUMBER OF SIEBEL OBJECT MANAGERS, NUMBER OF ORACLE DATABASE INSTANCES
2,500
5,000
7,000

Performance and Scalability Results with Oracle Solaris Cluster

In Phase 2 testing, Oracle Solaris 10 on each server was configured with four clustered containers (zones) in addition to the global zone. Each clustered zone isolated a different Siebel CRM tier: Web, Gateway, Application, or Database. Table 15 shows how system resources were dedicated to each tier. This design represents a reasonable and likely deployment scenario.
TABLE 15. RESOURCES ALLOCATED TO EACH TIER AND CONTAINER IN PHASE 2 TESTING
TIER AND CONTAINER / VCPUS* / MEMORY
Web tier: 16 vcpus, 3GB
Application tier: 70 vcpus, 34GB
Gateway tier: 2 vcpus, 1GB
Database tier: 32 vcpus, 24GB

* Since the Sun SPARC Enterprise T5240 server has two UltraSPARC T2 Plus processors with 8 cores and 8 threads per core, there is a maximum possible 128 vcpus per system, for a total of 256 vcpus in this two-server configuration. (Thus the tested clustered configuration has the same number of vcpus as the single Sun SPARC Enterprise T5440 server used in Phase 1 testing.)
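A zone cluster with the Table 15 style of CPU and memory caps can be defined with the clzonecluster utility. The sketch below covers one node of a hypothetical Web tier zone cluster; the zone cluster name, zonepath, hostnames, and network values are placeholders, and the full tested configuration is in Appendix C:

```shell
# Define a zone cluster for the Web tier, capping it at 16 vcpus and
# 3GB of physical memory (per Table 15); a second node would be added
# with another "add node ... end" stanza
clzonecluster configure web-zc <<'EOF'
create
set zonepath=/zones/web-zc
add node
set physical-host=node1
set hostname=web-zc-n1
add net
set address=192.168.1.41
set physical=e1000g0
end
end
add dedicated-cpu
set ncpus=16
end
add capped-memory
set physical=3g
end
commit
EOF

# Install and boot the zone cluster on its nodes
clzonecluster install web-zc
clzonecluster boot web-zc
```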
In this round of testing, data was also collected from UNIX system performance tools, LoadRunner (the workload generator software), and Oracle Automatic Workload Repository (AWR). The following pages contain metrics for Phase 2 testing of the HA configuration, including:

CPU utilization (as a percentage)
Memory utilization (in GB)
Business transaction throughput (in number of transactions per hour)
Average transaction response time (in seconds)

CPU Utilization (Clustered Configuration)

Figure 18 shows CPU utilization for the Web, Gateway/Application, and Database tiers in clustered zones. Table 16 gives the CPU utilization percentage for each tier under each user population. As shown, CPU utilization scales as the number of users increases, and additional compute capacity remains available to handle peaks in utilization, especially in the small and medium configurations.

Figure 18. CPU utilization percentage with an HA configuration is shown for each tested user population.

TABLE 16. CPU UTILIZATION (%)
SIEBEL CRM TIER / 2,500 USERS / 5,000 USERS / 7,000 USERS
Web Server
Gateway/Application Server
Database Server
Memory Utilization (Clustered Configuration)

Figure 19 shows memory utilization for Siebel CRM tiers deployed in clustered zones. Table 17 lists the corresponding utilization (in gigabytes). As the data and graph illustrate, in all three population scenarios memory utilization remains low, indicating that more than adequate memory resources are configured. (Note that each Sun SPARC Enterprise T5240 server can support up to a maximum of 256GB.)

Figure 19. Memory utilization in the HA configuration is given in gigabytes for each tested user population.

TABLE 17. MEMORY UTILIZATION (GB)
SIEBEL CRM TIER / 2,500 USERS / 5,000 USERS / 7,000 USERS
Web Server
Gateway/Application Server
Database Server

Business Transaction Throughput (Clustered Configuration)

Figure 20 shows the number of business transactions per hour for the three transaction types under each user population load. Table 18 lists the throughput rates. As the user population increases from 2,500 to 5,000 to 7,000 users, throughput increases almost linearly.
Figure 20. Business transaction throughput with an HA configuration is shown for each user population.

TABLE 18. TRANSACTION THROUGHPUT (TRANSACTIONS/HOUR)
BUSINESS TRANSACTION TYPE / 2,500 USERS / 5,000 USERS / 7,000 USERS
Financial Services Call Center
EAI Web Services

Average Transaction Response Time (Clustered Configuration)

Figure 21 depicts the average transaction response time for the three transaction types under each user population using containers. Table 19 lists the average response time in seconds for each transaction type. For purposes of the testing exercise, response times are measured at the Web server rather than at the end user. (Response times at the end user depend on a number of other variables, such as network latency, the bandwidth between the Web server and the browser, and the time for content rendering by the browser.)
Figure 21. Average transaction response time (given an HA configuration) is shown for different workload types and each user population.

TABLE 19. AVERAGE TRANSACTION RESPONSE TIME (SECONDS)
BUSINESS TRANSACTION TYPE / 2,500 USERS / 5,000 USERS / 7,000 USERS
Financial Services Call Center
EAI Web Services

Transaction Throughput and Response Time (Clustered Configuration)

Performance and scalability are inextricably linked, so it is important to examine throughput and response time metrics together when analyzing application performance and configuration scalability. As application load increases, response time must remain within acceptable bounds. As a rule of thumb, if throughput increases linearly as the number of concurrent users grows, the increase in response times should also remain within an acceptable limit. Figure 22 combines transaction throughput and response time for the three transaction types and user population loads. Table 20 lists the corresponding data values. As the data indicates, increases in throughput remain almost linear as user load increases, and response times remain within reasonable, sub-second bounds.
Figure 22. Throughput and response times (given an HA configuration) are shown for different workload types and user populations.

TABLE 20. TRANSACTION THROUGHPUT (TPH, TRANSACTIONS PER HOUR) AND RESPONSE TIME (RT, IN SECONDS)
BUSINESS TRANSACTION TYPE / 2,500 USERS / 5,000 USERS / 7,000 USERS
Financial Services Call Center TPH
EAI Web Services TPH
Financial Services Call Center RT
EAI Web Services RT

Power Consumption (Clustered Configuration)

During the Phase 2 testing of the HA configuration, power consumption was not explicitly measured. Estimated power consumption for a Sun SPARC Enterprise T5240 server supporting 7,000 concurrent Siebel users is around 778 watts, or approximately 8.9 users per watt.

Failover Testing with Oracle Solaris Cluster

In addition to performance and scalability testing, Oracle engineers conducted failover testing. Using the same Phase 2 test configuration shown in Figure 15 (page 24), in which one server node hosts primary instances of the Web and Database services while the second node hosts primary instances of the Gateway and Siebel Servers, engineers conducted four separate failover tests. The failover tests executed under a workload simulating 1,000 concurrent users (40% Financial and 60% EAI) and consisted of these four scenarios:

Failover of the primary Gateway server on node 2. After the simulated workload reached 1,000 active users, engineers killed all processes associated with the Gateway server on node 2. As a result, Oracle
Solaris Cluster restarted the Gateway resource group on node 2. Once the Gateway server came back online, workload generation resumed. Throughput and response time were measured to examine whether these metrics were consistent both before and after the failover.

Reboot of the primary Web server on node 1. With 1,000 simulated concurrent users, engineers rebooted the zone cluster on node 1 supporting the Web server. Oracle Solaris Cluster then failed over the Web server resource group to the second node. Once the Web server came online, the workload simulator resumed load generation, and engineers measured throughput and response time to determine consistency before and after the fault.

Reboot of the Database server instance on node 1. After the simulated workload reached 1,000 active users, engineers rebooted the zone cluster on node 1 hosting the Database server. Oracle Solaris Cluster failed over the Database server resource group to the second node. Once the Database server came online, workload generation resumed, and throughput and response time were again measured for consistency before and after the failover.

Complete power loss of node 2. In this scenario, after the simulated workload reached 1,000 users, engineers powered off node 2 via the server's built-in service processor. In response, Oracle Solaris Cluster restarted the Gateway and Siebel Server resource groups on node 1. Again, throughput and response time were measured for consistency before and after the node failure.

For purposes of this test, a simplified 1,000-user workload in a single pass was used rather than a more burdensome load. This was done to produce a cogent, representative sample of results rather than a range of results that would differ very little (if at all) with increased workload; the number of concurrent users in this configuration has very little impact on failure detection or recovery times. In all four scenarios, throughput and response times were consistent before and after failover.
Table 21 shows metrics for the 1,000-user workload, including baseline values measured prior to testing.

TABLE 21. TRANSACTION THROUGHPUT AND RESPONSE TIME IN FAILOVER SCENARIOS
Columns: FAILOVER TEST SCENARIO, # USERS, THROUGHPUT (TPH), RESPONSE TIME (SEC), DETECTION (D) AND RECOVERY (R) TIMES
Baseline (all tiers, nodes 1 and 2): 400 Financial + 600 EAI; D/R: N/A
Failover of primary Gateway server on node 2: 400 Financial + 600 EAI; Gateway: D = 1s, R = 1min 17s; Siebel: R = 26s; total stack: D + R = 1min 44s
Failover of primary Web server on node 1: 400 Financial + 600 EAI; Web: D = 14s, R = 1min 57s; total: D + R = 2min 11s
Failover of primary Database server on node 1: 400 Financial + 600 EAI; Database: D = 17s, R = 1min 1s; total: D + R = 1min 18s
Failover of node 2 (power-off): 400 Financial + 600 EAI; D = 16s; Gateway: R = 23s; Siebel: R = 1min 24s; total stack: D + R = 2min 3s

Best Practices and Recommendations

Prior to testing the solution, engineers made several optimizations to the Siebel CRM configurations. Summarized below, these settings and modifications can help customers optimize performance and scalability when consolidating the Siebel CRM Web, Gateway/Application, and Database tiers on a server. Sizing recommendations are included at the end of this section and can be tailored to site-specific requirements. Oracle consultants are experienced in designing optimal solutions for Siebel CRM applications and are knowledgeable about best practices. By engaging these consultants in application and system architectural design, customers can achieve optimal configurations that help meet business and site requirements.

Server/Operating System Optimizations

Best practices for optimizing the server and operating system include the following:

Make sure the server firmware is up to date. Check the Sun System Firmware Release site for the latest firmware release.

Install the latest release of Oracle Solaris 10. Customers running Siebel CRM applications on Oracle Solaris 10 5/08 should apply the kernel patch from sunsolve.sun.com. Later releases incorporate an equivalent workaround for this critical Siebel-specific bug, so no additional patching is required. Eventually Oracle will fix this bug in the Siebel code base, but in the meantime the Solaris 10 10/08 release (or the patch for the earlier Solaris release) addresses this issue for Siebel applications (and other 32-bit applications that include memory allocators that return unaligned mutexes). For more information, see the Sun RFE "Need to accommodate non-8-byte-aligned mutexes" or the Oracle Siebel support document #.

Optimize Oracle Solaris 10 settings in /etc/system.

Enable 256M memory page sizes on all nodes.
By default, the latest update of Oracle Solaris 10 uses a maximum page size of 4M even when 256M pages are a better fit for the application. To set a 256M page size, change the setting in /etc/system as follows:

set max_uheap_lpsize=0x

To avoid running into the standard input/output (stdio) limitation of 256 file descriptors, add the following lines to start_server in the Siebel CRM Gateway/Application tier:

ulimit -n 2048
LD_PRELOAD_32=/usr/lib/extendedFILE.so.1
export LD_PRELOAD_32
The default file descriptor limit in a shell is 256 and the maximum limit is 65,536; 2,048 is a reasonable limit from the application's perspective.

Improve scalability with an MT-hot memory allocation library (libumem or libmtmalloc). To improve the scalability of multi-threaded workloads, preload an MT-hot, object-caching memory allocation library such as libumem(3LIB) or mtmalloc(3MALLOC). To preload the libumem library, set the LD_PRELOAD_32 environment variable in the shell (bash/ksh) as shown below:

export LD_PRELOAD_32=/usr/lib/libumem.so.1:$LD_PRELOAD_32

The Web and Application servers in the Siebel Enterprise stack are 32-bit, whereas the Oracle 10g or 11g RDBMS on Oracle Solaris 10 for UltraSPARC processor-based servers is 64-bit. Hence the path to the libumem library in the preload statement differs slightly in the Database tier, as shown below:

export LD_PRELOAD_64=/usr/lib/sparcv9/libumem.so.1:$LD_PRELOAD_64

Be aware that the trade-off is an increased memory footprint: preloading an MT-hot memory allocation library can increase the memory footprint by 5 to 20%. In previous Siebel 8 code testing, there was around a 5% improvement in CPU utilization with a 9% increase in the memory footprint under a load of 400 users.
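Whether the large-page and preload settings actually took effect can be spot-checked on a running Siebel process. A sketch, assuming the multithreaded Siebel server process is named siebmtshmw (verify the exact process name on your system):

```shell
# Pick the most recently started Siebel multithreaded server process
pid=$(pgrep -n siebmtshmw)

# Show the page sizes backing its address space; look for 256M entries
pmap -xs "$pid" | grep 256M

# Confirm that the MT-hot allocator was preloaded
pldd "$pid" | grep libumem
```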
Tune the TCP/IP network stack by modifying these settings:

ndd -set /dev/tcp tcp_time_wait_interval
ndd -set /dev/tcp tcp_conn_req_max_q 1024
ndd -set /dev/tcp tcp_conn_req_max_q0
ndd -set /dev/tcp tcp_ip_abort_interval
ndd -set /dev/tcp tcp_keepalive_interval
ndd -set /dev/tcp tcp_rexmit_interval_initial 3000
ndd -set /dev/tcp tcp_rexmit_interval_max
ndd -set /dev/tcp tcp_rexmit_interval_min 3000
ndd -set /dev/tcp tcp_smallest_anon_port 1024
ndd -set /dev/tcp tcp_slow_start_initial 2
ndd -set /dev/tcp tcp_xmit_hiwat
ndd -set /dev/tcp tcp_recv_hiwat
ndd -set /dev/tcp tcp_max_buf
ndd -set /dev/tcp tcp_cwnd_max
ndd -set /dev/tcp tcp_fin_wait_2_flush_interval
ndd -set /dev/udp udp_xmit_hiwat
ndd -set /dev/udp udp_recv_hiwat
ndd -set /dev/udp udp_max_buf

I/O Best Practices

The Siebel 8 PSPP workload is moderately sensitive to disk I/O. For example, when all 14,000 concurrent users are online, the database writes about 7.5MB of data per second (of which approximately 3MB is written to the redo logs) and reads about 18.5kB per second. The Oracle database server writes data randomly into the data files (because the tables are scattered), whereas writes to the redo logs are largely sequential. For the purpose of testing, the database resided on a UFS file system.
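Note that ndd settings do not persist across a reboot. A common approach, sketched below with a hypothetical script name and only a few of the parameters above, is to reapply them from a boot-time rc script:

```shell
#!/sbin/sh
# /etc/rc2.d/S99nddtune (hypothetical): reapply TCP tuning at boot
ndd -set /dev/tcp tcp_conn_req_max_q 1024
ndd -set /dev/tcp tcp_smallest_anon_port 1024
ndd -set /dev/tcp tcp_slow_start_initial 2
ndd -set /dev/tcp tcp_rexmit_interval_initial 3000
ndd -set /dev/tcp tcp_rexmit_interval_min 3000
```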
Best practices relating to I/O include the following:

Store the data files separately from the redo log files. If the data files and redo log files are stored on the same disk drive and that drive fails, the redo logs cannot be used in the database recovery procedure. For this reason, the proof-of-concept configuration uses two Oracle StorageTek 2540 arrays connected to the Sun SPARC Enterprise T5440 server. One StorageTek 2540 array houses the data files, whereas the other stores the Oracle redo log files. File systems for data files and redo logs were hosted on UFS and mounted with the forcedirectio option.

Size the online redo logs to control the frequency of log switches. In the tested configuration, two online redo logs were configured, each with 10GB of disk space. With 14,000 concurrent users in the Phase 1 testing, there was only one log switch during a 60-minute simulated usage period.

Eliminate double buffering by forcing the file system to use direct I/O. The Oracle database caches data in its own cache within the Oracle shared global area (SGA), known as the database block buffer cache. Database reads and writes are cached in the block buffer cache so that subsequent accesses to the same blocks do not need to re-read data from the operating system. In addition, UFS file systems in Oracle Solaris default to reading data through the global file system cache for improved I/O. As a result, by default, each read is potentially cached twice: one copy in the operating system's file system cache and one copy in Oracle's block buffer cache. In addition to double caching, there is extra CPU overhead for the code that manages the operating system file system cache. The solution is to eliminate double caching by forcing the file system to bypass the OS file system cache when reading and writing to the disk.
To implement direct I/O and eliminate double caching, mount the UFS file systems (that hold the data files and the redo logs) with the forcedirectio option:

mount -o forcedirectio /dev/dsk/<partition> <mountpoint>

Enable the StorageTek 2540 array's read-ahead feature. When read-ahead enabled is set to true, the write is committed to the cache as opposed to the disk, and the OS signals the application that the write has been committed. The read-ahead feature is enabled through the GUI of the StorageTek Common Array Manager (CAM) software.

Web Tier Best Practices

Best practices for the Web tier include the following:

Upgrade to the latest service pack of the Oracle iPlanet Web Server (formerly Sun Java System Web Server).

Run the Web server in multi-process mode by setting the MaxProcs directive in magnus.conf to a value greater than 1. In multi-process mode, the Web server can handle requests using multiple processes with multiple threads in each process. With a value greater than 1 for MaxProcs, the Web server relies on the operating system to distribute connections among multiple Web server processes. However, many modern operating systems (including Oracle Solaris) do not distribute connections evenly, particularly when there are a small number of concurrent connections. For this reason, tune the maximum number of simultaneous requests by setting the RqThrottle parameter in magnus.conf to an appropriate value. In Phase 1 testing, a value of 1024 was used for the 14,000-user test.
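Together, the two directives look like this in magnus.conf (the MaxProcs value of 4 is illustrative; RqThrottle 1024 matches the Phase 1 14,000-user test):

```
# magnus.conf fragment: multi-process mode plus request throttle
MaxProcs 4
RqThrottle 1024
```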
Siebel Application Tier Best Practices

Best practices for the Siebel Application tier include the following:

Comment out the following lines in $SIEBEL_HOME/siebsrvr/bin/siebmtshw:

# This will set 4M page size for Heap and 64 KB for stack
# MPSSHEAP=4M
# MPSSSTACK=64K
# MPSSERRFILE=/tmp/mpsserr
# LD_PRELOAD=/usr/lib/mpss.so.1
# export MPSSHEAP MPSSSTACK MPSSERRFILE LD_PRELOAD

All Sun SPARC Enterprise T-series systems (Sun SPARC Enterprise T1000/T2000, T5120/T5220, T5140/T5240, and T5440 servers) support a 256M page size. However, Siebel's siebmtshw script restricts the page size to 4M for the heap and 64kB for the stack unless the indicated lines are commented out.

Experiment with fewer Siebel Object Managers. Configure the Object Managers so that each handles at least 200 active users. Siebel's standard recommendation of 100 or fewer users per Object Manager is suitable for conventional systems but not ideal for CMT systems like the Sun SPARC Enterprise T5440 server. Sun's CMT systems are ideal for running multi-threaded processes with numerous lightweight processes (LWPs) per process. With fewer Siebel Object Managers, there is also usually a significant improvement in the overall memory footprint.

Oracle Database Tier Best Practices

Best practices for the Oracle Database tier include setting the following initialization parameters:

Set the Oracle initialization parameter DB_FILE_MULTIBLOCK_READ_COUNT to an appropriate value, such as 8. The DB_FILE_MULTIBLOCK_READ_COUNT parameter specifies the maximum number of blocks read in one I/O operation during a sequential scan. In the testing, DB_BLOCK_SIZE was set to 8kB. Since average reads are around 18.5kB per second, setting DB_FILE_MULTIBLOCK_READ_COUNT to a higher value does not necessarily improve I/O performance.

Set the database initialization parameter CPU_COUNT to 64 on Sun SPARC Enterprise T5240 and T5440 servers.
Otherwise, by default, the Oracle RDBMS assumes a CPU_COUNT of 128 on the Sun SPARC Enterprise T5240 and 256 on the T5440. If this parameter is not adjusted, the Oracle optimizer may choose a completely different execution plan when it sees such a large CPU_COUNT, and that plan might not be optimal. In the 14,000-user benchmark in Phase 1 testing, setting CPU_COUNT to 64 produced optimal execution plans.

Explicitly set the database initialization parameter _enable_NUMA_optimization to FALSE for Sun SPARC Enterprise T5240 and T5440 servers. On these multi-socket servers, _enable_NUMA_optimization is set to TRUE by default. During the 14,000-user test, intermittent shadow process crashes occurred with the default setting, and the default NUMA optimizations provided no additional gains.
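Collected as an initialization file fragment, the database settings above might look as follows. This is a sketch, not the tested parameter file; _enable_NUMA_optimization is a hidden (underscore) parameter, which must be double-quoted in a pfile and should be changed only after confirming with Oracle Support:

```
# init.ora fragment reflecting the recommendations above
db_file_multiblock_read_count = 8
cpu_count = 64
"_enable_NUMA_optimization" = FALSE
```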
Best Practices for High Availability Configurations

Oracle Solaris Cluster HA for Siebel provides fault monitoring and automatic failover for the Siebel Gateway and Siebel Server. Note, however, that in a Siebel cluster deployment, any physical node running the Oracle Solaris Cluster agent for Siebel cannot also run the Resonate agent. (Resonate and Oracle Solaris Cluster can coexist in the same Siebel enterprise, but not on the same physical server. For more information, see Oracle's Sun Cluster Data Service for Siebel Guide for Solaris OS on docs.sun.com.)

Load balancing is a technique for spreading workload between two or more instances of the same application to increase throughput and availability. The Web tier can be load-balanced for high availability in an N+1 architecture, for example by having multiple containers or domains housing the Web Server with SWSE (Siebel Web Server Extensions) along with a hardware load balancer. Additionally, Oracle Solaris Cluster can load balance the Web server: the Oracle Solaris Cluster feature called Shared Address Resource for Scalable Services allows multiple instances of the same application (such as the Web server) on each node to listen for and process requests sent to the same IP address and port number. However, when the cluster HA agent for the Web server is used together with the cluster HA agent for the Siebel Server, Oracle Solaris Cluster can only provide failover service for the Web server.

To provide disaster recovery over unlimited distances, Oracle Solaris Cluster Geographic Edition provides a multi-site, multi-cluster disaster recovery solution to manage application availability across geographically remote clusters. In the event that a primary cluster fails, Oracle Solaris Cluster Geographic Edition enables administrators to initialize business services with replicated data on a secondary cluster, as depicted in Figure 23.

Figure 23.
Oracle Solaris Cluster Geographic Edition enables disaster recovery solutions over long distances for Siebel CRM services.
Sizing Guidelines

Under the Siebel 8 PSPP testing workload in Phase 1, engineers set virtual CPU (vcpu) and memory allocations for Oracle Solaris Containers as shown in Table 22:

TABLE 22. ACTUAL RESOURCE ALLOCATIONS FOR 14,000 USERS ON SUN SPARC ENTERPRISE T5440 SERVER
Web tier: 22 vcpus, 8GB (actual usage: CPU 78.21%, memory 4.5GB)
Application tier: 196 vcpus, 87.5GB (actual usage: CPU 76.29%, memory 73GB)
Database tier: 38 vcpus, 32GB (actual usage: CPU 71.33%, memory 20GB)

While these resource allocations proved to be ideal for the large 14,000-user configuration, they were not optimal for the small (3,500-user) and medium (7,000-user) configurations in Phase 1, where overall resource utilization was much lower. For the small and medium configurations, Table 23 and Table 24 (respectively) project how CPU and memory resources should instead be allocated, based on actual CPU and memory utilization.

TABLE 23. RECOMMENDED RESOURCE ALLOCATIONS FOR 3,500 USERS
Web tier: 6 vcpus, 2GB (with 22 vcpus and 8GB RAM, actual usage was CPU 13.67%, memory 1.1GB)
Application tier: 49 vcpus, 22GB (with 98 vcpus and 44GB RAM: CPU 10.75%, memory 19GB)
Database tier: 10 vcpus, 8GB (with 19 vcpus and 16GB RAM: CPU 14.22%, memory 12GB)
TABLE 24. RECOMMENDED RESOURCE ALLOCATIONS FOR 7,000 USERS
Web tier: 11 vcpus, 4GB (with 22 vcpus and 8GB RAM, actual usage was CPU 30.65%, memory 2GB)
Application tier: 98 vcpus, 44GB (with 98 vcpus and 44GB RAM: CPU 26.80%, memory 36GB)
Database tier: 19 vcpus, 16GB (with 19 vcpus and 16GB RAM: CPU 29.60%, memory 15GB)

Given the resource allocations in Table 23 and Table 24, a Sun SPARC Enterprise T5440 server could potentially be configured as summarized in Table 25.

TABLE 25. POSSIBLE CONFIGURATIONS FOR SUN SPARC ENTERPRISE T5440 SERVER
Columns: NUMBER OF USERS, DESCRIPTION, TOTAL VCPUS, PHYSICAL CPUS, TOTAL MEMORY
3,500: Small
7,000: Medium
14,000: Large

Of course, actual resource configurations depend on site-specific requirements. In small to medium-sized deployments, one strategy is to deploy a server with more physical resources than are minimally required for the Siebel CRM applications, and to use the excess resources and additional virtualized environments to support other (non-Siebel) application workloads. This allows tremendous flexibility as growth occurs. Another alternative is to deploy the Siebel CRM solution on a smaller server, such as the Sun SPARC Enterprise T5240 server. Using a smaller server lowers the cost of deploying an HA configuration that implements a second server, as in the Phase 2 HA test configuration.
Baseline Configurations

Expected performance characteristics are based on proof-of-concept test implementations and are provided as is without warranty of any kind. The entire risk of using information provided herein remains with the reader, and in no event shall Oracle be liable for any direct, consequential, incidental, special, punitive or other damages, including without limitation damages for loss of business profits, business interruption or loss of business information.

Based on the testing described in this paper, the remainder of this section outlines recommended hardware configurations as a starting point for small, medium, and large deployments. For optimal sizing information, contact your local Oracle representative.

Small HA Configuration up to 3,500 users

For a highly available configuration supporting up to 3,500 concurrent users, the following hardware components should be considered:

Storage: Two Sun Storage 6180 arrays, each fully populated with 16 drives and an expansion tray fully populated with 16 drives, for a minimum total capacity of 128 TB. Additional expansion trays can be added to support further capacity requirements.

Servers: Two Sun SPARC Enterprise T5140 servers, each with 2 CPUs and 128 GB of RAM.

Medium HA Configuration up to 7,000 users

For a medium-sized HA configuration supporting up to 7,000 users, these hardware components are recommended for deployment:

Storage: Two StorageTek 6140 arrays, configured to achieve a capacity of at least 128 TB. Expansion trays can be added to support additional capacity.

Servers: Two Sun SPARC Enterprise T5240 servers, each with 2 CPUs and 128 GB of RAM.

Large HA Configuration up to 14,000 users

For a highly available configuration supporting up to 14,000 concurrent users, consider the following hardware components:

Storage: Two StorageTek 2540 arrays, configured to achieve a capacity of at least 128 TB. Expansion trays can be added to support additional capacity.
Servers: Two Sun SPARC Enterprise T5440 servers, each with 2 CPUs and 128 GB of RAM. Since the Sun SPARC Enterprise T5440 server is quad-socketed, this configuration allows CPU expansion to support additional applications or to add processing resources.
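The three baseline tiers above reduce to a simple selection by concurrent-user count. A minimal sketch (the thresholds and hardware models are taken from this section; the function name and return shape are illustrative, not part of any Oracle tool):

```python
def baseline_ha_config(users):
    """Pick a starting HA hardware baseline for a given number of
    concurrent Siebel CRM users, per the small/medium/large tiers
    described in this section. A starting point only, not a
    substitute for a formal sizing exercise."""
    if users <= 3_500:
        return {"servers": "2x Sun SPARC Enterprise T5140 (2 CPUs, 128 GB each)",
                "storage": "2x Sun Storage 6180 arrays"}
    if users <= 7_000:
        return {"servers": "2x Sun SPARC Enterprise T5240 (2 CPUs, 128 GB each)",
                "storage": "2x StorageTek 6140 arrays"}
    if users <= 14_000:
        return {"servers": "2x Sun SPARC Enterprise T5440 (2 CPUs, 128 GB each)",
                "storage": "2x StorageTek 2540 arrays"}
    # Beyond the tested range, engage Oracle for a formal sizing exercise.
    raise ValueError("beyond the tested 14,000-user range")

print(baseline_ha_config(5_000)["servers"])  # medium tier -> T5240 pair
```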
Conclusion

Virtualization allows Siebel CRM applications to be consolidated securely and effectively on a single server, offering many benefits over the use of multiple physical machines: better resource utilization, a smaller datacenter footprint, and lower power consumption. Oracle engineers set out to examine the impact of combining the Web, Gateway/Application, and Database tiers for Siebel CRM applications on a single Sun SPARC Enterprise T5440 server from Oracle. Using Oracle Solaris Containers or Oracle VM Server for SPARC technologies to create virtualized environments with dedicated system resources, they determined that a single server could support up to 14,000 users under a complex business transaction workload derived from the PSPP benchmark. The advanced thread density of a single Sun SPARC Enterprise T5440 server allowed throughput to scale almost linearly for small, medium, and large user populations, while maintaining reasonable response times.

The second phase of testing confirmed the scalability of Siebel CRM workloads when HA technology is deployed in conjunction with the virtualization technologies built into Sun SPARC Enterprise servers. By implementing Oracle Solaris Cluster HA products on two servers (which together offered the same number of threads as a single Sun SPARC Enterprise T5440 server), Oracle engineers observed good scalability using virtualized Siebel CRM tiers for up to 7,000 users. Thus, a clustered configuration of economical Sun SPARC Enterprise T5240 servers offers a scalable and resilient platform for deploying mission-critical Siebel CRM services. By taking advantage of the thread density and scalability of Oracle's Sun SPARC Enterprise servers, customers can build fail-safe virtualized environments that enable remote failover, allowing IT managers to meet SLAs and satisfy stringent disaster recovery requirements for Siebel CRM applications.
In configuring a server for a Siebel CRM deployment, Oracle consultants can help define an effective architectural model, determine optimal sizing, decide which virtualization technologies to use, and recommend initial resource allocations. For more information on engaging experienced Oracle experts to design an agile Siebel CRM environment for your business, see the Oracle Web site.
Appendix A Phase 1 Configuration of Containers

In the first phase of testing, each Oracle Siebel CRM server ran in a non-global zone as follows:

siebelweb for the Web server
siebelapp for the Gateway/Application servers
siebeldb for the Database server

Virtual CPUs (vCPUs) and memory were allocated to the siebelweb and siebelapp zones. Only memory was allocated to the siebeldb zone, leaving it to draw the vCPUs it needed from the global zone. Since all database processes ran in the siebeldb non-global zone, consumption of CPU resources in the global zone during the test was negligible. The configuration of each zone is shown using the zonecfg command.

Web Server

# zonecfg -z siebelweb
zonecfg:siebelweb> info
zonename: siebelweb
zonepath: /zones2/webserver
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
inherit-pkg-dir:
    dir: /lib
inherit-pkg-dir:
    dir: /platform
inherit-pkg-dir:
    dir: /sbin
inherit-pkg-dir:
    dir: /usr
net:
    address:
    physical: nxge2
    defrouter not specified
dedicated-cpu:
    ncpus: 22
capped-memory:
    physical: 8G
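The info listing above shows the resulting configuration; a zone with equivalent resource controls can be defined non-interactively with a zonecfg command file. The sketch below reproduces the web-tier settings under that assumption (the paper does not show the creation commands, and the zone's IP address is elided in the listing, so it is left as a comment):

```shell
# siebelweb.cfg -- sketch of a zonecfg command file matching the
# web-tier listing above; apply from the global zone with:
#   zonecfg -z siebelweb -f siebelweb.cfg
create                          # default template (inherits /lib, /platform, /sbin, /usr)
set zonepath=/zones2/webserver
set autoboot=false
add net
set physical=nxge2              # NIC from the listing; also set address=... as appropriate
end
add dedicated-cpu
set ncpus=22                    # reserve 22 hardware threads for the web tier
end
add capped-memory
set physical=8G
end
commit
```

Once committed, the zone is installed and booted with zoneadm -z siebelweb install followed by zoneadm -z siebelweb boot.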
Application Server

# zonecfg -z siebelapp
zonecfg:siebelapp> info
zonename: siebelapp
zonepath: /zones3/appserv
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
inherit-pkg-dir:
    dir: /lib
inherit-pkg-dir:
    dir: /platform
inherit-pkg-dir:
    dir: /sbin
inherit-pkg-dir:
    dir: /usr
net:
    address:
    physical: nxge1
    defrouter not specified
dedicated-cpu:
    ncpus: 196
capped-memory:
    physical: 88G

Database Server

# zonecfg -z siebeldb
zonecfg:siebeldb> info
zonename: siebeldb
zonepath: /zones/dbserver
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
inherit-pkg-dir:
    dir: /lib
inherit-pkg-dir:
    dir: /platform
inherit-pkg-dir:
    dir: /sbin
inherit-pkg-dir:
    dir: /usr
net:
    address:
    physical: nxge3
    defrouter not specified
device
    match: /dev/dsk/c6t0d0s6
device
    match: /dev/dsk/c8t0d0s6
capped-memory:
    physical: 32G
Appendix B Phase 1 Configuration of Oracle VM Server for SPARC

The ldm list command shows the three domains used for testing in Phase 1.

# ldm list
NAME       STATE   FLAGS  CONS  VCPU  MEMORY
primary    active  -n-cv  SP    38    32G
siebelapp  active  -n                 M
siebelweb  active  -n                 G

Details on the three domain configurations are given below.

Primary Domain

Domain Name: primary

VARIABLES
    boot-device=/pci@400/pci@0/pci@1/scsi@0/disk@0,0:a disk net

IO
    DEVICE   PSEUDONYM  OPTIONS
    pci@400  pci
    pci@500  pci
    pci@600  pci
    pci@700  pci

VCC
    NAME         PORT-RANGE
    primary-vcc

VSW
    NAME          MAC                NET-DEV  DEVICE    MODE
    primary-vsw0  00:14:4f:fb:64:21  nxge3    switch@0
    primary-vsw1  00:14:4f:fb:49:d2  nxge2    switch@1

VDS
    NAME          VOLUME  OPTIONS  DEVICE
    primary-vds0  vol1             /dev/dsk/c3t40d1s2
    primary-vds1  vol2             /dev/dsk/c2t40d1s2

VCONS
    NAME  SERVICE  PORT
    SP

Based on measurements from the test, if the Database server had run in a guest domain instead of the primary domain, some of the primary domain's resources could have been reassigned to that guest domain, leaving at least 1 vCPU and 0.5 GB of RAM assigned to the primary domain.

Siebel Application Server Domain
Domain Name: siebelapp

VARIABLES
    auto-boot?=false

NETWORK
    NAME   SERVICE  DEVICE  MAC
    vnet2                   00:14:4f:f8:8f:13

DISK
    NAME    VOLUME  TOUT  DEVICE  SERVER
    vdisk2                        primary

VCONS
    NAME  SERVICE  PORT
    siebelapp

Siebel Web Server Domain

Domain Name: siebelweb

VARIABLES
    auto-boot?=false
    nvramrc=devalias vnet0
    use-nvramrc?=true

NETWORK
    NAME   SERVICE  DEVICE  MAC
    vnet1                   00:14:4f:fb:01:50

DISK
    NAME    VOLUME  TOUT  DEVICE  SERVER
    vdisk1                        primary

VCONS
    NAME  SERVICE  PORT
    siebelweb

Appendix C Phase 2 Configuration of Zone Clusters

In Phase 2 testing, engineers configured zone clusters for each Oracle Siebel CRM server instance as shown in Figure 15 (see page 24). The zone clusters were:

websrv-zc for the Web server
siebelgw-zc for the Gateway server
siebelsrv-zc for the Siebel application server
dbsrv-zc for the Database server

Below, the clzc command shows status information for the zone clusters, and the clrg command shows status information for cluster resource groups. On the following pages, the clzc command displays configuration details for each zone cluster.

# clzc status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name          Node Name  Zone HostName  Status  Zone Status
siebelsrv-zc  db         tm             Online  Running
              boxi       tm             Online  Running
siebelgw-zc   db         tm             Online  Running
              boxi       tm             Online  Running
websrv-zc     db         tm             Online  Running
              boxi       tm             Online  Running
dbsrv-zc      db         tm             Online  Running
              boxi       tm             Online  Running

# clrg status -Z all

=== Cluster Resource Groups ===

Group Name                 Node Name  Suspended  Status
siebelsrv-zc:siebelsrv-rg  tm         No         Offline
                           tm         No         Online
siebelgw-zc:siebelgw-rg    tm         No         Offline
                           tm         No         Online
websrv-zc:websrv-rg        tm         No         Online
                           tm         No         Offline
dbsrv-zc:dbsrv-rg          tm         No         Online
                           tm         No         Offline

Web Server

# clzc show -v websrv-zc

=== Zone Clusters ===

Zone Cluster Name:  websrv-zc
zonename:           websrv-zc
zonepath:           /zone/websrv-zc
autoboot:           TRUE
brand:              cluster
bootargs:           <NULL>
pool:               <NULL>
limitpriv:          <NULL>
scheduling-class:   <NULL>
ip-type:            shared
enable_priv_net:    TRUE

--- Solaris Resources for websrv-zc ---

net
    address:  tm
    physical: auto

fs
    dir:     /siebel/web
    special: /dev/global/dsk/d8s6
    raw:     /dev/global/rdsk/d8s6
    type:    ufs
    options: []

sysid
    name_service:    DNS{domain_name=sfbay.sun.com name_server= }
    nfs4_domain:     dynamic
    security_policy: NONE
    system_locale:   C
    terminal:        xterms
    timezone:        US/Pacific

capped-memory
    physical: 3G
    swap:     4G

inherit-pkg-dir
    dir (0): /lib
    dir (1): /platform
    dir (2): /sbin
    dir (3): /usr

dedicated-cpu
    ncpus:      16
    importance: 20

rctl
    name:   zone.max-swap
    priv:   privileged
    limit:
    action: deny
--- Zone Cluster Nodes for websrv-zc ---

Node Name:     db
physical-host: db
hostname:      tm

--- Solaris Resources for db ---

net
    address:
    physical:  nxge0
    defrouter: <NULL>

Node Name:     boxi
physical-host: boxi
hostname:      tm

--- Solaris Resources for boxi ---

net
    address:
    physical:  nxge0
    defrouter: <NULL>

Gateway Server

# clzc show -v siebelgw-zc

=== Zone Clusters ===

Zone Cluster Name:  siebelgw-zc
zonename:           siebelgw-zc
zonepath:           /zone/siebelgw-zc
autoboot:           TRUE
brand:              cluster
bootargs:           <NULL>
pool:               <NULL>
limitpriv:          <NULL>
scheduling-class:   <NULL>
ip-type:            shared
enable_priv_net:    TRUE

--- Solaris Resources for siebelgw-zc ---

net
    address:  tm
    physical: auto

fs
    dir:     /siebel/gateway
    special: /dev/global/dsk/d10s6
    raw:     /dev/global/rdsk/d10s6
    type:    ufs
    options: []

sysid
    name_service:    DNS{domain_name=sfbay.sun.com name_server= }
    nfs4_domain:     dynamic
    security_policy: NONE
    system_locale:   C
    terminal:        xterms
    timezone:        US/Pacific
capped-memory
    physical: 1G
    swap:     1G

inherit-pkg-dir
    dir (0): /lib
    dir (1): /platform
    dir (2): /sbin
    dir (3): /usr

dedicated-cpu
    ncpus:      2
    importance: 20

rctl
    name:   zone.max-swap
    priv:   privileged
    limit:
    action: deny

--- Zone Cluster Nodes for siebelgw-zc ---

Node Name:     db
physical-host: db
hostname:      tm

--- Solaris Resources for db ---

net
    address:
    physical:  nxge0
    defrouter: <NULL>

Node Name:     boxi
physical-host: boxi
hostname:      tm

--- Solaris Resources for boxi ---

net
    address:
    physical:  nxge0
    defrouter: <NULL>
Application Server

# clzc show -v siebelsrv-zc

=== Zone Clusters ===

Zone Cluster Name:  siebelsrv-zc
zonename:           siebelsrv-zc
zonepath:           /zone/siebelsrv-zc
autoboot:           TRUE
brand:              cluster
bootargs:           <NULL>
pool:               <NULL>
limitpriv:          <NULL>
scheduling-class:   <NULL>
ip-type:            shared
enable_priv_net:    TRUE

--- Solaris Resources for siebelsrv-zc ---

net
    address:  tm
    physical: auto

fs
    dir:     /siebel/server
    special: /dev/global/dsk/d12s6
    raw:     /dev/global/rdsk/d12s6
    type:    ufs
    options: []

sysid
    name_service:    DNS{domain_name=sfbay.sun.com name_server= }
    nfs4_domain:     dynamic
    security_policy: NONE
    system_locale:   C
    terminal:        xterms
    timezone:        US/Pacific

capped-memory
    physical: 34G
    swap:     43G

inherit-pkg-dir
    dir (0): /lib
    dir (1): /platform
    dir (2): /sbin
    dir (3): /usr

dedicated-cpu
    ncpus:      70
    importance: 20

rctl
    name:   zone.max-swap
    priv:   privileged
    limit:
    action: deny

--- Zone Cluster Nodes for siebelsrv-zc ---

Node Name:     db
physical-host: db
hostname:      tm

--- Solaris Resources for db ---

net
    address:
    physical:  nxge0
    defrouter: <NULL>

Node Name:     boxi
physical-host: boxi
hostname:      tm

--- Solaris Resources for boxi ---

net
    address:
    physical:  nxge0
    defrouter: <NULL>

Database Server

# clzc show -v dbsrv-zc

=== Zone Clusters ===

Zone Cluster Name:  dbsrv-zc
zonename:           dbsrv-zc
zonepath:           /zone/dbsrv-zc
autoboot:           TRUE
brand:              cluster
bootargs:           <NULL>
pool:               <NULL>
limitpriv:          <NULL>
scheduling-class:   <NULL>
ip-type:            shared
enable_priv_net:    TRUE

--- Solaris Resources for dbsrv-zc ---

net
    address:  tm
    physical: auto

fs
    dir: /oradata/redo
    special: /dev/global/dsk/d9s6
    raw:     /dev/global/rdsk/d9s6
    type:    ufs
    options: []

fs
    dir:     /oradata/control
    special: /dev/global/dsk/d13s6
    raw:     /dev/global/rdsk/d13s6
    type:    ufs
    options: []

fs
    dir:     /oradata/data
    special: /dev/global/dsk/d7s6
    raw:     /dev/global/rdsk/d7s6
    type:    ufs
    options: []

sysid
    name_service:    DNS{domain_name=sfbay.sun.com name_server= }
    nfs4_domain:     dynamic
    security_policy: NONE
    system_locale:   C
    terminal:        xterms
    timezone:        US/Pacific

capped-memory
    physical: 24G
    swap:     40G
    locked:   24G

inherit-pkg-dir
    dir (0): /lib
    dir (1): /platform
    dir (2): /sbin
    dir (3): /usr

dedicated-cpu
    ncpus:      32
    importance: 20
rctl
    name:   zone.max-locked-memory
    priv:   privileged
    limit:
    action: deny

rctl
    name:   zone.max-swap
    priv:   privileged
    limit:
    action: deny

--- Zone Cluster Nodes for dbsrv-zc ---

Node Name:     db
physical-host: db
hostname:      tm

--- Solaris Resources for db ---

net
    address:
    physical:  nxge0
    defrouter: <NULL>

Node Name:     boxi
physical-host: boxi
hostname:      tm

--- Solaris Resources for boxi ---

net
    address:
    physical:  nxge0
    defrouter: <NULL>

Below, the clrs command reports resource status for the dbsrv-zc zone cluster.

# clrs status -Z dbsrv-zc

=== Cluster Resources ===

Resource Name  Node Name  State    Status Message
hasp-rs        tm         Online   Online
               tm         Offline  Offline
lh-rs          tm         Online   Online - LogicalHostname online.
               tm         Offline  Offline
db-rs          tm         Online   Online
               tm         Offline  Offline
lsr-rs         tm         Online   Online
               tm         Offline  Offline
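The status listings above show each resource group online on one node and offline on the other. A planned switchover of, for example, the database resource group can be driven with the clrg command, as sketched below; the node name is a placeholder, since the actual hostnames are truncated in the listings above.

```shell
# Sketch: switch the database resource group in the dbsrv-zc zone
# cluster to the standby node, then verify that its resources
# (hasp-rs, lh-rs, db-rs, lsr-rs) have followed the group.
clrg switch -Z dbsrv-zc -n <standby-node> dbsrv-rg
clrg status -Z dbsrv-zc
clrs status -Z dbsrv-zc
```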
About the Authors

Chad Prucha has over 20 years of professional computing experience, ranging from coding to datacenter design. Much of his experience derives from work in Oracle's Sun Professional Services organization, where he designed and led projects in telepresence, open source software, virtualization, and security. Chad makes an effort to train and certify in competing technologies and products in order to evaluate their qualities more fairly. He most often works with academic, state government, manufacturing, and public utility clients, where information technology seeks every possible efficiency. Chad also enjoys working with microcontrollers, hydroponics, and Stirling engines.

Pedro Lay is an Enterprise Solutions Architect in Oracle's Sun Systems Technical Marketing Group. He has over 20 years of industry experience spanning application development, database and system administration, and performance tuning. Since joining Sun in 1990, Pedro has worked in various organizations, including Information Technology, the Customer Benchmark Center, the Business Intelligence and Data Warehouse Competency Center, and the Performance Applications Engineering group.

Acknowledgements

The authors would like to recognize the following individuals for their contributions to this article:

Gia-Khanh Nguyen, Oracle Solaris Cluster Engineering
Michael D. Hernandez, Oracle Data Center Client Solutions
Giri Mandalika, ISV Engineering
Uday Shetty, ISV Engineering
Jenny Chen, ISV Engineering
References

WEB SITES

Oracle Sun SPARC Enterprise Servers
Oracle Siebel CRM software: oracle.com/applications/crm/siebel/index.html

PAPERS

Using Sun Systems to Build a Virtual and Dynamic Infrastructure
Using Solaris Cluster and Sun Cluster Geographic Edition with Virtualization Technologies: wikis.sun.com/display/blueprints/Using+Solaris+Cluster+and+Sun+Cluster+Geographic+Edition
Oracle VM Server for SPARC: Enabling A Flexible, Efficient IT Infrastructure
Best Practices For Network Availability With Oracle VM Server for SPARC
Sun Cluster Data Service for Siebel Guide for Solaris OS: docs.sun.com
Consolidating Oracle Siebel CRM Environments with High Availability on Sun SPARC Enterprise Servers
June 2010

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA, U.S.A.

Worldwide Inquiries:
Phone:
Fax:
oracle.com

Copyright 2008, 2010, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open Company, Ltd. 0310
Siebel Installation Guide for UNIX. Siebel Innovation Pack 2013 Version 8.1/8.2, Rev. A April 2014
Siebel Installation Guide for UNIX Siebel Innovation Pack 2013 Version 8.1/8.2, Rev. A April 2014 Copyright 2005, 2014 Oracle and/or its affiliates. All rights reserved. This software and related documentation
Veritas Storage Foundation High Availability for Windows by Symantec
Veritas Storage Foundation High Availability for Windows by Symantec Simple-to-use solution for high availability and disaster recovery of businesscritical Windows applications Data Sheet: High Availability
Siebel Installation Guide for Microsoft Windows. Siebel Innovation Pack 2013 Version 8.1/8.2, Rev. A April 2014
Siebel Installation Guide for Microsoft Windows Siebel Innovation Pack 2013 Version 8.1/8.2, Rev. A April 2014 Copyright 2005, 2014 Oracle and/or its affiliates. All rights reserved. This software and
Microsoft Exchange Solutions on VMware
Design and Sizing Examples: Microsoft Exchange Solutions on VMware Page 1 of 19 Contents 1. Introduction... 3 1.1. Overview... 3 1.2. Benefits of Running Exchange Server 2007 on VMware Infrastructure 3...
Reducing the Cost and Complexity of Business Continuity and Disaster Recovery for Email
Reducing the Cost and Complexity of Business Continuity and Disaster Recovery for Email Harnessing the Power of Virtualization with an Integrated Solution Based on VMware vsphere and VMware Zimbra WHITE
Hard Partitioning and Virtualization with Oracle Virtual Machine. An approach toward cost saving with Oracle Database licenses
Hard Partitioning and Virtualization with Oracle Virtual Machine An approach toward cost saving with Oracle Database licenses JANUARY 2013 Contents Introduction... 2 Hard Partitioning Concepts... 2 Oracle
Windows Server 2008 R2 Hyper-V Live Migration
Windows Server 2008 R2 Hyper-V Live Migration Table of Contents Overview of Windows Server 2008 R2 Hyper-V Features... 3 Dynamic VM storage... 3 Enhanced Processor Support... 3 Enhanced Networking Support...
CA Cloud Overview Benefits of the Hyper-V Cloud
Benefits of the Hyper-V Cloud For more information, please contact: Email: [email protected] Ph: 888-821-7888 Canadian Web Hosting (www.canadianwebhosting.com) is an independent company, hereinafter
Purpose-Built Load Balancing The Advantages of Coyote Point Equalizer over Software-based Solutions
Purpose-Built Load Balancing The Advantages of Coyote Point Equalizer over Software-based Solutions Abstract Coyote Point Equalizer appliances deliver traffic management solutions that provide high availability,
With Red Hat Enterprise Virtualization, you can: Take advantage of existing people skills and investments
RED HAT ENTERPRISE VIRTUALIZATION DATASHEET RED HAT ENTERPRISE VIRTUALIZATION AT A GLANCE Provides a complete end-toend enterprise virtualization solution for servers and desktop Provides an on-ramp to
Technical Paper. Moving SAS Applications from a Physical to a Virtual VMware Environment
Technical Paper Moving SAS Applications from a Physical to a Virtual VMware Environment Release Information Content Version: April 2015. Trademarks and Patents SAS Institute Inc., SAS Campus Drive, Cary,
http://support.oracle.com/
Oracle Primavera Contract Management 14.0 Sizing Guide October 2012 Legal Notices Oracle Primavera Oracle Primavera Contract Management 14.0 Sizing Guide Copyright 1997, 2012, Oracle and/or its affiliates.
Technical Paper. Leveraging VMware Software to Provide Failover Protection for the Platform for SAS Business Analytics April 2011
Technical Paper Leveraging VMware Software to Provide Failover Protection for the Platform for SAS Business Analytics April 2011 Table of Contents About this Document... 3 Introduction... 4 What is Failover?...
Software Performance, Scalability, and Availability Specifications V 3.0
Software Performance, Scalability, and Availability Specifications V 3.0 Release Date: November 17, 2015 Availability: Daric guarantees a Monthly Uptime Percentage (defined below) of at least 99.95%, in
White Paper. Recording Server Virtualization
White Paper Recording Server Virtualization Prepared by: Mike Sherwood, Senior Solutions Engineer Milestone Systems 23 March 2011 Table of Contents Introduction... 3 Target audience and white paper purpose...
Legal Notices... 2. Introduction... 3
HP Asset Manager Asset Manager 5.10 Sizing Guide Using the Oracle Database Server, or IBM DB2 Database Server, or Microsoft SQL Server Legal Notices... 2 Introduction... 3 Asset Manager Architecture...
VDI Without Compromise with SimpliVity OmniStack and Citrix XenDesktop
VDI Without Compromise with SimpliVity OmniStack and Citrix XenDesktop Page 1 of 11 Introduction Virtual Desktop Infrastructure (VDI) provides customers with a more consistent end-user experience and excellent
OPTIMIZING SERVER VIRTUALIZATION
OPTIMIZING SERVER VIRTUALIZATION HP MULTI-PORT SERVER ADAPTERS BASED ON INTEL ETHERNET TECHNOLOGY As enterprise-class server infrastructures adopt virtualization to improve total cost of ownership (TCO)
VMware Certified Professional 5 Data Center Virtualization (VCP5-DCV) Exam
Exam : VCP5-DCV Title : VMware Certified Professional 5 Data Center Virtualization (VCP5-DCV) Exam Version : DEMO 1 / 9 1.Click the Exhibit button. An administrator has deployed a new virtual machine on
Virtualizing Exchange
Virtualizing Exchange Simplifying and Optimizing Management of Microsoft Exchange Server Using Virtualization Technologies By Anil Desai Microsoft MVP September, 2008 An Alternative to Hosted Exchange
Disaster Recovery Solutions for Oracle Database Standard Edition RAC. A Dbvisit White Paper
Disaster Recovery Solutions for Oracle Database Standard Edition RAC A Dbvisit White Paper Copyright 2011-2012 Dbvisit Software Limited. All Rights Reserved v2, Mar 2012 Contents Executive Summary... 1
...DYNAMiC INTERNET SOLUTiONS >> Reg.No. 1995/020215/23
GamCo Managed Dedicated Virtual Hosting Business growth in the number of users, application and data has left many organisations with sprawling servers, storage and supporting infrastructure crammed into
EMC VPLEX FAMILY. Continuous Availability and Data Mobility Within and Across Data Centers
EMC VPLEX FAMILY Continuous Availability and Data Mobility Within and Across Data Centers DELIVERING CONTINUOUS AVAILABILITY AND DATA MOBILITY FOR MISSION CRITICAL APPLICATIONS Storage infrastructure is
HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief
Technical white paper HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief Scale-up your Microsoft SQL Server environment to new heights Table of contents Executive summary... 2 Introduction...
EMC BACKUP-AS-A-SERVICE
Reference Architecture EMC BACKUP-AS-A-SERVICE EMC AVAMAR, EMC DATA PROTECTION ADVISOR, AND EMC HOMEBASE Deliver backup services for cloud and traditional hosted environments Reduce storage space and increase
WHITE PAPER 1 WWW.FUSIONIO.COM
1 WWW.FUSIONIO.COM WHITE PAPER WHITE PAPER Executive Summary Fusion iovdi is the first desktop- aware solution to virtual desktop infrastructure. Its software- defined approach uniquely combines the economics
System Requirements. Version 8.2 November 23, 2015. For the most recent version of this document, visit our documentation website.
System Requirements Version 8.2 November 23, 2015 For the most recent version of this document, visit our documentation website. Table of Contents 1 System requirements 3 2 Scalable infrastructure example
<Insert Picture Here> Refreshing Your Data Protection Environment with Next-Generation Architectures
1 Refreshing Your Data Protection Environment with Next-Generation Architectures Dale Rhine, Principal Sales Consultant Kelly Boeckman, Product Marketing Analyst Program Agenda Storage
HP SN1000E 16 Gb Fibre Channel HBA Evaluation
HP SN1000E 16 Gb Fibre Channel HBA Evaluation Evaluation report prepared under contract with Emulex Executive Summary The computing industry is experiencing an increasing demand for storage performance
AlphaTrust PRONTO - Hardware Requirements
AlphaTrust PRONTO - Hardware Requirements 1 / 9 Table of contents Server System and Hardware Requirements... 3 System Requirements for PRONTO Enterprise Platform Software... 5 System Requirements for Web
Scaling in a Hypervisor Environment
Scaling in a Hypervisor Environment Richard McDougall Chief Performance Architect VMware VMware ESX Hypervisor Architecture Guest Monitor Guest TCP/IP Monitor (BT, HW, PV) File System CPU is controlled
Solution Brief Availability and Recovery Options: Microsoft Exchange Solutions on VMware
Introduction By leveraging the inherent benefits of a virtualization based platform, a Microsoft Exchange Server 2007 deployment on VMware Infrastructure 3 offers a variety of availability and recovery
The virtualization of SAP environments to accommodate standardization and easier management is gaining momentum in data centers.
White Paper Virtualized SAP: Optimize Performance with Cisco Data Center Virtual Machine Fabric Extender and Red Hat Enterprise Linux and Kernel-Based Virtual Machine What You Will Learn The virtualization
Rackspace Cloud Databases and Container-based Virtualization
Rackspace Cloud Databases and Container-based Virtualization August 2012 J.R. Arredondo @jrarredondo Page 1 of 6 INTRODUCTION When Rackspace set out to build the Cloud Databases product, we asked many
HP ProLiant BL685c takes #1 Windows performance on Siebel CRM Release 8.0 Benchmark Industry Applications
HP takes #1 Windows performance on Siebel CRM Release 8.0 Benchmark Industry Applications Defeats IBM System x3850 in performance The HP Difference The test system demonstrated that Oracle s Siebel CRM
MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION
Reference Architecture Guide MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION EMC VNX, EMC VMAX, EMC ViPR, and EMC VPLEX Microsoft Windows Hyper-V, Microsoft Windows Azure Pack, and Microsoft System
Contents Introduction... 5 Deployment Considerations... 9 Deployment Architectures... 11
Oracle Primavera Contract Management 14.1 Sizing Guide July 2014 Contents Introduction... 5 Contract Management Database Server... 5 Requirements of the Contract Management Web and Application Servers...
Running Oracle s PeopleSoft Human Capital Management on Oracle SuperCluster T5-8 O R A C L E W H I T E P A P E R L A S T U P D A T E D J U N E 2 0 15
Running Oracle s PeopleSoft Human Capital Management on Oracle SuperCluster T5-8 O R A C L E W H I T E P A P E R L A S T U P D A T E D J U N E 2 0 15 Table of Contents Fully Integrated Hardware and Software
System Requirements Version 8.0 July 25, 2013
System Requirements Version 8.0 July 25, 2013 For the most recent version of this document, visit our documentation website. Table of Contents 1 System requirements 3 2 Scalable infrastructure example
ORACLE OPS CENTER: PROVISIONING AND PATCH AUTOMATION PACK
ORACLE OPS CENTER: PROVISIONING AND PATCH AUTOMATION PACK KEY FEATURES PROVISION FROM BARE- METAL TO PRODUCTION QUICKLY AND EFFICIENTLY Controlled discovery with active control of your hardware Automatically
Symantec Storage Foundation High Availability for Windows
Symantec Storage Foundation High Availability for Windows Storage management, high availability and disaster recovery for physical and virtual Windows applications Data Sheet: High Availability Overview
