Virtualization for IBM i John Bizon jbizon@us.ibm.com
Agenda
Virtualization: How do you define virtualization? The benefits of virtualization. PowerVM
IBM i Virtualization: IBM i hosting, VIOS hosting, comparison of IBM i virtualization
Related topics: Consoles for virtualization, Virtual Partition Manager, externally attached disk, partition mobility
What is Virtualization? Virtualization is a term that refers to the abstraction of computer resources. Some types of System i virtualization: processor, memory, disk storage, network
Virtualization is important to your boss: when CIOs selected their ten most important visionary plan elements, 76% cited implementing a virtualized computing environment as part of their plans to enhance competitiveness
Virtualization: Increase Utilization
Reduce CPU overcapacity: infrequent peak capacity needs, accurately sizing new workloads, headroom for unexpected growth, acquisition granularity
Consolidate low-use servers: test, development, QA, HA, DR; staging for new upgrades
Benefits: lower hardware and environmental costs; lower core-based software costs
Minimize the I/O footprint for connectivity to different segments (availability, queuing, bandwidth); can often add new workloads without additional I/O footprint
Virtualization: Improve Quality of Service
Decouple logical from physical resources: dynamically add/remove resources, reduce OS variation
Simplify disaster recovery: utilize live LPAR migration
Improve network performance: use low-latency virtual networks
Re-size instances to match changing requirements: CPU, memory, and I/O can each be adjusted up or down independently
Virtualization: Improve Flexibility and Time to Value
Rapidly deploy new workloads without acquiring and racking a new server, or cabling
Simplify automated provisioning: physical activities such as cabling are hard to automate; virtualization is key to a dynamic infrastructure
Re-purpose assets: virtual resources can be re-purposed to handle future requirements
PowerVM: built on 40 years of virtualization leadership, and virtualization innovation continues with PowerVM
1967: IBM develops the hypervisor that would become VM on the mainframe
1973: IBM announces the first machines to do physical partitioning
1987: IBM announces LPAR on the mainframe
1999: IBM announces LPAR on POWER
2004: IBM introduces the POWER Hypervisor for System p and System i
2007: IBM announces POWER6, the first UNIX servers with Live Partition Mobility
2008: IBM announces PowerVM
PowerVM is the foundation for shared infrastructure: multi-OS support (UNIX, IBM i, and Linux) with over 15,000 applications; share processor, memory, and I/O across operating environments
PowerVM Editions are tailored to client needs: the Express, Standard, and Enterprise Editions offer a unified virtualization solution for all Power workloads
PowerVM Express Edition: evaluations, pilots, PoCs, single-server projects; up to 2 concurrent VMs per server
PowerVM Standard Edition: production deployments, server consolidation; up to 10 concurrent VMs per core (up to 1000)
PowerVM Enterprise Edition: multi-server deployments, cloud infrastructure; up to 10 concurrent VMs per core (up to 1000)
Capabilities across the editions include: Virtual I/O Server, NPIV, Suspend/Resume, Shared Processor Pools, Thin Provisioning, Shared Storage Pools, Live Partition Mobility, Active Memory Sharing, Active Memory Deduplication (requires eFW 7.4), network balancing, and Live Partition Mobility performance improvements
PowerVM on POWER7 delivers virtualization without limits, with higher performance than VMware for the same virtual workloads
AIM7 performance benchmark, single-VM scaling (scale-up): vSphere 4 on an HP DL380 G6 vs. PowerVM on a Power 750, measured in jobs/min from 1 to 8 virtual CPUs
PowerVM outperforms VMware by up to 65% on the Power 750, running the same Linux workloads and virtualized resources*
PowerVM runs workloads more efficiently than VMware, with far superior resource utilization, price/performance, resilience, and availability
* A Comparison of PowerVM and VMware Virtualization Performance, April 2010: http://www.ibm.com/systems/power/software/virtualization/whitepapers/compare_perf.html
PowerVM delivers superior flexibility to optimize IT resource utilization and improve responsiveness
Flexibility factors (VMware ESX 3.5 in VMware Infrastructure 3 / VMware vSphere 4 & 5 / PowerVM):
Dynamic virtual CPU changes in VM: No / Add (but not Remove) / Yes
Dynamic memory changes in VM: No / Add (but not Remove) / Yes
Dynamic I/O device changes in VM: No / Some / Yes
Direct access to I/O devices from within VM: No / Some (with VT-d enabled) / Yes
Integrated LPAR and WPAR support: No / No / Yes
Source: http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-vsphere5.pdf
PowerVM delivers superior security to help manage risk and maximize availability
Risk management factors (VMware ESX 3.5 in VMware Infrastructure 3 / VMware vSphere 4 & 5 / PowerVM):
Implementation of virtualization technology: Third-party software add-on / Third-party software add-on / Integrated into server firmware
Isolation of I/O drivers from hypervisor: No / No / Yes (using VIOS)
Built-in cross-platform virtualization support: No / No / Yes (using PowerVM Lx86)
Live migration across processor generations: No / Some (with Intel FlexMigration) / Yes (POWER6 to POWER7)
Source: http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-vsphere5.pdf
IBM i Virtualization Methods
IBM i hosting: an IBM i client partition uses I/O resources from another IBM i host partition. Best for a homogeneous IBM i environment; best option for Windows integration on IBM i; familiar IBM i environment; can also host AIX and Linux partitions.
Virtual I/O Server (VIOS) hosting: an IBM i client partition uses I/O resources from a VIOS host partition. Best for a mix of IBM i, AIX, and Linux; typically requires the least amount of CPU; enables NPIV, FCoE, additional storage systems, and Partition Suspend/Resume.
Both run on the POWER6/7 hypervisor.
IBM i as a client
IBM i 6.1.1 or 7.1 can be a client to an IBM i 6.1.1 or 7.1 hosting partition
IBM i 6.1.1 or 7.1 can be a client to a VIOS hosting partition
Requires a POWER6 or POWER7 server (including POWER6/7 blades)
IBM i Based Virtualization
As a client, an IBM i partition uses I/O resources hosted by another IBM i partition or by a VIOS partition. This eliminates the requirement to buy adapters and disk drives for each IBM i partition. Requires POWER6 or POWER7 systems and IBM i V6R1 or later. The host partition can share physical or virtual optical devices as well as storage.
This adds to IBM i's storage virtualization capabilities: OS/400 hosting of Linux partitions (2001), i5/OS hosting of AIX on POWER5, integrated BladeCenter and System x servers running VMware, Windows, or Linux, and now IBM i hosting IBM i on POWER6/7.
Possible IBM i client implementation
Previously, many slots were required for a partition; minimum requirements included a console device, a load source, and an alternate restart device. Now storage adapters can be virtualized, including disk storage, optical drives, and Ethernet LAN.
Client partition becomes a cartridge install
Client storage spaces can be pre-created and deployed as needed. Pre-creation can include SLIC, the OS, LPPs, PTFs, applications, user IDs, etc.
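The hosting setup behind this can be sketched with IBM i CL commands on the host partition; the object names, resource name, and size below are illustrative assumptions, not values from this deck:

```cl
/* On the IBM i host partition (hypothetical names and sizes)        */
CRTNWSD    NWSD(CLIENT1) RSRCNAME(CTL01) TYPE(*GUEST) +
           PARTITION('CLIENT1') ONLINE(*NO)  /* server description tied to the client LPAR  */
CRTNWSSTG  NWSSTG(CLIENT1D1) NWSSIZE(40960) +
           FORMAT(*OPEN)                     /* 40 GB network storage space for client disk */
ADDNWSSTGL NWSSTG(CLIENT1D1) NWSD(CLIENT1)   /* link the storage space to the description   */
VRYCFG     CFGOBJ(CLIENT1) CFGTYPE(*NWS) +
           STATUS(*ON)                       /* vary on to present the disk to the client   */
```

A pre-loaded storage space (SLIC, OS, PTFs already installed) can be linked the same way, which is what makes the "cartridge install" deployment model possible.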
What is the VIOS?
A special-purpose appliance partition that provides I/O virtualization and enables advanced partition virtualization. Available since 2004. Built on top of AIX, but not an AIX partition. IBM i first attached to VIOS in 2008 with IBM i 6.1. VIOS is licensed with PowerVM.
Virtualizing storage with IBM i or VIOS
Single host: one IBM i or VIOS host provides access to SAN or internal storage for AIX, IBM i, or Linux client partitions; data is protected via RAID-5, RAID-6, or RAID-10
Redundant hosts: two IBM i or VIOS hosts provide access to SAN or internal storage for AIX, IBM i, and Linux client partitions; the client LPAR protects data via mirroring across two sets of disk and adapters
Redundant VIOS with multipath: two VIOS hosts provide multiple paths to attached SAN storage for AIX, IBM i, and Linux client partitions, using one set of disk
VIOS Virtualization Components
Virtual I/O Server (VIOS): required to connect to open storage; part of PowerVM
DASD: Fibre Channel adapter(s) are assigned to the VIOS LPAR in the HMC. LUNs are 512-byte open storage. Each SAN LUN is virtualized directly to the IBM i client; storage pools are not used in VIOS for IBM i. LUNs are virtualized by creating virtual target SCSI devices (vtscsix). MPIO can be used.
Virtual SCSI adapters: created in the HMC; a server SCSI adapter in VIOS pairs with a client SCSI adapter in IBM i, where the devices appear as DDxx (disk), OPTxx (optical), and CMNxx (communications). Multiple pairs are supported when an HMC is used.
Network: an Integrated Virtual Ethernet (IVE) port or Shared Ethernet Adapter (SEA) in VIOS provides the virtual LAN connection to the client.
Optical: IBM i can use any DVD drive connected to a supported VIOS adapter; the VIOS DVD drive (/dev/cd0) is virtualized directly (vtoptx) and appears as OPTxx.
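The LUN-to-client mapping described above can be sketched in the VIOS restricted shell; the hdisk, vhost, and device names are example assumptions:

```sh
# On the VIOS command line (padmin user); device names are examples
lsdev -type disk    # list hdisks seen by VIOS (SAN LUNs appear as hdiskN)
lsmap -all          # show existing virtual SCSI mappings

# Map one LUN directly to the client's vhost adapter as a virtual target SCSI device
mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0

# Virtualize the VIOS DVD drive (cd0) to the same client
mkvdev -vdev cd0 -vadapter vhost0 -dev vtopt0
```

Running lsmap -all afterwards shows each vtscsix/vtoptx target under its vhost adapter; these correspond to the DDxx and OPTxx devices seen on the IBM i client.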
Dual VIOS Hosts Supported
Duplicate the single-VIOS environment: a second virtual SCSI client/server adapter pair, with a separate set of LUNs in the second VIOS of the same size and number as the first. Each VIOS can also virtualize its own DVD drive (/dev/cd0) to the client.
Adapter-level mirroring between the two sets of disk is used in the client LPAR: mirroring between the virtual SCSI client adapters.
The client can withstand a failure or scheduled downtime in either host. The configuration can be used for multiple clients; weigh the protection against the cost.
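A sketch of the second host's side, assuming the same example device names as the first VIOS; the mirroring itself is then started from the IBM i client (via its disk-management service tools), not from VIOS:

```sh
# On the SECOND VIOS (padmin user): map an equal-sized LUN
# to this host's server adapter for the same client
mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0

# Confirm the mapping, so the client sees one disk from each VIOS
lsmap -vadapter vhost0
```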
N_Port ID Virtualization (NPIV)
NPIV provides direct Fibre Channel connections from client partitions to SAN resources, simplifying SAN management. The physical Fibre Channel adapter (IOA) is owned by the VIOS partition; virtual FC adapters connect the clients through the POWER Hypervisor.
Supported with the PowerVM Express, Standard, and Enterprise Editions. Supports AIX 5.3, AIX 6.1, IBM i 6.1, IBM i 7.1, and Linux. Requires a POWER6 or POWER7 system with an 8Gb PCIe Fibre Channel adapter or a 10Gb Fibre Channel over Ethernet (FCoE) adapter.
Benefits: enables use of existing storage management tools; simplifies storage provisioning (e.g., zoning, LUN masking); enables access to SAN devices including tape libraries.
IBM i requires: LIC 6.1.1 or LIC 7.1, plus a DS5000 or DS8000 storage subsystem and/or supported tape/tape library devices.
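The VIOS side of an NPIV configuration can be sketched as follows; the vfchost and fcs names are example assumptions:

```sh
# On the VIOS command line (padmin user); adapter names are examples
lsnports    # list physical FC ports and their NPIV capability/free virtual ports

# Bind the client's virtual FC adapter (vfchost0) to an NPIV-capable physical port
vfcmap -vadapter vfchost0 -fcp fcs0

# Verify the mapping and the client's virtual WWPNs
# (these WWPNs are what you zone and LUN-mask on the SAN)
lsmap -npiv -vadapter vfchost0
```

Because the client owns its own WWPNs, the SAN sees the IBM i partition directly, which is what allows existing zoning and provisioning tools to be reused.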
Comparisons: IBM i vs. VIOS hosting, host LPAR CPU utilization
Each host LPAR had a half CPU and 2 GB of memory, with 36 disks virtualized to 2 IBM i client LPARs.
[Chart: host LPAR CPU utilization (0-60%) vs. client LPAR TPMs (0-60,000) for i host 1, i host 2, VIO host 1, and VIO host 2]
Comparisons: IBM i vs. VIOS hosting, client IOPS
Disk configuration in each host LPAR: four 5903 IOAs, 18 drives, RAID-5 plus hot-spare protection, one virtual SCSI host adapter, 6 virtual SCSI LUNs. IBM i mirroring was turned on in the clients.
[Chart: disk response time (0-25 ms) vs. client LPAR IOPS (0-6,000) for LPAR1 and LPAR2, each hosted by IBM i and by VIOS]
Virtualization Comparison (IBM i Host | Virtual I/O Server Host)
Processor: PowerVM CPU virtualization | PowerVM CPU virtualization
Memory: - | Active Memory Sharing
Disk: IBM i internal storage; IBM i native-attached external storage; virtualized via network storage space (vSCSI) | VIOS internal storage; VIOS-attached external storage; vSCSI or NPIV
SSD: client is not aware of SSDs | client is aware of SSDs
Network: Proxy ARP or NAT, and Layer-2 bridge | Layer-2 bridge
DVD: Yes | Yes
Tape: Yes, with IBM i 7.1 TR2; not aware of tape library robotics | Yes, with VIOS 2.2; NPIV needed to support tape libraries
Adapters: - | FCoE, 10Gb Ethernet
Partition Mobility: - | Partition Suspend/Resume
Skills: IBM i | AIX and IBM i
Console Options
Hardware Management Console (HMC)
Integrated Virtualization Manager (IVM)
Systems Director Management Console (SDMC): browser interface; manages multiple systems; new user interface; enhanced functionality; supports up to 1000 LPARs; supports POWER6 & POWER7
Management Console Offering Highlights
Multiple offerings provide flexibility and ease of use for Power Systems virtualization and hardware service:
HMC: legacy hardware appliance; 7042-CR6 rack mount; firmware-based
Systems Director Management Console, hardware appliance: next generation 7042-CR6 (7944-A2Y), a System x3550 M3 with a 2.53GHz quad-core Intel Xeon E5630, 8 GB memory, and 2x 500GB = 1 TB HDD; rack mount
PowerVM IVM: part of PowerVM; focused on blades and small servers
Systems Director Management Console, software appliance: next generation; runs on VMware or KVM on customer-supplied x86 hardware; part of Flex ITME
HMC / SDMC Transition Roadmap (POWER7 servers)
SDMC (released 1H 2011): built on Director 6.2.1.2; hardware/software appliance on x86; supports POWER6 & POWER7 servers; ample transition time allowed; enhancements added as appropriate
HMC transition: full function through 1H 2011, including new I/O and SR-IOV support, but no other virtualization enhancements; then maintenance mode with service fixes only
IVM transition: IVM supports POWER7 today; no further functional enhancements planned
Support for IBM Storage Systems with IBM i (table as of April 12, 2011)
Storage systems: N Series@@; DS3200, DS3400, DS3500, DS3950; DS4700, DS4800, DS5020; Storwize V7000; DS5100, DS5300; DS6800; SVC; XIV; DS8100, DS8300; DS8700, DS8800
Rack/Tower systems, IBM i version and hardware: 5.4/6.1/7.1 POWER5/6/7; 6.1/7.1 POWER6/7 (not DS3200#, yes DS3500##); 6.1/7.1 POWER6/7; 6.1/7.1 POWER6/7; 6.1/7.1 POWER6/7; 5.4/6.1 POWER5/6/7 (not 7.1###); 6.1/7.1 POWER6/7; 6.1/7.1 POWER6/7; 5.4/6.1/7.1 POWER5/6/7; 5.4/6.1/7.1 POWER5/6/7
Rack/Tower systems, IBM i attach: IFS/NFS (NAS); VIOS; VIOS; VIOS; Direct* or VIOS VSCSI and NPIV%; Direct; VIOS; VIOS; Direct or VIOS VSCSI and NPIV**; Direct or VIOS VSCSI and NPIV**
Power Blades, IBM i version and hardware: 6.1/7.1 POWER6/7; 6.1/7.1 POWER6/7 @, #, ##; 6.1/7.1 POWER6/7 (BCH); 6.1/7.1 POWER6/7 (BCH); 6.1/7.1 POWER6/7 (BCH); not supported; 6.1/7.1 POWER6/7 (BCH); 6.1/7.1 POWER6/7 (BCH); 6.1/7.1 POWER6/7 (BCH); 6.1/7.1 POWER6/7 (BCH)
Power Blades, IBM i attach: IFS/NFS (NAS), IFS (NAS); VIOS; VIOS; VIOS; VIOS; n/a; VIOS; VIOS; VIOS NPIV**; VIOS NPIV**
Notes:
- This table does not list more detailed considerations, for example required firmware or PTF levels or configuration performance considerations
- POWER7 servers require IBM i 6.1 or later
- This table can change over time as additional hardware/software capabilities/options are added
# DS3200 only supports SAS connection: not supported on Rack/Tower servers, which use only Fibre Channel connections; supported on Blades with SAS
## DS3500 has either SAS or Fibre Channel connection. Rack/Tower uses only Fibre Channel. Blades in BCH support either SAS or Fibre Channel. Blades in BCS use only SAS.
### Not supported on IBM i 7.1.
But see SCORE System RPQ 846-15284 for exception support.
* Supported with Smart Fibre Channel adapters; NOT supported with IOP-based Fibre Channel adapters
** NPIV requires Machine Code level 6.1.1 or later and NPIV-capable HBAs (FC adapters) and switches
@ BCH supports DS3400, DS3500, DS3950; BCS supports DS3200, DS3500
@@ N Series can only be used as a file server: no load source/boot support, support only through IFS, no IBM i database support
% NPIV requires IBM i 7.1 TR2 (Technology Refresh 2) and the latest firmware, released May 2011 or later
For more details, use the System Storage Interoperability Center: www.ibm.com/systems/support/storage/config/ssic/
Note: there are currently some differences between the above table and the SSIC; the SSIC should be updated to reflect the above information.
Virtual Partition Manager
Originally for iSeries POWER5 customers who wanted to get started with Linux. A no-charge, IBM i based tool (included with IBM i) to create simple Linux partitions when no HMC is required or present.
Maximum of one IBM i partition with up to 4 Linux partitions and 4 virtual Ethernets; Linux partitions must use all virtual I/O (virtual SCSI and virtual Ethernet).
DST-type interface to create and manage partitions. Dynamic LPAR is not supported; uncapped partitions are supported.
With IBM i 7.1 TR3, VPM gains the ability to create up to four IBM i partitions.
Partition Suspend/Resume Underlying technology required for Live Partition Mobility 41
Live Partition Mobility
Live Partition Mobility is available for AIX and Linux. Requires PowerVM Enterprise Edition.
Learn More About PowerVM
PowerVM portal on the IBM Web site: http://www.ibm.com/systems/power/software/virtualization
PowerVM Client Success Stories*
* Download from the PowerVM portal or order a hard copy
Resources and references
Techdocs (updates to this presentation, tips & techniques, white papers, etc.): http://www.ibm.com/support/techdocs
PowerVM Virtualization on IBM System p: Introduction and Configuration, Fourth Edition, SG24-7940: http://www.redbooks.ibm.com/abstracts/sg247940.html?open
PowerVM Virtualization on IBM System p: Managing and Monitoring, SG24-7590: http://www.redbooks.ibm.com/abstracts/sg247590.html?open
IBM System p Advanced POWER Virtualization (PowerVM) Best Practices, REDP-4194: http://www.redbooks.ibm.com/abstracts/redp4194.html?open
Power Systems: Virtual I/O Server and Integrated Virtualization Manager commands (iphcg.pdf): http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphcg/iphcg.pdf
The End. Thank You!
Special notices This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM offerings available in your area. Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. Send license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY 10504-1785 USA. All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or guarantees either expressed or implied. All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations and conditions. IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal without notice.
IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies. All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document should verify the applicable data for their specific environment. Revised September 26, 2006
Special notices (cont.) IBM, the IBM logo, ibm.com AIX, AIX (logo), AIX 6 (logo), AS/400, Active Memory, BladeCenter, Blue Gene, CacheFlow, ClusterProven, DB2, ESCON, i5/os, i5/os (logo), IBM Business Partner (logo), IntelliStation, LoadLeveler, Lotus, Lotus Notes, Notes, Operating System/400, OS/400, PartnerLink, PartnerWorld, PowerPC, pseries, Rational, RISC System/6000, RS/6000, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, WebSphere, xseries, z/os, zseries, AIX 5L, Chiphopper, Chipkill, Cloudscape, DB2 Universal Database, DS4000, DS6000, DS8000, EnergyScale, Enterprise Workload Manager, General Purpose File System,, GPFS, HACMP, HACMP/6000, HASM, IBM Systems Director Active Energy Manager, iseries, Micro-Partitioning, POWER, PowerExecutive, PowerVM, PowerVM (logo), PowerHA, Power Architecture, Power Everywhere, Power Family, POWER Hypervisor, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software (logo), POWER2, POWER3, POWER4, POWER4+, POWER5, POWER5+, POWER6, POWER7, purescale, System i, System p, System p5, System Storage, System z, Tivoli Enterprise, TME 10, TurboCore, Workload Partitions Manager and X-Architecture are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol ( or ), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org. 
UNIX is a registered trademark of The Open Group in the United States, other countries or both. Linux is a registered trademark of Linus Torvalds in the United States, other countries or both. Microsoft, Windows and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both. Intel, Itanium, Pentium are registered trademarks and Xeon is a trademark of Intel Corporation or its subsidiaries in the United States, other countries or both. AMD Opteron is a trademark of Advanced Micro Devices, Inc. Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries or both. TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC). SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are trademarks of the Standard Performance Evaluation Corp (SPEC). NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both. AltiVec is a trademark of Freescale Semiconductor, Inc. Cell Broadband Engine is a trademark of Sony Computer Entertainment Inc. InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association. Other company, product and service names may be trademarks or service marks of others. Revised February 9, 2010 47
Notes on benchmarks and values The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark consortium or benchmark vendor. IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html. All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX Version 4.3, AIX 5L or AIX 6 were used. All other systems used previous versions of AIX. The SPEC CPU2006, SPEC2000, LINPACK, and Technical Computing benchmarks were compiled using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used: XL C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++ Advanced Edition V7.0 for Linux, and XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors, KAP 3.2 for FORTRAN and KAP/C 1.4.2 from Kuck & Associates and VAST-2 v4.01x8 from Pacific-Sierra Research. The preprocessors were purchased separately from these vendors. Other software packages like IBM ESSL for AIX, MASS for AIX and Kazushige Goto s BLAS Library for Linux were also used in some benchmarks. For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor. 
TPC http://www.tpc.org SPEC http://www.spec.org LINPACK http://www.netlib.org/benchmark/performance.pdf Pro/E http://www.proe.com GPC http://www.spec.org/gpc VolanoMark http://www.volano.com STREAM http://www.cs.virginia.edu/stream/ SAP http://www.sap.com/benchmark/ Oracle Applications http://www.oracle.com/apps_benchmark/ PeopleSoft - To get information on PeopleSoft benchmarks, contact PeopleSoft directly Siebel http://www.siebel.com/crm/performance_benchmark/index.shtm Baan http://www.ssaglobal.com Fluent http://www.fluent.com/software/fluent/index.htm TOP500 Supercomputers http://www.top500.org/ Ideas International http://www.ideasinternational.com/benchmark/bench.html Storage Performance Council http://www.storageperformance.org/results Revised March 12, 2009 48
Notes on performance estimates rperf for AIX rperf (Relative Performance) is an estimate of commercial processing performance relative to other IBM UNIX systems. It is derived from an IBM analytical model which uses characteristics from IBM internal workloads, TPC and SPEC benchmarks. The rperf model is not intended to represent any specific public benchmark results and should not be reasonably used in that way. The model simulates some of the system operations such as CPU, cache and memory. However, the model does not simulate disk or network I/O operations. rperf estimates are calculated based on systems with the latest levels of AIX and other pertinent software at the time of system announcement. Actual performance will vary based on application and configuration specifics. The IBM eserver pseries 640 is the baseline reference system and has a value of 1.0. Although rperf may be used to approximate relative IBM UNIX commercial processing performance, actual system performance may vary and is dependent upon many factors including system hardware configuration and software design and configuration. Note that the rperf methodology used for the POWER6 systems is identical to that used for the POWER5 systems. Variations in incremental system performance may be observed in commercial workloads due to changes in the underlying system architecture. All performance estimates are provided "AS IS" and no warranties or guarantees are expressed or implied by IBM. Buyers should consult other sources of information, including system benchmarks, and application sizing guides to evaluate the performance of a system they are considering buying. For additional information about rperf, contact your local IBM office or IBM authorized reseller. ======================================================================== CPW for IBM i Commercial Processing Workload (CPW) is a relative measure of performance of processors running the IBM i operating system. 
Performance in customer environments may vary. The value is based on maximum configurations. More performance information is available in the Performance Capabilities Reference at: www.ibm.com/systems/i/solutions/perfmgmt/resource.html Revised April 2, 2007 49
Notes on HPC benchmarks and values The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark consortium or benchmark vendor. IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html. All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX Version 4.3 or AIX 5L were used. All other systems used previous versions of AIX. The SPEC CPU2000, LINPACK, and Technical Computing benchmarks were compiled using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used: XL C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++ Advanced Edition V7.0 for Linux, and XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors, KAP 3.2 for FORTRAN and KAP/C 1.4.2 from Kuck & Associates and VAST-2 v4.01x8 from Pacific-Sierra Research. The preprocessors were purchased separately from these vendors. Other software packages like IBM ESSL for AIX, MASS for AIX and Kazushige Goto s BLAS Library for Linux were also used in some benchmarks. For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor. 
SPEC http://www.spec.org LINPACK http://www.netlib.org/benchmark/performance.pdf Pro/E http://www.proe.com GPC http://www.spec.org/gpc STREAM http://www.cs.virginia.edu/stream/ Fluent http://www.fluent.com/software/fluent/index.htm TOP500 Supercomputers http://www.top500.org/ AMBER http://amber.scripps.edu/ FLUENT http://www.fluent.com/software/fluent/fl5bench/index.htm GAMESS http://www.msg.chem.iastate.edu/gamess GAUSSIAN http://www.gaussian.com ANSYS http://www.ansys.com/services/hardware-support-db.htm Click on the "Benchmarks" icon on the left hand side frame to expand. Click on "Benchmark Results in a Table" icon for benchmark results. ABAQUS http://www.simulia.com/support/v68/v68_performance.php ECLIPSE http://www.sis.slb.com/content/software/simulation/index.asp?seg=geoquest& MM5 http://www.mmm.ucar.edu/mm5/ MSC.NASTRAN http://www.mscsoftware.com/support/prod%5fsupport/nastran/performance/v04_sngl.cfm STAR-CD www.cd-adapco.com/products/star-cd/performance/320/index/html NAMD http://www.ks.uiuc.edu/research/namd HMMER http://hmmer.janelia.org/ http://powerdev.osuosl.org/project/hmmeraltivecgen2mod Revised March 12, 2009 50