PowerVM and VIOS for IBM i




PowerVM and VIOS for IBM i
QUSER user group meeting, 3/19/2013, Minneapolis, MN
Gottfried Schimunek, Senior Architect, Application Design
IBM STG Software Development Lab Services, IBM ISV Enablement
3605 Highway 52 North, Rochester, MN 55901
Tel 507-253-2367, Fax 845-491-2347, Gottfried@us.ibm.com

Acknowledgement: Thanks to Kris Whitney, Architect and lead developer, Power Virtual I/O, SAN, and Communications Development.

IBM's History of Virtualization Leadership A 40+ year tradition continues with PowerVM and VMControl:
1967 IBM develops the hypervisor that would become VM on the mainframe
1973 IBM announces the first machines to do physical partitioning
1987 IBM announces LPAR on the mainframe
1999 IBM announces LPAR on POWER
2004 IBM introduces the POWER Hypervisor for System p and System i
2008 IBM announces POWER6 Live Partition Mobility
2009 IBM announces PowerVM

PowerVM: Virtualization Without Limits Sold with more than 70% of Power Systems. Improves IT resource utilization, reduces IT infrastructure costs, and simplifies management.

PowerVM Editions are tailored to client needs and offer a unified virtualization solution for all Power workloads:
- PowerVM Express Edition: evaluations, pilots, PoCs; single-server projects
- PowerVM Standard Edition: production deployments; server consolidation
- PowerVM Enterprise Edition: multi-server deployments; cloud infrastructure
Concurrent VMs: 2 per server with Express; 20 per core** (up to 1000) with Standard and Enterprise. Features across the editions include the Virtual I/O Server, NPIV, Suspend/Resume, Shared Processor Pools, Shared Storage Pools, Thin Provisioning, Live Partition Mobility, and Active Memory Sharing. ** Requires eFW 7.6 or higher.

Evolution towards fully virtualized, simplified, lower-cost infrastructure (diagram):
- Established: physical systems with local virtualization; vswitch, Virtual I/O Server, dynamic resource optimization within a physical system, VLANs, external virtualized storage and switches.
- Maturing: multi-system virtualization managed across physical servers, optimized for availability, performance, and energy. Adds mobility of virtual machines, VM-based availability/resilience management, storage pools with OS provisioning, hypervisor clustered file-system access to virtual storage, VM security appliances, and virtual machine lifecycle management.
- Emerging: virtual appliances and workload mobility within scalable, centrally managed system pools (ensembles). Adds virtual appliance deployment from an image library, shared storage pools with advanced capabilities (cloning, snapshot, thin provisioning, ...), management to QoS policies (intelligent placement), workload-based availability/resilience management, a converged data center network fabric, I/O virtualization and virtual switching, and mobility of workloads with automated, integrated server, network, and storage provisioning.

Two I/O Server Options (diagram):
- IBM i hosting IBM i: built into IBM i; the host partition provides disk, optical, and tape to clients through the hypervisor; same technology used to host AIX, Linux, and iSCSI.
- VIOS: a VIOS server partition hosts disk, optical, and tape; consolidates and bridges Ethernet traffic; attaches external storage; enables advanced virtualization functions.

What is the VIOS? A special-purpose appliance partition that provides I/O virtualization and enables advanced partition virtualization. First GAed in 2004. Built on top of AIX, but not an AIX partition. IBM i first attached to VIOS in 2008 with IBM i 6.1. VIOS is licensed with PowerVM.

Why use the VIOS? I/O capacity utilization, storage allocation flexibility, Ethernet flexibility, memory sharing, suspend/resume, and mobility.

I/O Virtualization on POWER (diagram): contrasts I/O bus virtualization with dedicated adapters (each LPAR owns a physical adapter and its device driver) against I/O adapter virtualization with a VIO Server (client LPARs use virtual adapter device drivers connected through the hypervisor to server virtual adapters in the VIOS, which owns the physical adapters, ports, and fabric connections). Adapter bandwidth, LPAR density, and virtual fabric per slot all increase with adapter virtualization.

IBM i + VSCSI (Classic) (diagram): one VIOS with a FC HBA hosts IBM i client partitions across systems; each client sees its storage as device type 6B22. Assign the storage to the physical HBA in the VIOS. The hostconnect is created as an open storage or AIX host type and requires 512-byte-per-sector LUNs to be assigned to it. Existing direct-connect LUNs cannot be migrated. Many storage options are supported. Requires POWER6 with IBM i 6.1.1.
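On the VIOS side, exporting a backing disk to an IBM i client over VSCSI comes down to a single mapping command. A minimal sketch from the VIOS restricted shell; the adapter and disk names (vhost0, hdisk5, ibmi_lun0) are assumptions for illustration:

```shell
# As padmin on the VIOS: list virtual SCSI server adapters and
# candidate backing disks (names vary per configuration):
lsdev -type adapter | grep vhost
lspv

# Map the backing disk to the virtual SCSI server adapter paired
# with the IBM i client; the client then sees a 6B22 device:
mkvdev -vdev hdisk5 -vadapter vhost0 -dev ibmi_lun0

# Verify the mapping:
lsmap -vadapter vhost0
```

These commands only run inside a VIOS partition; the sketch shows the shape of the workflow, not a copy-paste recipe.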

Performance Does Virtualization Perform? (chart): database ASP response time in milliseconds (0-16 ms) versus operations per second (0-60,000), comparing VIOS-attached DS5K against direct-attached (DA) DS5K.

IBM i + NPIV (Virtual Fibre Channel) (diagram): a VIOS with an 8 Gb HBA hosts IBM i clients across systems. The hypervisor assigns 2 unique WWPNs to each virtual Fibre Channel adapter (virtual address example: C001234567890001). The hostconnect is created as an iSeries host type and, on the DS8K, requires 520-byte-per-sector LUNs to be assigned to it. Existing direct-connect LUNs can be migrated. Requires POWER6 with IBM i 6.1.1. DS8100, DS8300, DS8700, DS8800, DS5100, and DS5300 as well as SVC, V7000, and V3700 are supported. Note: an NPIV (N_Port) capable switch is required to connect the VIOS to the DS8000 to use virtual Fibre Channel.

NPIV Concepts:
- Multiple VFC server adapters may map to the same physical adapter port.
- Each VFC server adapter connects to one VFC client adapter; each VFC client adapter gets a unique WWPN.
- The client WWPN stays the same regardless of the physical port it is connected to.
- The physical-port-to-virtual-port mapping can be changed dynamically.
- Clients can discover and manage physical devices on the SAN.
- The VIOS cannot access or emulate the storage; it just provides clients access to the SAN.
- Concurrent microcode download to the physical FC adapter is supported.
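The server-adapter-to-physical-port mapping described above is set up on the VIOS with `vfcmap`. A sketch of the usual sequence; vfchost0 and fcs0 are assumed names:

```shell
# As padmin on the VIOS: check which physical FC ports are
# NPIV-capable and logged in to the fabric:
lsnports

# Map a virtual FC server adapter to an NPIV-capable physical port
# (vfchost0 and fcs0 are placeholders for your adapter names):
vfcmap -vadapter vfchost0 -fcp fcs0

# Show all virtual FC mappings, client WWPNs, and login status:
lsmap -all -npiv
```

Because the client WWPNs belong to the client adapter, remapping to a different physical port later does not change the SAN zoning the client depends on.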

NPIV Configuration - Server Adapter Mappings (screenshot).

NPIV Performance (chart): NPIV vs direct attach on a DS8300; application response time (0-0.01) versus CPW users (0-120), comparing an NPIV run against direct attach.

Introducing the VIOS Performance Advisor What is it? A standalone application that polls key performance metrics for minutes or hours, then analyzes the results to produce a report that summarizes the health of the environment and proposes actions to address performance inhibitors. How does it work? Step 1: download the VIOS Advisor. Step 2: run the executable within the VIOS partition (only a single executable is required; it can monitor from 5 minutes up to 24 hours). Step 3: open the resulting .xml file in your favorite web browser to get an easy-to-interpret report summarizing your VIOS status. https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/power%20systems/page/vios%20advisor
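In later VIOS levels (2.2.2 and up) the advisor function ships with the VIOS itself as the `part` command, so the download step goes away. A sketch of a short collection run, assuming a current VIOS; the hostname in the copy step is a placeholder:

```shell
# As padmin on the VIOS: collect 30 minutes of performance data;
# part writes an .xml report into the padmin home directory:
part -i 30

# Copy the generated report off the VIOS to view it in a browser,
# e.g. from an admin workstation (hostname and path assumed):
# scp padmin@vios1:/home/padmin/vios1_*.xml .
```

The slide's standalone download remains the route on older VIOS levels.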

Sample Screenshot - SEA. Performance interpretation combined with effective visual cues to alert clients about the state of the system and opportunities to optimize:
- INFORMATIVE: context-relevant data helpful in making adjustments.
- OPTIMAL: current condition likely to deliver best performance.
- WARNING: current condition deviates from best practices; an opportunity likely exists for better performance.
- CRITICAL: current condition likely causing negative performance impacts.

Customer Driven Features Your input matters. More detailed Fibre Channel adapter statistics to aid with resource planning. Additional customer-driven features: a check-for-updates button, an option to include timestamps in the report name, FC adapter command element monitoring, and the server serial number added to the report.

VIOS Investigator New iDoctor component released in May 2012:
- Combines NMON data and a VIOS-to-IBM i disk mapping process to analyze VIOS performance.
- Includes an NPIV data collection and analysis function.
- Includes functions to display the VIOS configuration.
- PerfPMR data collection and send to IBM support.
- Free (except the NPIV analysis functions, which require a Job Watcher license).
Future plans: V7000 support. Documentation (see chapter 10): http://public.dhe.ibm.com/services/us/igsc/idoctor/idoctorv7r1.pdf

Analyzing NPIV: Advanced graphs - Total reads [VFC] (screenshot).

IBM i + NPIV (Virtual Fibre Channel) with PowerHA (diagram): two IBM i clients, each with SYSBAS and an IASP, attach through a VIOS with an 8 Gb HBA. Each port is assigned separate WWPNs by the hypervisor, and each port is seen as a separate adapter by IBM i, so PowerHA can reset it individually. This reduces the hardware for a single partition from 4 adapters to 2 for PowerHA. Requires POWER6 with IBM i 6.1.1. Note: this configuration can support up to 64 IBM i partitions without adding any more adapters.

PowerHA in the Virtual I/O Environment. With VSCSI: all logical replication solutions supported, including iCluster; PowerHA for i geographic mirroring; PowerHA for i Storwize V7000 Metro and Global Mirror support (4Q2011). With NPIV: all logical replication solutions supported, including iCluster; DS8000 Metro Mirroring, Global Mirroring, and LUN-level switching; SVC/V7000 Metro Mirroring, Global Mirroring, and LUN-level switching.

Redundant VIOS with NPIV (diagram): an IBM i client on POWER6, with SYSBAS and an IASP, uses server and client VFC adapters through two VIOS partitions. It is a best practice to make the VIOS redundant, or to separate individual VIOS partitions, so that a single hardware failure cannot take down both VIOS partitions. Step 1: configure the virtual and physical FC adapters. Step 2: configure the SAN fabric and storage, zoning LUNs to the virtual WWPNs; each DASD then sees a path through 2 VIOS partitions. Notes: up to 8 paths per LUN are supported; not all paths have to go through separate VIOS partitions; a new multi-path algorithm arrived in 7.1 TR2.

VIOS Storage attach There are three categories of storage attachment to IBM i through VIOS:
1) Supported (IBM storage): tested by IBM; IBM supports the solution, owns resolution, and will deliver the fix.
2) Tested / Recognized (3rd-party storage, including EMC and Hitachi): an IBM / storage vendor collaboration; the solution was tested by the vendor, IBM, or both; a CSA is in place stating that IBM and the storage vendor will work together to resolve issues, and the storage vendor will deliver the fix.
3) Other: not tested by IBM, and possibly not tested at all; no commitment or obligation to provide a fix.
Category #3 (Other) was introduced in the last few years; other storage used to invalidate the VIOS warranty. IBM Service has committed to provide some limited level of problem determination for service requests involving "other" storage, to the extent that they will try to isolate a problem to being within VIOS or IBM i, or external to VIOS or IBM i (i.e., a storage problem). There is no guarantee that a fix will be provided, even if the problem is identified as a VIOS or IBM i issue.

Support for IBM Storage Systems with IBM i Table as of Feb 2013:

Storage system(s) | IBM i version | Hardware | Attach (Rack/Tower) | Attach (Power Blades, BCH)
DS3200, DS3400, DS3500, DCS3700, DS3950 | 6.1 / 7.1 | POWER6/7 | VIOS (not DS3200 #; DS3500 ##) | VIOS (@, #, ##)
DS4700, DS4800, DS5020 | 6.1 / 7.1 | POWER6/7 | VIOS | VIOS
SVC, Storwize V7000, V3700, V3500 | 6.1 / 7.1 | POWER6/7 | VIOS VSCSI and NPIV %% | VIOS
DS5100, DS5300 | 6.1 / 7.1 | POWER6/7 | Direct * or VIOS VSCSI and NPIV % | VIOS NPIV %
XIV | 6.1 / 7.1 | POWER6/7 | VIOS | VIOS
DS8100, DS8300, DS8700, DS8800, DS8870 | 5.4 / 6.1 / 7.1 | POWER5/6/7 | Direct or VIOS VSCSI and NPIV ** | VIOS NPIV **

Notes:
- This table does not list more detailed considerations, for example required firmware or PTF levels or configuration performance considerations.
- POWER7 servers require IBM i 6.1 or later.
- This table can change over time as additional hardware/software capabilities and options are added.
# DS3200 only supports SAS connection; it is not supported on Rack/Tower servers, which use only Fibre Channel connections, but is supported on Blades with SAS.
## DS3500 has either SAS or Fibre Channel connection. Rack/Tower only uses Fibre Channel. Blades in BCH support either SAS or Fibre Channel; Blades in BCS only use SAS.
### Not supported on IBM i 7.1, but see SCORE System RPQ 846-15284 for exception support.
* Supported with Smart Fibre Channel adapters; NOT supported with IOP-based Fibre Channel adapters.
** NPIV requires Machine Code level 6.1.1 or later and NPIV-capable HBAs (FC adapters) and switches.
@ BCH supports DS3400, DS3500, DS3950; BCS supports DS3200, DS3500.
% NPIV requires IBM i 7.1 TR2 (Technology Refresh 2) and the latest firmware released May 2011 or later.
%% NPIV requires IBM i 7.1 TR6 (Technology Refresh 6).
For more details, use the System Storage Interoperability Center: www.ibm.com/systems/support/storage/config/ssic/ Note: there are currently some differences between the above table and the SSIC; the SSIC should be updated to reflect the above information.

IBM PowerVM Virtual Ethernet (diagram): the Virtual I/O Server has a physical CMN adapter and a Shared Ethernet Adapter; client partitions have virtual CMN adapters on a VLAN-aware Ethernet switch inside the PowerVM Hypervisor.
- Shared Ethernet Adapter: part of the VIO server; a logical device that bridges traffic to and from external networks.
- PowerVM Ethernet switch: part of the PowerVM Hypervisor; moves data between LPARs.
- Additional capabilities: VLAN awareness, link aggregation for external networks, and SEA failover for redundancy.
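Creating the Shared Ethernet Adapter that bridges the internal virtual Ethernet to the physical network is one VIOS command. A sketch, assuming ent0 is the physical adapter and ent2 the trunked virtual adapter on default VLAN 1 (all names and the VLAN ID are illustrative):

```shell
# As padmin on the VIOS: ent0 = physical adapter, ent2 = virtual
# trunk adapter (adapter names are assumptions). Create the SEA:
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1

# Confirm the new SEA device (typically ent3) was created:
lsdev -type sea
```

For the redundancy the slide mentions, a second VIOS carries a matching SEA with a control channel configured for SEA failover.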

PowerVM Active Memory Sharing Reduce memory costs by improving memory utilization on Power servers:
- Supports over-commitment of logical memory, with overflow going to a paging device.
- Intelligently flows memory from one partition to another for increased utilization and flexibility.
- Memory from a shared physical memory pool is dynamically allocated among logical partitions as needed to optimize overall memory usage.
- Designed for partitions with variable memory requirements.
- Requires PowerVM Enterprise Edition on POWER6 and POWER7 processor-based systems; partitions must use VIOS for I/O virtualization.
* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

LPAR Suspend/Resume Customer value:
- Resource balancing for long-running batch jobs, e.g. suspend lower-priority and/or long-running workloads to free resources.
- Planned CEC outages for maintenance/upgrades: suspend/resume may be used in place of, or in conjunction with, partition mobility, and may require less time and effort than a manual database shutdown and restart, for example.
Requirements: all I/O is virtualized; HMC V7R7.3; firmware Ax730_xxx; IBM i 7.1 TR2; VIOS 2.2.1.0 FP24 SP2.

Live Partition Mobility Move a running partition from one POWER7 server to another, across a virtualized SAN and network infrastructure, with no application downtime and no loss of service. Reduce planned downtime by moving workloads to another server during system maintenance, and rebalance processing power across servers when and where you need it. Live Partition Mobility requires the purchase of the optional PowerVM Enterprise Edition.

Requirements Software: HMC V7R7.5; firmware service pack 730_51, 740_40, or later; PowerVM Enterprise Edition; VIOS 2.2.1.4; supported client operating system IBM i 7.1 TR4. I/O: all I/O through the VIOS (VSCSI, NPIV, virtual Ethernet). External storage: the same storage attached to both source and destination. Hardware: POWER7 tower/rack, with both source and destination on the same Ethernet network.
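From the HMC command line, a Live Partition Mobility operation is normally validated first and then executed with `migrlpar`. A sketch; the managed system and partition names are placeholders:

```shell
# On the HMC: validate that the partition can move from the source
# managed system to the target (srcSys/dstSys/IBMI_PROD assumed):
migrlpar -o v -m srcSys -t dstSys -p IBMI_PROD

# If validation passes, perform the active migration:
migrlpar -o m -m srcSys -t dstSys -p IBMI_PROD
```

Validation checks the requirements listed above (VIOS levels, shared storage, network reachability) before any state is moved.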

Live Partition Mobility (diagram): partition mobility is supported on POWER7 with IBM i 7.1 TR4. An IBM i client partition on POWER7 System #1, with its virtual SCSI and Ethernet served by a VIOS (VASI device, mover service, SEA) and its disks on a shared storage subsystem, is moved under HMC control to a shell partition on POWER7 System #2. The sequence: validate the environment; create a shell partition and the appropriate virtual SCSI devices on the target system; start migrating memory pages; once enough memory pages have been moved, suspend the partition on the source system; finish the migration and remove the original LPAR definitions and resources.

Performance Considerations Active partition migration involves moving the state of a partition from one system to another while the partition is still running. Partition memory state is tracked while being transferred to the destination system, and multiple memory transfers are done until a sufficient amount of clean pages have been moved. Memory updates on the source system affect transfer time, so reduce the partition's memory update activity prior to the migration. Network speed also affects transfer time: use a dedicated network if possible, at at least 1 Gb speed, and possibly use link-aggregated ports for more bandwidth.

Application impacts during migration In general, applications and the operating system are unaware that the partition is moved from one system to another. There are some exceptions: for example, Collection Services; when the partition starts to run on the target system, the Collection Services collector job will cycle the collection so that correct hardware information is recorded on the target system.

PowerVM VIOS Shared Storage Pools Extending storage virtualization beyond a single system (diagram):
- Classic storage virtualization: storage pooled at the VIOS for a single system; enables dynamic storage allocation; supports local and SAN storage, IBM and non-IBM.
- NextGen clustered storage virtualization: the storage pool spans multiple VIOSs and servers; an enabler for federated management; location transparency and advanced capabilities; supports SAN and NAS storage, IBM and non-IBM.
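Building the clustered pool and carving client disks out of it is done from the VIOS command line. A sketch under assumed names (cluster sspclus, pool sspool, disks hdisk1-hdisk3, node vios1, adapter vhost0):

```shell
# On the first VIOS node: create the cluster with a repository disk
# and pool disks (all names here are assumptions):
cluster -create -clustername sspclus -repopvs hdisk1 \
        -spname sspool -sppvs hdisk2 hdisk3 -hostname vios1

# Create a thin-provisioned 50 GB logical unit from the pool and
# map it to the client's virtual SCSI adapter in one step:
mkbdsp -clustername sspclus -sp sspool 50G -bd ibmi_lu0 -vadapter vhost0
```

Because the pool spans every VIOS in the cluster, the same logical unit can later be mapped from another node, which is what enables the location transparency described above.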

VIOS 2.2 - Integrated Storage Virtualization (diagram): integrated storage virtualization increases platform value by bringing server and storage administration together under integrated Director management. Client benefits: automated storage provisioning, non-disruptive storage lifecycle management, fewer interactions between management domains, simplified and integrated Director management, consolidated backup, advanced image management, and consistent capabilities across different storage, which means decreased complexity and cost. Integrated storage capabilities include storage pooling, migration, copy services, caching, thin provisioning, geo mirroring, snapshots and clones, storage aggregation, and storage mobility, across heterogeneous SAN storage (IBM, EMC, Hitachi, other) and NAS (SOFS, NetApp, EMC, other).

VMControl Editions: add value to PowerVM on Power Systems.
- VMControl Express Edition manage resources: create/manage virtual machines (x86, PowerVM, and z/VM); virtual machine relocation.
- VMControl Standard Edition automate virtual images: capture/import and create/remove standardized virtual images; deploy standard virtual images; maintain virtual images in a centralized library.
- VMControl Enterprise Edition optimize system pools: create/remove system pools and manage system pool resources; add/remove physical servers within system pools.

System Pools within IBM Systems Director Managing a pool of system resources with single-system simplicity. System pools are being integrated as a new type of system within the IBM Systems Director tools, allowing a pool to be managed as a single logical entity in the data center, optimized for availability, performance, and energy. A dashboard view for system pools will provide an overall view of the health and status of the pool and its deployed workloads, with simplified monitoring and visualization of the aggregate capacity and utilization of the systems within the pool.

System Pool support for IBM i Images Technical overview: supports the IBM i operating system on the POWER platform. All system pool operations are supported: deploy, capture, relocate, optimize. It is assumed that the image meets the hardware/PTF requirements when using the GUI to do a deploy. From the CLI/REST interfaces, you have the ability to mark that an image is not relocatable so that it will not be moved. Hardware/software requirements: POWER7 hardware at firmware release 740.40 or 730.51, managed by an IBM Hardware Management Console at V7R7.5.0M0 or later; an IBM i image at 7.1 TR4 or later with PTF SI45682. Reference information: PTF information: http://www-912.ibm.com/a_dir/as4ptf.nsf/allptfs/si45682 Info Center information: https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/ibm i Technology Updates/page/Live Partition Mobility Restrictions: today, we are unable to determine whether an image has the proper prerequisites; the relocation will fail during relocation if the prerequisites are not met.

NPIV Support in System Pools Technical overview: NPIV, or N_Port ID Virtualization, is now fully supported in system pools on the POWER platform. Deploy to a system pool for a single- or multi-disk virtual appliance is now supported. System pools can contain both VSCSI- and NPIV-attached disks, and these will be handled appropriately in relocation and optimization functions. Virtual appliances can describe disks that are NPIV- and VSCSI-attached (mixed) for both deploy and capture. Image repositories can be hosted on NPIV-attached storage. Note: the storage connectivity is not preserved in the virtual appliance during capture. Restrictions: NPIV is only supported on SAN storage; when editing disks for a virtual server you cannot switch from VSCSI to NPIV; IBM i will only support this with SVC, Storwize V7000, V3700, or Flex V7000.

Analyst commentary on PowerVM with POWER7 "A data center scaling out to a cloud-supporting infrastructure, or supporting multiple applications placing varying demands on system resources, would have to purchase, deploy, provision, and maintain a good deal more hardware and software with a VMware-based solution to achieve the same workload productivity possible with PowerVM on POWER7." Barry Cohen, CTO, Edison Group, 2012

PowerVM Client Success: GHY International Consolidating infrastructure benefits a midsize business. Business challenge: predicting that international trade would increase as economic conditions improved, customs brokerage GHY International wanted to update its IT infrastructure to provide headroom for business growth. Solution: GHY International deployed an IBM Power 750 running IBM AIX, IBM i, and Linux on a single POWER7 system using IBM PowerVM, with a separate IBM System x3850 and VMware environment for Windows. Benefits: enhanced scalability (the IBM Power 750 delivers over four times the capacity of the previous server); easy manageability (a four-person IT team now spends just 5 percent, versus 95 percent, of its time on server management); better energy efficiency (three operating systems running on one box reduce electricity and cooling requirements). "With PowerVM, we went from 95 percent to only 5 percent of our time managing or reacting to our environment, and saved the business hundreds of thousands of dollars in licensing and application fees." Nigel Fortlage, vice president of IT and CIO, GHY International

PowerVM on POWER7 delivers better scale-up and higher throughput performance than VMware vSphere (chart): in the AIM7 single-VM scale-up test (jobs/min versus 1 to 32 vCPUs), PowerVM on a Power 750 (32 cores, 8 cores/chip) delivers superior scale-up efficiency, outperforming vSphere 4.1 by 103% and vSphere 5.0 by up to 131% on an HP ProLiant DL580 G7 (Westmere-EX, Xeon E7-4870, 40 cores, 10 cores/chip) running the same workloads across virtualized resources. vSphere 5.0 is no better than vSphere 4.1, and the PowerVM advantage increases as we scale up. * A Comparison of PowerVM and VMware vSphere (4.1 & 5.0) Virtualization Performance, January 2012: https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=stg-web&s_pkg=us-en-po-ar-edison&s_cmp=web-ibm-po-_-ws-powervm

PowerVM on POWER7 delivers better scale-out and higher throughput performance than VMware vSphere (chart): in the AIM7 multiple-VM scale-out test (32 vCPUs per VM, 8 VMs, jobs/min), PowerVM on a Power 750 (32 cores, 8 cores/chip) outperforms VMware by up to 525% when running multiple VMs and workloads. PowerVM maximizes workload performance and all system resources; vSphere 5.0 on the HP ProLiant DL580 G7 (Westmere-EX, Xeon E7-4870, 40 cores, 10 cores/chip) has more cores but still can't compete with PowerVM. * A Comparison of PowerVM and VMware vSphere (4.1 & 5.0) Virtualization Performance, January 2012: https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=stg-web&s_pkg=us-en-po-ar-edison&s_cmp=web-ibm-po-_-ws-powervm

PowerVM and POWER7 deliver a level of integration unmatched by VMware and x86

Client need: High Performance
- PowerVM: Built-in hypervisor means all industry-leading Power Systems benchmarks are fully virtualized
- VMware vSphere 5.0/5.1: Degrades x86 workload performance by up to 30% compared to bare metal

Client need: Elastic Scalability
- PowerVM: Scales to support the most demanding mission-critical enterprise workloads
- VMware vSphere 5.0/5.1: Imposes constraints that limit virtualization to small and medium workloads

Client need: Extreme Flexibility
- PowerVM: Dynamically reallocates CPU, memory, storage, and I/O without impacting workloads
- VMware vSphere 5.0/5.1: Limited hot-add of CPU and memory, with high risk of workload failures

Client need: Maximum Security
- PowerVM: Embedded in Power Systems firmware and protected by secure access controls and encryption
- VMware vSphere 5.0/5.1: Downloaded software exposes more attack surfaces, with many published vulnerabilities

Client need: Platform Integration
- PowerVM: Designed in sync with POWER processor and platform architecture road maps
- VMware vSphere 5.0/5.1: A third-party software add-on, developed in isolation from the processor and systems

Learn more about PowerVM on the Web: http://www.ibm.com/systems/power/software/virtualization (or Google "PowerVM" and click "I'm Feeling Lucky"). PowerVM resources include white papers, demos, client references, and Redbooks.

Resources and references

Techdocs (presentations, tips and techniques, white papers, and more)
  http://www.ibm.com/support/techdocs
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
  http://www.redbooks.ibm.com/abstracts/sg247940.html?open
IBM PowerVM Virtualization Managing and Monitoring, SG24-7590
  http://www.redbooks.ibm.com/abstracts/sg247590.html?open
IBM PowerVM Virtualization Active Memory Sharing, REDP-4470
  http://www.redbooks.ibm.com/abstracts/redp4470.html?open
IBM System p Advanced POWER Virtualization (PowerVM) Best Practices, REDP-4194
  http://www.redbooks.ibm.com/abstracts/redp4194.html?open
Power Systems: Virtual I/O Server and Integrated Virtualization Manager commands (iphcg.pdf)
  http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphcg/iphcg.pdf

Trademarks and Disclaimers

© IBM Corporation 1994-2008. All rights reserved.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

Trademarks of International Business Machines Corporation in the United States, other countries, or both can be found on the World Wide Web at http://www.ibm.com/legal/copytrade.shtml.

Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, other countries, or both.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce. ITIL is a registered trademark, and a registered community trademark, of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Cell Broadband Engine and Cell/B.E. are trademarks of Sony Computer Entertainment, Inc., in the United States, other countries, or both, and are used under license therefrom.

Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.
The customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function, or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.
Prices are suggested U.S. list prices and are subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.