IBM i Virtualization and Open Storage. Mike Schambureck, IBM Lab Services, Rochester, MN




Partition Virtualization on POWER
[Diagram: three I/O virtualization approaches compared side by side. (1) I/O virtualization with dedicated adapters: each LPAR (A, B) owns its physical adapter and device driver. (2) I/O virtualization with a hosting server: client LPARs use virtual client adapters served by a hosting LPAR's physical adapter through the hypervisor. (3) Virtual fabric: multi-function PCI adapters present functions to LPARs through the hypervisor and out to the fabric. Adapter bandwidth and LPAR density per slot increase across the three approaches.]

Partition Virtualization concepts / benefits
Virtualization allows you to use the same physical adapter across several partitions simultaneously:
- For storage: disk, tape, optical
- For Ethernet
Benefits:
- Reduces your hardware costs
- Better hardware utilization
- Lets you take advantage of new capabilities

IBM i Host and Client Partitions: Overview
[Diagram: an IBM i host LPAR (integrated disks or SAN, NWSSTG objects, OPTxx DVD) serves an IBM i client LPAR (OPTxx, CMNxx, DDxx devices) over virtual SCSI and virtual LAN connections.]
- Hardware is assigned to the host LPAR in the HMC
- The hosting server's DASD can be integrated or SAN
- DASD is virtualized as NWSSTG objects tied to network server descriptions (NWSDs)
- Optical: the DVD drive in the host LPAR is virtualized directly (OPTxx)
- Networking: a network adapter and a virtual Ethernet adapter in the host LPAR; a virtual Ethernet adapter in the client LPAR

VIO Server and Client Partitions: Overview
[Diagram: a VIOS host LPAR (integrated disks or SAN, hdisk## devices, DVD) serves an IBM i client LPAR (OPT##, DD##, CMN## devices) over virtual SCSI and virtual LAN connections.]
- Hardware is assigned to the VIOS LPAR in the HMC
- DASD can be integrated or SAN
- Each hdisk## is virtualized as an IBM i DD## device
- Optical: the DVD drive in the VIOS LPAR is virtualized directly (OPT##)
- Networking: a network adapter and a virtual Ethernet adapter in the VIOS LPAR; a virtual Ethernet adapter in the IBM i client LPAR

IBM i Innovative Technology

Integrated Server Virtualization concepts / benefits
Virtualization also allows IBM i to host x86 operating systems:
- For storage: disk (also uses network storage spaces), tape, optical
- For Ethernet
Benefits:
- Take advantage of IBM i ease of use and legendary reliability
- Designed to pool resources and optimize their use across a variety of operating systems
- Centralize storage and server management
- Take advantage of IBM i save/restore interfaces for x86 data: object level (storage space) or file level (Windows only)

Where Do I Start with Virtualization on IBM i on Power Systems? Latest versions at:
http://www.ibm.com/systems/resources/systems_i_virtualization_open_storage.pdf
http://www.ibm.com/systems/resources/systems_power_hardware_blades_i_on_blade_readme.pdf
https://www.ibm.com/developerworks/community/wikis/home?lang=en#/wiki/ibm%20i%20technology%20updates/page/ibm%20i%20on%20a%20flex%20compute%20node

Virtual SCSI (vscsi): IBM i hosting IBM i, or VIOS hosting IBM i
[Diagram: a hosting server with an FC HBA serves three IBM i client partitions (Systems 1-3) through the hypervisor; each client sees a 6B22 device-type virtual adapter.]
- Assign storage to the physical adapter in the hosting partition
- Requires 512-byte-per-sector LUNs to be assigned to the host
- Many storage options supported
- Available as of POWER6 with IBM i 6.1.1

vscsi Storage Mapping
- Storage management and allocation are done from both the external storage and the IBM i/VIOS side
- Storage is assigned to the hosting IBM i/VIOS partition
- Within VIOS, you map each hdisk# (LUN) to the vhost adapter corresponding to the client partition
- Within an IBM i host, you map storage spaces (NWSSTG) to the network server description (NWSD) tied to the client partition (a command-line sketch follows)
- Flexible disk sizes; load source requirements apply
- 16 disks per vscsi adapter, just increased to 32 in IBM i 7.1 TR8 / 7.2
- Available as of POWER6 with IBM i 6.1.1
[Diagram: hdisk1 and hdisk2 mapped through a VSCSI server adapter (vhostXXX) in the hosting server, via the hypervisor, to a 6B22 device-type VSCSI client adapter in the IBM i client.]
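As a minimal sketch of both hosting styles (device and object names such as hdisk1, vhost0, and CLIENT1 are placeholders, not from the original deck), the VIOS-side mapping uses mkvdev, and the IBM i-side equivalent uses CL commands:

  $ lsmap -all                              # list vhost adapters and their current mappings
  $ mkvdev -vdev hdisk1 -vadapter vhost0    # map hdisk1 to the client behind vhost0

  CRTNWSSTG NWSSTG(CLIENT1D1) NWSSIZE(102400) FORMAT(*OPEN)  /* 100 GB storage space          */
  ADDNWSSTGL NWSSTG(CLIENT1D1) NWSD(CLIENT1)                 /* link it to the client's NWSD  */
  VRYCFG CFGOBJ(CLIENT1) CFGTYPE(*NWS) STATUS(*ON)           /* vary on to present the disk   */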

vscsi for optical
- The drive is assigned to the hosting partition
- Within VIOS, you map a physical tape or optical drive, or file-backed virtual optical media, to the vhost adapter corresponding to the client partition (see the sketch below)
- IBM i hosting automatically maps optical and tape resources to the client over the vscsi adapter
- VIOS has no tape library support with vscsi adapters; you must use VFC adapters
- Available as of POWER6 with IBM i 6.1.1
[Diagram: cd0, cd1, and rmt1 in the hosting server mapped through vhostXXX to OPT01 in the IBM i client.]
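For the file-backed virtual optical case on VIOS, a minimal sketch (the repository size, ISO name, and vhost0 are placeholders):

  $ mkrep -sp rootvg -size 20G                               # one-time: create the virtual media repository
  $ mkvopt -name ibmi_base -file /home/padmin/base.iso -ro   # import an ISO as read-only media
  $ mkvdev -fbo -vadapter vhost0                             # create a file-backed optical device (vtoptX)
  $ loadopt -disk ibmi_base -vtd vtopt0                      # load media; it surfaces as OPTxx in the client
  $ unloadopt -vtd vtopt0                                    # unload when finished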

Create Virtual SCSI Client Adapter

Create the Virtual SCSI Server Adapter
- Update the LPAR profile or perform a dynamic LPAR operation
- Specify the IBM i LPAR
- Specify the adapter ID used when creating the client adapter in IBM i

Assigning VIOS Storage to IBM i
[Diagram: VIOS serves SAN storage volumes over virtual SCSI connections to two IBM i LPARs (vhost0/vtscsixx and vhost1/vtscsiyy presenting DDxx devices); maximum of 32* virtual devices per connection.]
- VIOS: create virtual SCSI server adapters in the VIOS partition profile
- VIOS: create virtual SCSI client adapters in the client IBM i partition profile
- VIOS: assign storage volumes to IBM i client partitions (HMC or command line; a sketch follows)
- IBM i: initialize and add disks to the ASP (from SST)
* Requires IBM i 7.1 TR8 or 7.2
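These adapter pairs are usually created in the HMC GUI; as a rough sketch, the equivalent HMC command-line DLPAR operations look like this (the managed system, partition names, and slot numbers are placeholders):

  $ chhwres -r virtualio -m POWER7-SYS -o a -p VIOS1 --rsubtype scsi -s 5 \
      -a "adapter_type=server,remote_lpar_name=IBMI1,remote_slot_num=4"   # server adapter in VIOS
  $ chhwres -r virtualio -m POWER7-SYS -o a -p IBMI1 --rsubtype scsi -s 4 \
      -a "adapter_type=client,remote_lpar_name=VIOS1,remote_slot_num=5"   # client adapter in IBM i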

Use HMC Virtual Storage Management to view storage in VIOS

View on the HMC and VIOS Command Line

Virtual Storage Management: Map Disk to IBM i Client
Option 2: VIOS command line:
mkvdev -vdev hdisk1 -vadapter vhost0
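To confirm the mapping, lsmap shows the virtual target device that mkvdev created (vhost0 is a placeholder name):

  $ lsmap -vadapter vhost0    # shows the vhost adapter, its backing hdisk, and the vtscsi target device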

IBM i + NPIV (Virtual Fibre Channel, VFC)
[Diagram: VIOS with an 8 Gb HBA serving three IBM i client partitions over virtual fibre channel through the hypervisor.]
- The hypervisor assigns 2 unique WWPNs to each virtual fibre channel adapter (virtual address example: C001234567890001)
- The host on the SAN is created as an iSeries host type
- Requires 520-byte-per-sector LUNs to be assigned to the iSeries host on DS8K
- Existing direct-connect LUNs can be migrated
- DS8100, DS8300, DS8700, DS8800, DS5100, DS5300, V7000, SVC, V3700 and V3500 are supported
- Available as of POWER6 with IBM i 6.1.1
Note: an NPIV (N_Port) capable switch is required to connect the VIOS to the SAN/tape library to use virtual fibre channel.

Requirements for NPIV with VIOS and IBM i Client Partitions
- Must use 8 Gb or 16 Gb fibre channel adapters in the Power system, assigned to VIOS partitions
- Must use a fibre channel switch to connect the Power system and the storage server
- The fibre channel switch must be NPIV-capable
- The storage server must support NPIV as an attachment between VIOS and IBM i (supported systems are listed on an upcoming slide)

NPIV Configuration - Limitations
- Single client adapter per physical port per partition: intended to avoid a single point of failure; a documented guideline only, not enforced
- Maximum of 64 active client connections per physical port: it is possible to map more than 64 clients to a single adapter port, and the effective limit may be lower due to other VIOS resource constraints
- 32K unique WWPN pairs per system platform: removing an adapter does not reclaim its WWPNs; they can be manually reclaimed through the CLI (mksyscfg, chhwres) via the virtual_fc_adapters attribute (see the sketch below); if exhausted, an activation code for more must be purchased
- Device limitations: maximum of 128 visible target ports; not all visible target ports will necessarily be active (e.g. redundant paths to a single DS8000 node, device-level port configuration), and inactive target ports still consume client adapter resources; maximum of 64 target devices, in any combination of disk and tape (tape libraries and tape drives are counted separately)
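For example, the WWPN pairs behind a partition profile's VFC adapters can be listed from the HMC command line before deciding what to reclaim; a sketch, with the system and partition names as placeholders:

  $ lssyscfg -r prof -m POWER7-SYS --filter lpar_names=IBMI1 -F virtual_fc_adapters
  # each adapter entry shows its slot, the serving VIOS, and its two assigned WWPNs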

Create VFC Client Adapter in IBM i Partition Profile
[Screenshot callouts: the checkbox needs to be checked; specify the VIOS LPAR.]

VFC Client Adapter Properties
[Screenshot callout: the virtual WWPNs shown here are used to configure hosts on the storage server.]

Disk and Tape Virtualization with NPIV: Assign Storage
- Use the HMC to assign the IBM i LPAR and VFC adapter pair to a physical FC port

Disk and Tape Virtualization with NPIV: Configure SAN
- Complete zoning on your switch using the virtual WWPNs generated for the IBM i LPAR
- Configure a host connection on the SAN tied to the virtual WWPN (a CLI sketch follows)
- Use the storage or tape library UI (and the relevant Redbook) to assign LUNs or tape drives to the WWPN from the VFC client adapter in the IBM i LPAR
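On SVC/Storwize-family storage, for instance, the host connection and LUN mapping can be scripted from the CLI; a sketch only, with the host name, WWPN, and volume name as placeholders:

  $ svctask mkhost -name IBMI1 -fcwwpn C050760A1B2C0001   # host object for the client's virtual WWPN
  $ svctask mkvdiskhostmap -host IBMI1 IBMI1_SYSBAS_001    # map a volume (LUN) to that host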

Redundant VIOS with NPIV
[Diagram: an IBM i client (SYSBAS and IASP) on POWER6 with client VFC adapters served by two VIOS partitions, each with server VFC adapters and physical FC connections.]
Step 1: configure virtual and physical FC adapters
- Best practice is to make VIOS redundant, separating individual VIOS partitions so that a single hardware failure cannot take down both VIOS partitions
Step 2: configure SAN fabric and storage
- Zone LUNs to the virtual WWPNs; each DASD unit sees a path through 2 VIOS partitions
Notes:
- Up to 8 paths per LUN are supported
- Not all paths have to go through separate VIOS partitions

Connecting IBM i to VIOS Storage: VSCSI vs. NPIV
[Diagram: with VSCSI, IBM i sees generic SCSI disks behind VIOS (SCSI protocol); with NPIV, IBM i sees the real device types (e.g. EMC, V7000, DS8000) through the VIOS FC HBAs (FCP); both VIOS configurations attach to the SAN (XIV, DS3500, DS8000, V7000).]
VSCSI:
- All storage subsystems* and internal storage supported
- Storage is assigned to VIOS first, then virtualized to IBM i
NPIV:
- Some storage subsystems and some FC tape libraries supported
- Storage is mapped directly to the virtual FC adapter in IBM i, which uses an N_Port on the FC adapter in VIOS
* See the following charts for the list of IBM supported storage devices

Support for IBM Storage Systems with IBM i (table as of April 2014)

Rack/Tower systems (IBM i 6.1/7.1 on POWER6/7, except as noted):
- DS3200 / DS3400 / DS3500 / DCS3700 / DS3950: VIOS VSCSI (not DS3200#; yes DS3500##)
- DS4700 / DS4800 / DS5020: VIOS VSCSI
- SVC / Storwize V7000 / V3700 / V3500: Direct* or VIOS, VSCSI and NPIV%%
- DS5100 / DS5300: Direct* or VIOS, VSCSI and NPIV%
- XIV: VIOS VSCSI
- DS8100 / DS8300 / DS8700 / DS8800 / DS8870: IBM i 5.4/6.1/7.1 on POWER5/6/7; Direct or VIOS, VSCSI and NPIV**

Power Blades (IBM i 6.1/7.1 on POWER6/7):
- DS3200 / DS3400 / DS3500 / DCS3700 / DS3950 @, #, ##: VIOS VSCSI
- DS4700 / DS4800 / DS5020 (BCH): VIOS VSCSI
- SVC / Storwize V7000 / V3700 / V3500 (BCH): VIOS VSCSI
- DS5100 / DS5300 (BCH): VIOS VSCSI and NPIV%
- XIV (BCH): VIOS VSCSI
- DS8100 / DS8300 / DS8700 / DS8800 / DS8870 (BCH): VIOS VSCSI and NPIV**

PureFlex nodes (IBM i 6.1/7.1 on POWER7/7+):
- SVC / Storwize V7000: VIOS VSCSI (for V7000)
- All other listed families: attach behind a V7000

Flex nodes (IBM i 6.1/7.1 on POWER7/7+):
- DS3200 / DS3400 / DS3500 / DCS3700 / DS3950: VIOS VSCSI
- DS4700 / DS4800 / DS5020: VIOS VSCSI
- SVC / Storwize V7000 / V3700: VIOS VSCSI or NPIV%%
- DS5100 / DS5300: Native* or VIOS, VSCSI and NPIV%
- XIV: VIOS VSCSI
- DS8100 / DS8300 / DS8700 / DS8800 / DS8870: Direct or VIOS, VSCSI and NPIV**

Use Disk Magic to evaluate the SAN's performance and configuration. The legend is on the next slide.

Support for IBM Storage Systems with IBM i: Notes
- This table does not list more detailed considerations, for example required firmware or PTF levels, or configuration performance considerations
- POWER7 servers require IBM i 6.1 or later
- This table can change over time as additional hardware/software capabilities/options are added
# DS3200 supports only SAS connection: not supported on Rack/Tower servers, which use only Fibre Channel connections; supported on Blades with SAS
## DS3500 has either SAS or Fibre Channel connection. Rack/Tower uses only Fibre Channel. Blades in BCH support either SAS or Fibre Channel. Blades in BCS use only SAS.
### Not supported on IBM i 7.1, but see SCORE System RPQ 846-15284 for exception support
* Supported with Smart Fibre Channel adapters; NOT supported with IOP-based Fibre Channel adapters
** NPIV requires Machine Code level 6.1.1 or later, and NPIV-capable HBAs (FC adapters) and switches
@ BCH supports DS3400, DS3500, DS3950; BCS supports DS3200, DS3500
% NPIV requires IBM i 7.1 TR2 (Technology Refresh 2) and the latest firmware released May 2011 or later
%% NPIV requires IBM i 7.1 TR6 (Technology Refresh 6)
For more details, use the System Storage Interoperability Center: www.ibm.com/systems/support/storage/config/ssic/
Note: there are currently some differences between the above table and the SSIC; the SSIC should be updated to reflect the above information.

Set Tagged I/O: Specify the Client SCSI Adapter for the Load Source
- Load source: the client adapter created on the previous slides (client SCSI or client VFC)
- Alternate restart device: the physical CD/DVD, or the client SCSI adapter if virtualizing the device

A PC5250 emulator is used for the IBM i console, just as with the HMC.

6B25 Adapter Look & Feel
- Similar in look and feel to other IOPless storage adapters
- Attached device resources have real hardware CCINs

Virtual Ethernet
PowerVM Hypervisor Ethernet switch:
- Part of every Power server
- Moves Ethernet data between LPARs
- Can separate traffic based on VLANs
Shared Ethernet Adapter (SEA):
- Part of the VIO server; a logical device
- Bridges traffic to and from external networks
- VLAN aware
- Link aggregation for external networks
- SEA failover for redundancy
IBM i Bridge Adapter:
- Bridges traffic to and from external networks
- Introduced in IBM i 7.1 TR3
[Diagram: a hosting server bridges its physical Ethernet adapter to the virtual adapters (CMN) of client partitions 1 and 2 through the PowerVM hypervisor's VLAN-aware Ethernet switch.]

SEA Failover Configuration for Redundant VIOSs
[Diagram: two VIOS partitions, each with a physical Ethernet adapter and virtual Ethernet adapters backing an SEA; both connect through the hypervisor to the client network.]

SEA Failover and Link Aggregation
- Create a 2nd VIOS
- Each VIOS has an SEA adapter*
- Each VIOS has a link aggregation
- A control channel is created between the 2 VIOS partitions
- Note: one SEA adapter must have a lower priority at creation**
Failover and redundancy:
- VIOS 1A could be taken down for maintenance; VIOS 1B would take over the network traffic
- A broken cable or a failed adapter, for example, would not disrupt Ethernet traffic
[Diagram: VIOS 1A (primary) and VIOS 1B (standby), each with physical ports aggregated under an SEA, a control channel on VLAN 99 (PVID 99), and an IBM i client (CMN0) on PVID 1, connected through the hypervisor to redundant switches. A VIOS command-line sketch follows.]
* The HMC must have "Access external networks" checked for the ent2 virtual adapter on the VIOSs!
** Only 1 virtual Ethernet adapter used for the SEA (ent2) can have a priority of 1 on the HMC.
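On the VIOS command line, the aggregation and the failover SEA are created along these lines; a sketch only, where all entX names, the PVID, and the control channel adapter are placeholders that must match your own adapter layout:

  $ mkvdev -lnagg ent0,ent1                                 # aggregate two physical ports (creates e.g. ent5)
  $ mkvdev -sea ent5 -vadapter ent2 -default ent2 -defaultid 1 \
      -attr ha_mode=auto ctl_chan=ent7                      # SEA over trunk adapter ent2, control channel ent7
  # Repeat on the second VIOS; the trunk adapter priority set on the HMC decides which SEA is primary.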

HMC: Hosting Partition Bridge Ethernet Adapter
- Note the setting "Access external network": it is required for a Shared Ethernet Adapter
- Client partitions use the same VLAN ID

Create Virtual Adapter in Client Partition
- The VLAN ID needs to match the VLAN ID in VIOS

HMC: VIOS - Create Shared Ethernet Adapter

Select Physical Adapter for SEA

Create Virtual Adapter Control Channel
- A control channel is created to allow the primary VIOS to communicate with the secondary VIOS so that a failover can occur if the primary VIOS is unavailable
- The control channel is a virtual Ethernet adapter pair (one on each VIOS) linked to the SEA on that VIOS
- Heartbeat messages are passed from the primary to the secondary VIOS over a separate VLAN (PVID)
- The control channel must be created before the failover SEA is created on the secondary VIOS; the operation will fail if the control channel doesn't exist

Create Failover SEA on Secondary VIOS

View of Both VLANs from the HMC
- VLAN 1: the SEAs
- VLAN 99: the control channel

Dual SEAs
- Another option is to create shared Ethernet adapters (SEAs) in each VIOS and make them peers (not primary/secondary); this is also referred to as load sharing
- The HMC does not support this feature yet, so you need to use the VIOS command line
- Set ha_mode=sharing when creating the SEAs from the VIOS command line
- If changing existing SEAs that were previously set to primary/secondary, make sure you change the ha_mode attribute on the primary first:
chdev -dev entX -attr ha_mode=sharing
(entX is the name of the shared Ethernet adapter)

10 Gb Shared Ethernet Adapter Performance
- 10 Gb SEAs put a much greater load on VIOS than 1 Gb SEAs; the current recommendation is 2 dedicated processors for VIOS partitions that virtualize 10 Gb SEAs
- Make sure the large-send attribute is turned on (at the TCP layer):
chdev -dev ent2 -attr large_send=yes -perm
- Make sure the flow control attribute is turned on:
chdev -dev ent2 -attr flow_ctrl=yes -perm
- Also turn on flow control on the network switch

IBM i bridge adapters
From the IBM i command line interface:
- Create an Ethernet line description for the physical Ethernet resource, and set its bridge identifier to your chosen bridge name
- Create an Ethernet line description for the selected virtual Ethernet resource, and set its bridge identifier to the same bridge name
- The virtual Ethernet adapter must have "Use this adapter to access the external network" selected
- When both line descriptions are varied on, traffic is bridged between the two networks, and any other partitions with virtual Ethernet adapters on the same VLAN as the new virtual Ethernet resource can access the same network as the physical Ethernet resource (see the CL sketch below)
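A minimal CL sketch of the two line descriptions (the line names, resource names, and the bridge name BRIDGE1 are placeholders):

  CRTLINETH LIND(ETHPHYS) RSRCNAME(CMN01) BRIDGE(BRIDGE1)   /* physical side of the bridge    */
  CRTLINETH LIND(ETHVIRT) RSRCNAME(CMN02) BRIDGE(BRIDGE1)   /* virtual side, same bridge name */
  VRYCFG CFGOBJ(ETHPHYS) CFGTYPE(*LIN) STATUS(*ON)
  VRYCFG CFGOBJ(ETHVIRT) CFGTYPE(*LIN) STATUS(*ON)          /* bridging starts once both are on */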

Virtual Ethernet Limits
- Maximum virtual Ethernet adapters per LPAR: 256
- Maximum number of VLANs per virtual adapter: 21 (20 VID, 1 PVID)
- Number of virtual adapters per single SEA sharing a single physical network adapter: 16
- Maximum number of VLAN IDs: 4094
- Maximum number of physical adapters in a link aggregation: 8 primary, 1 backup

Where do you have to run VIOS hosting IBM i?
- Power blades
- Power compute nodes

Take advantage of other VIOS capabilities

PowerVM Active Memory Sharing
Reduce memory costs by improving memory utilization on Power servers:
- Supports over-commitment of logical memory, with overflow going to a paging device
- Intelligently flows memory from one partition to another for increased utilization and flexibility
- Memory from a shared physical memory pool is dynamically allocated among logical partitions as needed to optimize overall memory usage
- Designed for partitions with variable memory requirements
Requirements:
- PowerVM Enterprise Edition on POWER6 and POWER7 processor-based systems
- Partitions must use VIOS for I/O virtualization
Make sure it's a good fit for you!
[Diagram: dedicated-memory partitions alongside shared-memory partitions drawing on a shared pool through the PowerVM hypervisor (AMS), with VIOS providing the paging device.]

LPAR Suspend/Resume
Customer value:
- Planned CEC outages for maintenance/upgrades: suspend/resume may be used in place of, or in conjunction with, partition mobility, and may require less time and effort than a manual database shutdown and restart, for example
- Resource balancing for long-running batch jobs: e.g. suspend lower-priority and/or long-running workloads to free resources
Minimum requirements:
- All I/O is virtualized
- HMC Version 7 Release 7.3
- FSP firmware: Ax730_xxx
- IBM i 7.1 TR2
- VIOS 2.2.1.0 FP24 SP2

Partition Suspend/Resume
[Diagram: on a POWER7 system, IBM i client partition 1 is suspended while VIOS (VASI, mover service, vhost0) writes its state to a reserved storage pool LUN on the storage subsystem.]
- Validate the environment for appropriate resources
- Ask the partition if it is ready for suspend
- Suspend the partition's CPU and I/O
- Move memory and CPU state to the storage pool
Partition suspend/resume is supported on POWER7 with IBM i 7.1 TR2.

PowerVM Live Partition Mobility
Move a running partition from one system to another with almost no impact to end users:
- Movement of the OS and applications to a different server with no loss of service
- Virtualized storage and network infrastructure
Requirements:
- POWER7 systems or later, PowerVM Enterprise, and all I/O through the Virtual I/O Server
- IBM i 7.1 with TR4 or newer
Potential benefits:
- Eliminate planned outages
- Balance workloads across systems
- Energy savings

Requirements & Planning
Source and destination must be mobility-capable and compatible:
- Enhanced hardware virtualization capabilities
- Identical or compatible processors
- Compatible firmware levels
Source and destination must be LAN connected, on the same subnet.
All resources (CPU, memory, I/O adapters) must be virtualized prior to migration:
- The hypervisor handles CPU and memory automatically, as required
- Virtual I/O adapters are pre-configured, and SAN-attached disks are accessed through the Virtual I/O Server (VIOS)
Source and destination VIOS must have symmetrical access to the partition's disks (e.g. no internal or VIOS LVM-based disks).
The OS is migration enabled/aware; certain tools and application middleware can benefit from being migration aware also.
[Diagram: an LPAR with boot, paging, and application data on SAN storage, managed by an HMC over the LAN.] An HMC command-line sketch follows.
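From the HMC command line, a mobility operation is normally validated before it is run; a sketch, with the managed system and partition names as placeholders:

  $ migrlpar -o v -m SRC-SYS -t DST-SYS -p IBMI1    # validate: checks compatibility and resources
  $ migrlpar -o m -m SRC-SYS -t DST-SYS -p IBMI1    # migrate the running partition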

Live Partition Mobility
[Diagram: POWER7 system #1 (source, partition suspended) and POWER7 system #2 (target, shell partition), both attached to the same storage subsystem, with the VIOS mover service partitions streaming memory pages under HMC control.]
- Validate the environment for appropriate resources
- Create a shell partition on the target system
- Start migrating memory pages via the VIOS mover service partitions
- Once enough memory pages have been moved, suspend the source LPAR and move the remaining pages
- Finish the migration and remove the virtual SCSI devices and partition definitions on the original system
Partition mobility is supported on POWER7 with IBM i 7.1 TR4.

Native-attached storage to IBM i
- No VIOS involved; adapters are cabled to Fibre Channel (FC) switch(es)
- Switches include zoning from the SAN to the IBM i partition
- Active paths are solid, passive paths are dotted [in the diagram]
- Allows failover recovery on loss of the primary node
- Requires IBM i 7.1 TR6 or newer
- Supported SANs: DS8000, DS5100/5300, V7000, V3700, V3500, SVC

Direct-attached storage to IBM i
- No VIOS involved; adapters are cabled directly to the SAN
- Active paths are solid, passive paths are dotted [in the diagram]
- Allows failover recovery on loss of the primary node
- This ties up host ports on the SAN (i.e. they can't be shared)
- Requires IBM i 7.1 TR6 or newer
- Supported SANs: DS8000, DS5100/5300, V7000, V3700, V3500, SVC

IBM i Virtualization Enhancements (virtualization by GA date)

June 2014 (IBM i 7.1 Technology Refresh 8; not on 6.1):
- SR-IOV native Ethernet support (i virtualization)
- Increase in vscsi disks per host adapter to 32 (i virtualization, VIOS)

March 2013 (IBM i 7.1 Technology Refresh 6; not on 6.1):
- NPIV attach of SVC, Storwize V7000, V3700, V3500 (VIOS)

October 2012 (IBM i 7.1 Technology Refresh 5; not on 6.1):
- Large receive offload for layer-2 bridging (VIOS)
- PowerVM V2.2 refresh with SSP and LPM updates (VIOS)

May 2012 (IBM i 7.1 Technology Refresh 4; not on 6.1):
- IBM i Live Partition Mobility (VIOS)
- HMC Remote Restart PRPQ (VIOS)
- Performance enhancement for zeroing virtual disk (i virtualization)

December 2011 (IBM i 7.1 Technology Refresh 3; not on 6.1):
- PowerVM V2.2 refresh with SSP enhancements (VIOS)

IBM i Virtualization Enhancements (continued)

October 2011 (IBM i 7.1 Technology Refresh 3):
- Ethernet layer-2 bridging (i virtualization; 7.1 only)
- Mirroring with NPIV-attached storage (VIOS; 7.1 only)
- VPM enhancements to create IBM i partitions (i virtualization; 7.1, and 6.1 client only)
- PowerVM NPIV attachment of DS5000 for blades (VIOS; 7.1 only)
- PowerVM V2.2 refresh with network load balancing (VIOS; 7.1 and 6.1)

May 2011 (IBM i 7.1 Technology Refresh 2):
- Partition suspend and resume (VIOS; 7.1 only)
- IBM i to IBM i virtual tape support (i virtualization; 7.1, and 6.1 client only via APAR II14615)
- PowerVM NPIV attachment of DS5000 (VIOS; 7.1 only)

December 2010 (IBM i 7.1 Technology Refresh 1):
- PowerVM with shared storage pools (VIOS; 7.1 only)

September 2010 (IBM i 7.1 Technology Refresh 1):
- Support for embedded media changers (i virtualization; 7.1 only)
- Expanded HBA and switch support for NPIV on blades (VIOS; 7.1 and 6.1)

The End Thank you!

Trademarks
The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both. Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark, nor does it mean that the product is not actively marketed or is not significant within its relevant market. Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States. For a complete list of IBM trademarks, see www.ibm.com/legal/copytrade.shtml: AS/400, e business (logo), DB2, ESCON, eServer, FICON, IBM, IBM (logo), iSeries, MVS, OS/390, pSeries, RS/6000, S/390, VM/ESA, VSE/ESA, WebSphere, xSeries, z/OS, zSeries, z/VM, System i, System i5, System p, System p5, System x, System z, System z9, BladeCenter
The following are trademarks or registered trademarks of other companies: Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both, and is used under license therefrom. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce. All other products may be trademarks or registered trademarks of their respective companies.
Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication was produced in the United States.
IBM may not offer the products, services, or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the products or services available in your area. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. Prices are subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.