IBM i on a POWER blade (read-me first)


Installation and Configuration Guide

Mike Schambureck and Keith Zblewski, IBM i Lab Services, Rochester, MN

April 2013

IBM i on an IBM POWER blade (read-me first)

Table of Contents

1 Overview and Concepts
   Logical Partitioning (LPAR)
   Overview of I/O concepts for IBM i on blade
   Review terminology
   Plan for necessary IP addresses
2 Hardware details
   Supported environments
   POWER Blades
      IBM BladeCenter PS701 Express
      IBM BladeCenter PS700 Express
      IBM BladeCenter PS702 Express
      IBM BladeCenter PS703 Express
      IBM BladeCenter PS704 Express
      IBM BladeCenter JS23 Express
      IBM BladeCenter JS43 Express
      IBM BladeCenter JS22
      IBM BladeCenter JS12
3 BladeCenter configuration and firmware updates
   Install the BladeCenter and blade server hardware
   Configure the Advanced Management Module (AMM)
      Initial AMM configuration (New chassis install)
      AMM user profiles
      Assigning an IP address to I/O modules
      AMM LAN configuration
   General download instructions for Fix Central
      Download BladeCenter management module firmware
      Download BladeCenter Fibre Channel I/O module firmware
      Download BladeCenter Ethernet I/O module firmware
      Download BladeCenter SAS I/O module firmware
      Download the BladeCenter S DSM firmware
   Update the BladeCenter firmware
      Update the AMM firmware
      Update the firmware on the BladeCenter I/O modules
      Update the firmware on the BladeCenter S DSMs
   Installing and configuring an Intelligent Copper Pass-through Module (ICPM)
4 Storage management concepts
   Storage concepts for POWER blade in BladeCenter H
      Storage concepts for JS12 and JS22 in BladeCenter H
      Storage concepts for JS23, JS43 and PSXX in BladeCenter H
   Storage concepts for POWER blade in BladeCenter S
      Storage concepts for JS12 and JS22 in BladeCenter S
      Storage concepts for JS23, JS43 and PS7xx in BladeCenter S
   Fibre Channel Configuration
      Best practices for BladeCenter H and Fibre Channel storage
      N-Port ID Virtualization (NPIV)
      Support statement for the U3 Storage Drawer LTO-5/6 Tape device
      Support statements and requirements for FC tape libraries
      NPIV on a POWER blade minimum requirements
      NPIV Supported SANs
      Fibre Channel over Converged Enhanced Ethernet (FCoCEE)
      Minimum requirements for FCoCEE
      Implementing FCoCEE
      Create LUNs for the IBM i partition(s) in BladeCenter H
      Multi-path I/O (MPIO)
      MPIO drivers
   SAS Configuration
      Best practices for BladeCenter S and SAS storage
      Best practices when using the SAS Connectivity Module
      Best practices when using the SAS RAID Controller Modules
      Configuring storage in BladeCenter S using the SAS Connectivity Module
      SAS I/O module configurations
      Activate a pre-defined SAS I/O module configuration
      Create a custom SAS I/O module configuration
      Configuring SAS SAN storage using a SAS Connectivity Module
      Configuring storage in BladeCenter S using the SAS RAID Controller Module
      SAS zoning for RSSM
      Configuring RSSM with Storage Configuration Manager
   Using the DVD drive in the BladeCenter with VIOS and IBM i
      USB 2.0 access
      Writing to DVD-RAM media in BladeCenter H
5 VIOS and IBM i installation and configuration
   Obtain the VIOS and IBM i installation media and fixes
   Install PuTTY (optional)
   Install SDMC (optional)
   SDMC discovery of POWER blades (optional)
   HMC discovery of POWER blades
   Planning for and installation of VIOS
      Memory recommendations for VIOS and the Power Hypervisor
      Processor recommendations for VIOS
      Using the media tray to install VIOS
      Preinstalled VIOS or non-HMC/SDMC install of VIOS
      Opening a console for VIOS using Serial-over-LAN (SOL) when using IVM
      Opening a console for VIOS using the AMM remote control interface
      Creating a VIOS partition using the HMC
      Creating a VIOS virtual server using SDMC
      Prerequisite Flexible Service Processor (FSP) IP configuration for HMC/SDMC access
      Opening a VIOS console using the HMC
      Opening a VIOS console using SDMC
   Powering on the blade from the AMM
   Powering on the blade from the HMC
   Powering on the blade from the SDMC
   System Reference Codes during boot
   Accessing the SMS menus
   Installing VIOS from DVD
   Installing VIOS from NIM
   Completing the install
   Mirroring of VIOS
   Configure networking in VIOS (if necessary)
   Update the system firmware on the SP of the POWER blade (if necessary)
   Update VIOS (if necessary)
      Updating VIOS using the VIOS CLI
      Updating VIOS using the HMC
      Updating VIOS using SDMC
   Update the microcode on the I/O expansion cards on the blade (if necessary) using the VIOS CLI
      Displaying the current microcode level of the expansion adapters on the blade
      Manually downloading the latest available level of expansion adapter microcode
      Manually updating the adapter microcode
      Updating adapter firmware using the HMC
      Updating adapter firmware using SDMC
   Verify disks for IBM i are reporting in VIOS
   Fibre Channel configuration commands
   Virtual Ethernet concepts and configuration
      Configuring VIOS Virtual Ethernet using the HMC
      Configuring VIOS Virtual Ethernet using the SDMC
      Configuring VIOS Virtual Ethernet using IVM
   Shared Ethernet Adapter (SEA) configuration
      Configure the Shared Ethernet Adapter (SEA) using the HMC
      Configure the Virtual Ethernet bridge for IBM i LAN console using IVM
      Configure the Virtual Ethernet bridge using the SDMC
      Configure the VIOS link aggregation
6 IBM i install and configuration
   Create the IBM i partition using IVM
   Create the IBM i client partition using the HMC
   Create the IBM i virtual server using SDMC
   Increasing the number of virtual adapters in the IBM i partition
      Using the VIOS CLI
      Using the HMC
      Using the SDMC
   Creating multiple Virtual SCSI adapters per IBM i partition
      Creating multiple Virtual SCSI adapters using the VIOS CLI
      Creating multiple Virtual SCSI adapters using the HMC
      Creating multiple Virtual SCSI adapters using the SDMC
      Mapping storage to new Virtual SCSI adapters using the VIOS CLI
      Mapping storage to new Virtual SCSI adapters using the HMC
      Mapping storage to new Virtual SCSI adapters using the SDMC
      Removing Virtual SCSI adapters using the VIOS CLI
      Removing Virtual SCSI adapters using SDMC
      Multiple Virtual SCSI adapters and virtual tape using IVM
   Configuring virtual tape using the HMC
   Configuring virtual tape using the SDMC
   End-to-end LUN mapping using the HMC
   End-to-end LUN mapping using the VIOS CLI
   End-to-end LUN mapping using SDMC
   NPIV configuration steps using the HMC/SDMC
   Install System i Access for Windows (IVM installed)
   Create the LAN console connection on the console PC (IVM install)
   Install IBM i using the IVM interface
   Installing IBM i using SDMC
   Configure mirroring in IBM i (if necessary)
      Disk protection for IBM i in BladeCenter H
      IBM i mirroring in BladeCenter S using SAS connectivity modules
      Identifying SAS disk units in different DSMs
      Configuring mirroring
   Install IBM i PTFs (if necessary)
7 Post-install tasks and considerations
   Configure IBM i networking
   Configure Electronic Customer Support (ECS) over LAN
   How to perform IBM i operator panel functions (IVM)
   How to display the IBM i partition System Reference Code (SRC) history (IVM)
   IBM i on POWER blade considerations and limitations
   Moving the BladeCenter DVD drive to another blade using IVM
   Moving the BladeCenter DVD drive to another blade using an HMC
   Moving the BladeCenter DVD drive to another blade using an SDMC
   Moving the tape drive to another virtual server using an HMC
   Moving the tape drive to another virtual server using an SDMC
   Moving a tape device to another virtual server using the VIOS CLI
   Redundant VIOS partitions
8 Backup and restore
   Overview of backup and restore for IBM i on a POWER blade
   Save and restore with a single LTO4/5 SAS tape drive
      Technical overview
      Support statements and requirements for tape drives
      Making the SAS tape drive available to VIOS
      Sharing a tape drive among multiple blades
      Assigning the tape drive to IBM i using IVM
      Error logs for tape virtualization
      Performing an IBM i save
      Performing an IBM i restore
      Performing a D-mode IPL from virtual tape
   Save and restore with a Fibre Channel-attached tape library
      Technical overview
      Support statements and requirements for FC tape libraries
      Creating the virtual FC configuration using IVM
      Creating the virtual FC configuration using SDMC
      Making the tape library available to IBM i
      Performing an IBM i save
      Performing an IBM i restore
      Performing a D-mode IPL from a FC tape library using IVM
   Backup and restore of VIOS and IVM
      Backing up VIOS to tape
      Restoring VIOS from a SAS tape backup
Appendix: i Edition Express for BladeCenter S
   Overview
   IBM i preinstall on BladeCenter S
      IBM i preinstall overview
      IBM i preinstall with the SAS Connectivity Module
      IBM i preinstall with the SAS RAID Controller Module
   Requirements
      Requirements for NSSM
      Requirements for RSSM
   Installation steps that must be performed by the implementing party
DS4000 or DS5000 Copy Services and IBM i
   FlashCopy and VolumeCopy
      Test scenario
      FlashCopy and VolumeCopy support statements
   Enhanced Remote Mirroring (ERM)
      Test scenario
      ERM support statements
Additional resources
   BladeCenter and blade servers
   Storage
   VIOS and IVM
   IBM i
Trademarks and disclaimers

Abstract

This read-me first document provides detailed instructions on using IBM i on a POWER blade. It covers prerequisites, supported configurations, preparation for install, hardware and software install, firmware updates and post-install tasks such as backups. The document also contains links to many additional information sources.

1 Overview and Concepts

1.1 Logical Partitioning (LPAR)

As with other Power systems, POWER blades can be partitioned into separate environments, or logical partitions (LPARs). POWER blades support IBM i, AIX and Linux partitions. Any physical hardware the blade has access to is owned by a Virtual I/O Server (VIOS) LPAR, which virtualizes storage, optical and network resources to the other LPARs. An IBM i LPAR on the blade does not have direct access to any physical hardware on the blade or outside the BladeCenter chassis. IBM i is a client of VIOS, using a Virtual SCSI (VSCSI) connection in the Hypervisor firmware residing on the SP.

In May 2011, the Systems Director Management Console (SDMC) was announced with support for managing POWER blades. One key function that SDMC brings to POWER blades is the ability to manage redundant VIOS virtual servers (LPARs); more information is provided later in this document. When SDMC is not used, VIOS is always the first LPAR installed on a partitioned POWER blade; with SDMC, that is no longer a requirement. As soon as VIOS is installed, other LPARs can be created using the Integrated Virtualization Manager (IVM) or the SDMC interface. IVM is part of VIOS and provides a browser interface to the blade for managing LPARs and I/O virtualization. Prior to the late 2012 firmware release, the POWER blade did not support a Hardware Management Console (HMC). SDMC can be a hardware appliance, such as an HMC, or a software appliance that runs as a virtual machine. An LPAR on SDMC is referred to as a virtual server.

1.2 Overview of I/O concepts for IBM i on blade

IBM i LPARs on a POWER blade can use Fibre Channel or SAS storage.
The type of storage used is determined by the I/O modules available in the BladeCenter chassis and the I/O expansion adapters present on the blade, not by the blade machine type and model. All POWER6 and POWER7 processor-based blades are capable of connecting to both Fibre Channel and SAS storage. The JS23, JS43 and PS7XX use one of several CIOv adapters and/or the QLogic CFFh Fibre Channel/Ethernet adapter to connect to Fibre Channel storage; they use the CIOv SAS pass-through adapter to connect to SAS storage. The JS12 and JS22 use the QLogic CFFh Fibre Channel/Ethernet adapter to connect to Fibre Channel storage and the CFFv SAS adapter to connect to SAS storage. IBM i on a blade supports Fibre Channel and/or SAS (IBM DS3200) storage with BladeCenter H, and BladeCenter S internal SAS storage and/or DS3200 with BladeCenter S. IBM i supports the SAS RAID Controller I/O Module in BladeCenter S. Fibre Channel storage is not supported with BladeCenter S.

The storage is physically connected to VIOS using a supported expansion adapter for that blade. As soon as the FC or SAS LUNs, or SAS drives in the BladeCenter S, are recognized by VIOS, they are directly virtualized to IBM i, so that each LUN or SAS drive appears as one drive within IBM i. IBM i is installed using the DVD drive in the chassis (virtualized by VIOS) or a media image file in VIOS.

IBM i LPARs have two different 1Gb Ethernet connectivity options, both using VIOS virtualization. The first option is the embedded Host Ethernet Adapter (HEA) ports on the blade. HEA ports are not directly assigned to IBM i; instead, they are physically assigned to VIOS, which provides a Virtual Ethernet bridge for client partitions. The second option is the Ethernet ports on the QLogic CFFh Fibre Channel/Ethernet adapter. To take advantage of the Ethernet ports on this adapter, you must have the corresponding chassis hardware. In a BladeCenter H chassis, you need a multi-switch interconnect module (MSIM) and Ethernet switch combination in switch bays 7 and 8 or 9 and 10. In a BladeCenter S chassis, you need an Ethernet switch in I/O bay 2, along with a request for price quotation (RPQ) to support this. The physical adapter ports are owned by VIOS, which bridges them to IBM i. In order to take advantage of the Virtual Ethernet bridge, IBM i must have at least one Virtual Ethernet adapter. One such adapter is created for each new IBM i partition by default.

Operations Console (LAN) and the SDMC/HMC-supplied console are the only console options. Operations Console Direct Attached, thin clients and twinaxial connections are not supported. IBM i partitions use the same network virtualization framework explained above for both LAN console and production TCP/IP traffic. It is recommended that two separate Virtual Ethernet adapters be created for those functions in each IBM i partition.
The two adapters can then reside on the same Virtual LAN (VLAN), and therefore connect to the outside network using the same HEA or CFFh port. Alternatively, each Virtual Ethernet adapter can be on a separate VLAN and use a different HEA or CFFh port to reach the external network.

Refer to the Backup and restore section of this paper for an explanation of the save and restore options for IBM i on blade, as well as the procedures for saving and recovering VIOS.

Figure 1 shows an example POWER blade environment with two IBM i, one AIX and one Linux LPAR as clients of VIOS.

Figure 1: An example of a POWER blade environment with two IBM i, one AIX, and one Linux LPAR as clients of VIOS

1.3 Review terminology

BladeCenter: The chassis containing the blade servers, I/O modules, AMM, DVD-ROM drive, power and fan modules.

Advanced management module (AMM): A control module residing in a special I/O bay in the BladeCenter. The AMM provides browser and command-line interfaces (CLIs) into the BladeCenter and can also provide KVM (keyboard, video, mouse) functions to JSXX blade servers. The KVM functions are not supported for the PS7XX blades.

I/O bay: A slot for an I/O module (switch) inside the BladeCenter. A BladeCenter can have a mix of standard and high-speed switch bays.

I/O module (switch): A switch residing in the BladeCenter that provides connectivity between the blade servers and external I/O devices, using wiring in the BladeCenter midplane.

Multi-switch Interconnect Module (MSIM): A module that occupies both high-speed I/O bays 7 and 8 or 9 and 10. By placing a standard vertical module (normally residing in I/O bays 1-4) in an MSIM, the module can use the BladeCenter high-speed fabric. This allows a horizontal high-speed expansion card (CFFh) to connect through a vertical module.

Blade server: A standalone server residing in a blade slot in the BladeCenter.

Service Processor (SP): The SP on the POWER blade is similar to the SP (sometimes called FSP) on other POWER6 and POWER7 processor-based systems. It contains firmware to manage the hardware on the blade; the Power Hypervisor; and Partition Firmware (PFW).

System firmware: As with other POWER6 and POWER7 processor-based systems, this is the firmware on the SP.

I/O expansion card: Sometimes called a daughter card, this is an I/O adapter that fits into a PCI Express (PCIe) or PCI-X slot on the blade and allows connectivity to external I/O devices through the BladeCenter midplane and I/O modules.

Combination Form Factor Horizontal (CFFh) I/O expansion adapter: A PCIe I/O adapter that allows access to external resources through I/O bays 7-10 in BladeCenter H, and bay 2 in BladeCenter S.

Combination I/O Form Factor Vertical (CIOv) I/O expansion adapter: A PCIe I/O adapter that allows access to external resources through I/O bays 3 and 4 in both BladeCenter H and BladeCenter S.

Combination Form Factor Vertical (CFFv) I/O expansion adapter: A PCI-X I/O adapter that allows access to external resources through I/O bays 3 and 4 in both BladeCenter H and BladeCenter S.

Adapter firmware: The firmware on the I/O expansion cards on the blade.
Integrated Virtual Ethernet (IVE) ports: Similar to other POWER6 and POWER7 processor-based systems, the POWER6 and POWER7 processor-based blades include two embedded 1Gb Ethernet ports on the system I/O bridge chip.

Host Ethernet Adapter (HEA) ports: Another name for the IVE ports, more commonly used in technical documentation.

Virtual I/O Server (VIOS): Software that resides in a logical partition and facilitates the sharing of physical I/O resources between client logical partitions within the system.

Integrated Virtualization Manager (IVM): A browser interface installed with VIOS. It provides LPAR and virtualization management functions.

Virtual Ethernet adapter: A virtual network adapter created in the POWER Hypervisor that is part of an LPAR's hardware resources. On a POWER blade, IBM i cannot be assigned physical network adapters.

Virtual SCSI adapter: A virtual storage adapter created in the POWER Hypervisor that is part of an LPAR's hardware resources. On a POWER blade, a VSCSI client adapter is created in IBM i and a VSCSI server adapter is created in VIOS for storage virtualization.

Virtual Ethernet bridge: A VIOS function that allows Layer-2 bridging of a VLAN to an outside physical LAN. It is required on a POWER blade to provide both LAN console and standard networking to IBM i.

Logical Unit (LUN): A volume created on a SAN system that appears as a single disk device to a server.

Disk drive module (DDM): A physical disk unit in a SAN system.

Subsystem Device Driver Path Control Module (SDDPCM): A multipath I/O (MPIO) driver for certain storage subsystems, installed on top of VIOS.

Redundant Disk Array Controller (RDAC): An MPIO driver for IBM System Storage DS4000 or DS5000, which is included with VIOS.

Serial-attached SCSI (SAS): A storage access protocol, which is the next generation of the [parallel] SCSI protocol.

Disk storage module (DSM): A disk bay in the BladeCenter S, currently capable of supporting six SAS or SATA drives. Two DSMs are supported in the BladeCenter S.

Hardware Management Console (HMC): With the late 2012 release of firmware, the next generation of management appliance for Power Systems from entry-level servers and blades to high-end servers, allowing for a single, consistent approach to systems administration.

Systems Director Management Console (SDMC): A management appliance for Power Systems from entry-level servers and blades to high-end servers, allowing for a single, consistent approach to systems administration. A one-time successor to the HMC and IVM, but no longer marketed.

Host: Another term for server or physical server, used by SDMC and Systems Director interfaces.

Virtual server: Another term for logical partition, used by SDMC and Systems Director interfaces.

Utility virtual server: Another term for VIOS, used by SDMC.
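Several of the terms defined above (VIOS, IVM, LPAR) come together on the VIOS command line, where `lssyscfg -r lpar` lists the partitions that IVM manages. The sketch below is a hedged illustration: rather than running against a live blade, it parses a captured sample of that command's output, and the partition names, IDs and states in the sample are invented.

```shell
# Hypothetical capture of 'lssyscfg -r lpar -F name,lpar_id,state' as run on
# the VIOS command line; names, IDs and states are invented for illustration.
cat > /tmp/lpar_sample.txt <<'EOF'
vios1,1,Running
IBMI1,2,Running
AIX1,3,Not Activated
EOF

# Print one line per partition, flagging partition 1, which under IVM is
# always the VIOS partition.
awk -F, '{ tag = ($2 == 1) ? " (VIOS)" : "";
           printf "LPAR %s: %s [%s]%s\n", $2, $1, $3, tag }' /tmp/lpar_sample.txt
```

On a live system, the same parsing can be applied directly to the command's output instead of a captured file.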

1.4 Plan for necessary IP addresses

You should plan to assign IP addresses to the following components for a minimum IBM i on blade configuration. All of the addresses below are typically configured on the same subnet.

AMM (this IP address is already assigned on an existing BladeCenter)
   o The AMM IP address is a physical LAN IP address. It is used to remotely manage the BladeCenter and blade servers.

Ethernet I/O module (this IP address is already assigned on an existing BladeCenter)
   o This IP address is used to connect the Ethernet I/O module to the physical LAN, allowing any blades in the BladeCenter access to the LAN. There can be from 1 to 4 of these modules in a BladeCenter H chassis.

VIOS/IVM
   o An IP address on the external LAN that is used to connect to both IVM and the VIOS command line.

HMC
   o An IP address on the external LAN that is used to connect to the HMC if it is being used in place of IVM to manage the POWER blade. If the HMC is used in place of IVM, an IP address is still needed for VIOS. A second IP address is needed on the private network to connect to the service processor of the POWER blade.

SDMC
   o An IP address on the external LAN that is used to connect to the SDMC if it is being used in place of IVM to manage the POWER blade. If the SDMC is used in place of IVM, an IP address is still needed for VIOS. A second IP address is needed on the private network to connect to the service processor of the POWER blade.

IBM i LAN console
   o An IP address on the external LAN that is used to provide 5250 console access to IBM i through a PC with the IBM System i Access for Windows software. The address is assigned to the IBM i partition when the LAN console connection is first established. This interface is required when managing the POWER blade using IVM. Note that this IP address is different from the VIOS IP address and the IP address later used for IBM i production TCP/IP traffic. Also note that if an HMC or SDMC is used, a LAN console is not required, because the HMC or SDMC can provide the console access to IBM i.

IBM i production interface
   o An IP address on the external LAN that is used for IBM i production network traffic. This address is configured after IBM i is installed using LAN console. It is recommended that the IBM i LAN console (if used) and the production network interface use two separate Virtual Ethernet adapters in the IBM i partition.

PC for LAN console and IVM browser access
   o When the IBM i LAN console connection is first established, the console PC must be on the same subnet as the IBM i partition. As soon as the console connection is established, this restriction is removed.

SAS I/O module 1
   o An IP address on the external LAN that is used to connect to the SAS I/O module. This IP address is required in order to manage the SAS module configuration and assign SAS drives in the chassis to blades.

SAS I/O module 2
   o An IP address on the external LAN that is used to connect to the SAS I/O module. A second SAS I/O module is optional in the BladeCenter S or H.

SAS RAID controller module 1

   o This applies only to BladeCenter S. An IP address on the external LAN that is used to communicate specifically with the RAID subsystem in the module. When a pair of these I/O modules is used, an IP address must be assigned to the RAID subsystem in addition to the IP address assigned to the SAS switch component of the module.

SAS RAID controller module 2
   o This applies only to BladeCenter S. An IP address on the external LAN that is used to communicate specifically with the RAID subsystem in the module. Two such modules must always be used; therefore, this IP address is required if the RAID SAS modules are installed.

Fibre Channel switch modules
   o This applies only to BladeCenter H. An IP address on the external LAN that is used to communicate with a Fibre Channel switch module through the AMM. This IP address is required for switch configuration and firmware updates.

From a network perspective, one significant difference between the BladeCenter H and BladeCenter S is that in the BladeCenter H, each embedded HEA port on the POWER blade connects to the outside LAN through a separate Ethernet module in the chassis. The first HEA port connects through I/O module bay 1 and the second one through I/O module bay 2. In the BladeCenter S, both HEA ports on the blade connect through I/O module bay 1. If the network ports on the QLogic CFFh Fibre Channel/Ethernet adapter are used in a BladeCenter H, the first one connects through I/O module bay 7 and the second through I/O module bay 9. If that expansion adapter is used in BladeCenter S, both ports connect through I/O module bay 2.

Figure 2 shows a sample network configuration for a basic IBM i on blade installation:

Figure 2: Sample network configuration for a basic IBM i on blade installation
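Since the components above are typically configured on the same subnet, it is worth sanity-checking a planned address list before configuring anything. The shell sketch below assumes a /24 network; every address in it is invented for illustration and should be replaced with your own plan.

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

MASK=$(( 0xFFFFFF00 ))                       # /24 netmask
NET=$(( $(ip_to_int 192.168.70.0) & MASK ))  # hypothetical management subnet

# AMM, Ethernet I/O module, VIOS/IVM and LAN console addresses from an
# invented plan; flag anything that falls outside the chosen subnet.
for addr in 192.168.70.80 192.168.70.81 192.168.70.82 192.168.71.9; do
  if [ $(( $(ip_to_int "$addr") & MASK )) -eq "$NET" ]; then
    echo "$addr: on the management subnet"
  else
    echo "$addr: NOT on the management subnet; revisit the plan"
  fi
done
```

The same check generalizes to other prefix lengths by changing MASK.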

2 Hardware details

2.1 Supported environments

For a complete list of supported hardware, firmware and software for the IBM i on POWER blade environment, see the BladeCenter Interoperability Guide: 947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR

NOTE: If the POWER blade(s) will coexist with another type of blade server, such as x86-based blades, verify that the I/O switch module configuration of the chassis, along with existing blade I/O adapters, meets the I/O requirements of all blades.

2.2 POWER Blades

2.2.1 IBM BladeCenter PS701 Express

The IBM BladeCenter PS701 Express is a single-wide blade server based on the POWER7 processor. The PS701 contains one socket and eight POWER7 cores, which use IBM's 45-nm lithography and operate at 3 GHz. The PS701 also includes 4 MB of on-chip eDRAM (enhanced Dynamic RAM) L3 cache per core (a total of 32 MB per socket) and 256 KB of L2 cache per core. Up to sixteen DDR3 memory DIMMs are supported, for a maximum of 128 GB. The blade server includes two embedded 1Gb Ethernet ports, onboard SAS and USB controllers, EnergyScale power management and an FSP-1 service processor. A maximum of one onboard Serial-attached SCSI (SAS) drive is supported. The PS701 supports two types of I/O expansion adapters: Combination Form Factor Horizontal (CFFh) and Combination I/O Form Factor Vertical (CIOv). For IBM i, the PS701 is supported in IBM BladeCenter H and BladeCenter S. Figure 3 shows the PS701, identifying the major components:

Figure 3: BladeCenter PS701 Express

2.2.2 IBM BladeCenter PS700 Express

The IBM BladeCenter PS700 Express is a single-wide blade server based on the POWER7 processor. The PS700 is similar to the PS701, with only the following differences:

- The POWER7 socket contains four processor cores
- Two memory interface modules are present
- Up to eight DDR3 memory DIMMs are supported, for a maximum of 64 GB
- Two SAS drives are supported
- The blade SMP connector is not present

The I/O options and chassis support for IBM i for the PS700 are the same as those for the PS701.

2.2.3 IBM BladeCenter PS702 Express

The IBM BladeCenter PS702 Express is a double-wide blade server based on the POWER7 processor. The PS702 consists of the PS701 main blade unit and a symmetric multiprocessing (SMP) expansion unit, which occupies an adjacent blade bay in an IBM BladeCenter. The PS702 has the following combined hardware characteristics:

- Two sockets and sixteen POWER7 cores operating at 3 GHz
- 4 MB of L3 cache per core (a total of 32 MB per socket) and 256 KB of L2 cache per core
- Up to 32 DDR3 memory DIMMs, for a maximum of 256 GB
- Four embedded 1Gb Ethernet ports
- Onboard SAS and USB controllers, EnergyScale power management and an FSP-1 service processor on the main blade unit
- Up to two onboard SAS drives
- Up to two CFFh and two CIOv I/O expansion adapters

For IBM i, the PS702 is supported in BladeCenter H and BladeCenter S.

2.2.4 IBM BladeCenter PS703 Express

The IBM BladeCenter PS703 Express is a single-wide blade server based on the POWER7 processor. The PS703 contains two sockets and sixteen POWER7 cores that operate at 2.4 GHz. Up to sixteen DDR3 memory DIMMs are supported, for a maximum of 128 GB. The blade server includes two embedded 1Gb Ethernet ports, onboard SAS and USB controllers, IBM EnergyScale power management and an FSP-1 service processor. A maximum of one onboard Serial-attached SCSI (SAS) drive is supported. There is also an option for up to two solid-state drives (SSDs), for up to 354 GB of storage. The PS703 supports two types of I/O expansion adapters: Combination Form Factor Horizontal (CFFh) and Combination I/O Form Factor Vertical (CIOv). For IBM i, the PS703 is supported in IBM BladeCenter H and BladeCenter S. Figure 4 shows the PS703, identifying the major components.

Figure 4: IBM BladeCenter PS703 Express with optional SSD

2.2.5 IBM BladeCenter PS704 Express

The IBM BladeCenter PS704 Express is a double-wide blade server based on the POWER7 processor. The PS704 consists of the PS703 main blade unit and an SMP expansion unit, which occupies an adjacent blade bay in an IBM BladeCenter. The PS704 has the following combined hardware characteristics:

- Four sockets and thirty-two POWER7 cores operating at 2.4 GHz
- Up to thirty-two DDR3 memory DIMMs, for a maximum of 256 GB
- Four embedded 1Gb Ethernet ports
- Onboard SAS and USB controllers, EnergyScale power management and an FSP-1 service processor on the main blade unit
- Up to two onboard SAS drives or up to four onboard SSDs
- Up to two CFFh and two CIOv I/O expansion adapters

For IBM i, the PS704 is supported in BladeCenter H and BladeCenter S.

Note: The I/O adapter options for IBM i on the PS700, PS701, PS702, PS703 and PS704 are the same as those on the JS23 and JS43.

2.2.6 IBM BladeCenter JS23 Express

The IBM BladeCenter JS23 Express is a single-wide blade server based on the POWER6 processor. The JS23 contains two sockets and four POWER6 cores, which use IBM's enhanced 65-nm lithography and operate at 4.2 GHz. The JS23 also includes 32 MB of shared L3 cache per socket and 4 MB of dedicated L2 cache per core. Up to eight DDR2 memory DIMMs are supported, for a maximum of 64 GB. The blade server includes two embedded 1Gb Ethernet ports, onboard SAS and USB controllers, IBM EnergyScale power management and an FSP-1 service processor. A maximum of one onboard SAS or SSD drive is supported. The JS23 supports two types of I/O expansion adapters: Combination Form Factor Horizontal (CFFh) and the newer Combination I/O Form Factor Vertical (CIOv). For IBM i, the JS23 is supported in

IBM BladeCenter H and BladeCenter S. Figure 5 shows the JS23, identifying the major components.

Figure 5: IBM BladeCenter JS23 Express

2.2.7 IBM BladeCenter JS43 Express

The IBM BladeCenter JS43 Express is a double-wide blade server based on the POWER6 processor. The JS43 consists of the JS23 main blade unit and an SMP expansion unit, which occupies an adjacent blade bay in an IBM BladeCenter. The JS43 has the following hardware characteristics:

- Four sockets and eight POWER6 cores operating at 4.2 GHz
- 32 MB of L3 cache per socket and 4 MB of L2 cache per core
- Up to sixteen DDR2 memory DIMMs, for a maximum of 128 GB
- Four embedded 1Gb Ethernet ports
- Onboard SAS and USB controllers, IBM EnergyScale power management and an FSP-1 service processor on the main blade unit
- Up to two onboard SAS or SSD drives
- Up to two CFFh and two CIOv I/O expansion adapters

For IBM i, the JS43 is supported in BladeCenter H and BladeCenter S. Figure 6 shows only the SMP expansion unit of the JS43, identifying the major components.

Figure 6 IBM BladeCenter JS43 Express

IBM BladeCenter JS22

The JS22 POWER blade is a 4-core blade server based on the POWER6 processor. The JS22 fits in a standard IBM BladeCenter chassis and has an integrated Service Processor (SP), two Gigabit Ethernet ports, SAS and USB controllers and a SAS disk drive. The embedded Ethernet ports are Integrated Virtual Ethernet (IVE) ports, also present on other POWER6-based servers. Additional I/O is provided by CFFh and CFFv expansion cards, which allow connections to external storage and tape through switches in the BladeCenter chassis. IBM i on the JS22 is supported in BladeCenter H and BladeCenter S. Figure 7 shows the JS22, identifying the major components.

Figure 7 IBM BladeCenter JS22

IBM BladeCenter JS12

The JS12 POWER blade is a 2-core blade server based on the POWER6 processor. Its hardware is very similar to that of the JS22, with several important differences:
- There is a single POWER6 socket with two processor cores
- The processor cores operate at 3.8 GHz, instead of the JS22's 4 GHz
- Two integrated SAS drives are supported on the blade
- Four additional memory DIMM slots are supported, for a total of eight slots

The JS12 includes the same SP, embedded Gigabit Ethernet (IVE) ports, CFFv and CFFh I/O expansion slots, and embedded SAS and USB controllers. The JS12 is capable of supporting twice the number of memory DIMMs because of the new, shorter DIMM design, which allows the DIMMs to be plugged in vertically. IBM i on the JS12 is supported in BladeCenter H and BladeCenter S. Figure 8 shows the JS12, identifying the major components.

Figure 8 IBM BladeCenter JS12

Note that all implementation instructions in the rest of the paper apply to JS12, JS22, JS23, JS43, PS700, PS701, PS702, PS703 and PS704 unless explicitly stated otherwise.

3 BladeCenter configuration and firmware updates

3.1 Install the BladeCenter and blade server hardware

The first step in preparing the BladeCenter is to install the BladeCenter and blade server hardware. This might include installing any management modules, power modules, and I/O modules in the BladeCenter. The BladeCenter might have these components already installed if an additional blade server is being added to an already functioning BladeCenter.

Before installing the blade servers in the BladeCenter, any blade server options must be installed. This might include additional processors (for x86 blades), additional memory and I/O expansion cards. For POWER blade servers, any required CFFh, CIOv and/or CFFv expansion adapters are installed at this time, depending on the chassis and storage used. Refer to the blade server and expansion card documentation that came with the option for details on installing each one. After installing the blade server options, you can install the blade server in the BladeCenter chassis. Refer to BladeCenter and blade server documentation for details on how to install the BladeCenter and blade server components. After installing all the blade server options, installing the blade servers into the BladeCenter, and installing the BladeCenter modules, the BladeCenter can be connected to the power outlets.

Note: If you need to add a new blade server to an existing chassis, consider updating the chassis components' firmware at this time.

3.2 Configure the Advanced Management Module (AMM)

3.2.1 Initial AMM configuration (New chassis install)

At this time, the AMM needs to have an Ethernet cable plugged into its Ethernet port. Plug the other end of this cable into the Ethernet connector of a computer where you will open a browser session to the AMM.
You need to perform the following steps on the computer that is connected to the AMM:
- Set the computer's IP address to one in the same subnet as the AMM's default IP address, and set the subnet mask to match.
- Ensure that the BladeCenter's AC power cords are plugged into an appropriate power source to provide power for the management module. Allow about 30 seconds after performing this step for the management module to boot. You can ping the AMM's address to confirm communications to the AMM.
- Open a web browser on the computer connected to the AMM. In the address or URL field, type the IP address of the AMM to which you want to connect.
- If you cannot communicate with the AMM and you have tried another cable and rebooted the PC, there is a pinhole reset option at the bottom of the AMM. Insert a paper clip into the pinhole, push it in and hold for a few seconds. The lights on the AMM should flicker and the fans in the chassis might reset. Resetting the AMM this way returns all configuration values to the factory defaults, including the enabled external ports of the I/O modules.
- In the Enter Password window that is displayed, type the user name and password. The management module has a default user name of USERID and password of PASSW0RD (where 0 is a zero, not the letter O). Both of the values are case sensitive. It is recommended to change the password during this initial configuration.
- Select a timeout value on the next page and click Continue. A startup wizard runs that can be used for the following steps:
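To sanity-check the addressing step above, the same-subnet test can be expressed in a few lines of Python. This is only an illustration; the addresses shown are placeholders, not the AMM's actual defaults:

```python
import ipaddress

def same_subnet(pc_ip: str, amm_ip: str, prefix: int = 24) -> bool:
    """True when both addresses fall inside the same subnet."""
    pc = ipaddress.ip_interface(f"{pc_ip}/{prefix}").network
    amm = ipaddress.ip_interface(f"{amm_ip}/{prefix}").network
    return pc == amm

# Placeholder addresses for illustration -- substitute the default AMM
# address documented for your chassis and the address you gave the PC.
print(same_subnet("192.168.1.200", "192.168.1.50"))  # True: same /24
print(same_subnet("10.0.0.5", "192.168.1.50"))       # False: different subnet
```

If the second check were to come up False for your chosen PC address, the browser would never reach the AMM, which is the most common cause of the "cannot communicate" symptom described above.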

3.2.2 AMM user profiles

To create additional user profiles on the AMM:
- Click Login Profiles under MM Control
- Click a login ID currently marked as not used
- Enter the user profile and password two times (both values are case sensitive)
- Increase the number of concurrent accesses to 2
- Select the user profile's desired role and click Save

3.2.3 Assigning an IP address to I/O modules

To enable access to the chassis switches through the AMM interface, you should assign each switch an IP address on the same subnet as the AMM is using. To configure these addresses:
- Log into the AMM browser UI with an administrator ID
- Expand I/O Module Tasks
- Click Configuration
- There is a tab across the top of the screen for each switch. Select a tab, configure the new IP information, then click Save. The change takes effect immediately.

Note: If using the SAS RAID module, make sure to change the network settings for both the SAS switch component and the RAID subsystems.

3.2.4 AMM LAN configuration

To configure the AMM so that it is accessible on the local network:
- Click Network Interfaces under MM Control
- Enter a hostname for the AMM
- If the AMM is going to use DHCP, choose the option Enabled - Obtain IP Config from DHCP server in the DHCP drop-down menu
- If the AMM is going to use a static IP address, choose the option Disabled - Use static IP configuration, then enter the IP address, subnet mask and gateway IP address
- Click Save. A restart of the AMM is required to make the changes take effect. This option is shown lower down on the navigation pane.
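Since the AMM and every I/O module should share one subnet, it can help to lay out the full address plan before touching the browser UI. A small sketch; the subnet, starting address and bay names below are hypothetical:

```python
import ipaddress

def plan_chassis_ips(base: str, modules: list[str]) -> dict[str, str]:
    """Assign consecutive addresses, starting at `base`, to the AMM and
    then to each I/O module, so they all land in the AMM's subnet."""
    start = ipaddress.ip_address(base)
    return {name: str(start + i) for i, name in enumerate(["AMM"] + modules)}

# Hypothetical subnet and module bays for illustration.
plan = plan_chassis_ips("192.168.1.10",
                        ["Bay 1 Ethernet", "Bay 3 SAS", "Bay 4 SAS"])
for name, ip in plan.items():
    print(f"{name}: {ip}")
```

Writing the plan down first avoids address collisions once several switches, and possibly both components of a SAS RAID module, each need their own address.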
3.3 General Download instructions for Fix Central

You need to perform the procedures in this section on a computer using a common web browser, while accessing the downloads web page at: 947.ibm.com/support/entry/portal/Downloads
- Start by clicking the Fix Central link in the middle of the page
- From the Product Group list, select BladeCenter
- From the Product list, select the BladeCenter chassis model (the chassis selected will automatically fill in a Product list). You must choose the product list item that matches your chassis model (look on the AMM under Hardware VPD).
- From the Operating system list, select All
- Click the Continue link at the bottom of the page

3.3.1 Download BladeCenter management module firmware

To download BladeCenter management module firmware:

Follow the General Download instructions for Fix Central, then return here.
- From the Selected Fixes for BladeCenter page, look in the set of links for the Management Module link and click on it.
- Note the most recent release of the AMM firmware available for download. Select the checkbox next to that level.
- Click on the link for the README text file, verify that the POWER blade(s) and x86 blade(s) you are installing are in the Supported systems list, and print a copy for use as a reference when actually performing the update.
- Continue on the Selected Fixes web page for other chassis component firmware below.

3.3.2 Download BladeCenter Fibre Channel I/O module firmware

Follow the General Download instructions for Fix Central, then return here.
- From the Selected Fixes for BladeCenter page, click the Fibre link. Find and select the appropriate link for the Fibre Channel I/O module installed in the BladeCenter chassis (most often Brocade or QLogic). Select the checkbox next to that firmware.
- Click on the link for the README text file and print a copy for use as a reference when performing the update. Verify that the update matches the type of switch you have installed.
- Note, when you get to the point of the actual download, the link may lead to the I/O module vendor's Web site:
  o For Brocade, fill in the export compliance form and accept the user's license. Then download the file marked similar to Fabric OS v5.3.0a for PC
  o For Cisco, click on the latest available release, named similar to Cisco MDS 9000 SAN-OS Software Release 3.2, then click the Download Software link. A registration with Cisco is required to download the update file.
  o For QLogic, find the table named Fibre Channel Switch Module Firmware and download the latest version of the firmware, marked similar to QLogic 4Gb 6-Port Fibre Channel Switch Module for IBM eServer BladeCenter Firmware
- When you have completed the download, continue on the Selected Fixes web page for other chassis component firmware below.

3.3.3 Download BladeCenter Ethernet I/O module firmware

Follow the General Download instructions for Fix Central, then return here.
- From the Selected Fixes for BladeCenter page, click the Switches link. Find and select the appropriate link for the Ethernet I/O module installed in the BladeCenter chassis.
- On the firmware update page, click on the link for the README text file and print a copy for use as a reference when performing the update.
- Click on the browser's Back button to return to the previous page.
- Next, click on the link of the firmware update to download the file. This file is used later to update the firmware.
- When you have completed the download, continue on the Selected Fixes web page for other chassis component firmware below.

3.3.4 Download BladeCenter SAS I/O module firmware

Follow the General Download instructions for Fix Central, then return here.

- From the Selected Fixes for BladeCenter page, click the SAS link.
- Find the link to the most recent SAS Connectivity Module firmware and click it
  o Download the .zip file containing the firmware update and the corresponding README file
- If using the SAS RAID I/O module, find the link to the most recent SAS RAID Controller Module Matrix and click it
  o Click the Storage Configuration Manager (SCM) Firmware Update Package link
  o Download the .zip and README files
- Go back to the SAS Controller Module firmware page and also download the latest version of the Storage Configuration Manager (SCM) software; you will employ it later to configure your SAS zoning. If you are using the RAID SAS switch modules (RSSMs), you will also use the SCM to define arrays and volumes.
- If you are using an external SAS storage area network (SAN), download from Fix Central the DS Storage Manager client application associated with that SAN.
- When you have completed the download, continue on the Selected Fixes web page for other chassis component firmware below.

3.3.5 Download the BladeCenter S DSM firmware

Follow the General Download instructions for Fix Central section, then return here.
- Find the link to the most recent Disk Storage Module (DSM) firmware and click it
- Download the .zip file containing the firmware update and the corresponding README file
- When you have completed the download, continue on the Selected Fixes web page for other chassis component firmware below.

3.4 Update the BladeCenter firmware

3.4.1 Update the AMM firmware

You can begin this procedure from any AMM Web browser window:
- Click Firmware Update under MM Control on the navigation pane on the left
- On the Update MM Firmware window, click Browse and navigate to the location (usually on a local PC) where you downloaded the management module firmware update. The file or files will have the .pkt extension. Select the file and click Open. Follow the special instructions in the README text file, if any.
- The full path of the selected file is displayed in the Browse field
- To start the update process, click Update. A progress indicator opens as the file is transferred to temporary storage on the AMM. A confirmation window is displayed when the file transfer is complete. Do not leave the page while the transfer is taking place.
- Verify that the file shown on the Confirm Firmware Update window is the one you want to update. If not, click Cancel.
- To complete the update process, click Continue. A progress indicator opens as the firmware on the AMM is flashed. Do not leave the page while the update is taking place. A confirmation window is displayed when the update has successfully completed.
- The README file text might direct you to restart the AMM after completing the .pkt file update. If so, click Restart MM on the navigation pane on the left side of the window.
- Click OK to confirm the reset. The web browser window will then close. A new web browser window will have to be started and signed on to continue.
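The READMEs are the authority for each update, but before uploading any downloaded firmware file it is prudent to confirm that the file matches the checksum published alongside it, when one is provided. A generic sketch; this step is not part of the official procedure, and the file name below is a placeholder:

```python
import hashlib

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded firmware file, reading
    it in chunks so large files do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder file name -- compare the result against the checksum
# published with the download, if one is listed.
# print(file_sha256("amm_update.pkt"))
```

A mismatched digest usually means a truncated or corrupted download, which is cheaper to catch on the PC than midway through flashing the AMM.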

3.4.2 Update the firmware on the BladeCenter I/O modules

Each I/O module's software needs to be updated at this time. The procedure varies depending on the manufacturer of the I/O module. Refer to the README file downloaded earlier along with the I/O module to complete this task. Make sure you are using the instructions for VIOS or AIX in each README; avoid instructions that refer to SANsurfer on Linux or Windows.

Note: In some test cases, using an Intelligent Copper Pass-thru Module (ICPM) in a BladeCenter S chassis to supply network connectivity with the VLAN 4095 ID from the AMM has resulted in an inability to communicate with the RAID SAS Switch Modules (RSSMs). Recommendation: Cabling ports 7 and 14 makes them live. The ICPM senses a link for ports 7 and 14, and allows you to Telnet to both RSSMs from the AMM using VLAN 4095.

3.4.3 Update the firmware on the BladeCenter S DSMs

Use the instructions in the README file that you downloaded with the firmware. If the BladeCenter S contains two DSMs, make sure to update the firmware on both. Note that to update the DSM firmware, you need to log into one of the SAS I/O modules. The default user ID for a SAS I/O module is USERID and the default password is PASSW0RD (note the number zero). To log into a SAS I/O module, start a browser session to the IP address assigned to it. If an IP address has not been assigned to a SAS I/O module yet, follow the instructions from the prior section.

3.5 Installing and configuring an Intelligent Copper Pass-through Module (ICPM)

Refer to the instructions in the ICPM Installation Guide: W4483.doc/44r5248.pdf

Note: In some test cases, using an Intelligent Copper Pass-thru Module (ICPM) in a BladeCenter S chassis to supply network connectivity with the VLAN 4095 ID from the AMM has resulted in an inability to communicate with the RAID Controllers. Recommendation: Cabling ports 7 and 14 makes them live.
The ICPM senses a link for ports 7 and 14, and allows you to Telnet to both RAID Controllers from the Advanced Management Module using VLAN 4095.

4 Storage management concepts

If you need a better understanding of storage area networking concepts, refer to the Storage area networks 101 section of the IBM i Virtualization and Open Storage Read-me First at:

The type of storage IBM i can use on a POWER blade depends on the BladeCenter in which the blade is installed, the I/O modules available in the chassis and the expansion adapters present on the blade. IBM i on all POWER blades supports Fibre Channel and SAS storage (DS3200/3500/3700) connected to BladeCenter H, and SAS storage only in BladeCenter S (internal drives and/or DS3200/3500/3700). IBM i on all POWER blades supports the SAS RAID Controller modules (if installed, there must be two) for accessing the disks installed in the BladeCenter S; those modules are not supported in BladeCenter H. Fibre Channel storage in BladeCenter S is not supported for IBM i. Network Attached Storage (NAS) through iSCSI to VIOS is not supported for IBM i as installation disks. This section examines storage concepts for BladeCenter H first, followed by those for BladeCenter S.

Refer to the System Storage Interoperation Center (SSIC) for the supported combinations of adapters, switches and storage area networks (SANs): start_over=yes

4.1 Storage concepts for POWER blade in BladeCenter H

In the POWER blade environment, IBM i partitions do not have direct access to any physical I/O hardware on the blade, in the chassis or outside the BladeCenter. In BladeCenter H, disk storage is provided by attaching LUNs on a Fibre Channel or SAS storage subsystem to VIOS, then directly virtualizing them to IBM i using the Integrated Virtualization Manager (IVM) or the Systems Director Management Console (SDMC). DVD access for IBM i installation is provided by assigning the DVD-ROM drive in the BladeCenter media tray to a blade, which makes the optical drive available to VIOS. The drive is then directly virtualized to IBM i by assigning it in IVM/SDMC.
Direct virtualization of the LTO4/5 SAS-attached tape drive is also supported: the drive is attached to a SAS I/O module in the chassis. As soon as it is available in VIOS, the tape drive is assigned to an IBM i partition in IVM/SDMC or through the VIOS command line. NPIV-attached tape drives are discussed later in the section Save and restore with a Fibre Channel-attached tape library.
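The chassis-by-chassis support rules in this section can be condensed into a small lookup table. This is a sketch for planning purposes only; the storage labels are informal, and the SSIC remains the authoritative source:

```python
# Supported IBM i storage per chassis, per the support statements above.
SUPPORTED_STORAGE = {
    "BladeCenter H": {"Fibre Channel SAN", "SAS (DS3200/3500/3700)"},
    "BladeCenter S": {"SAS (DS3200/3500/3700)", "SAS (internal DSM drives)"},
}

def is_supported(chassis: str, storage: str) -> bool:
    """Check whether a storage type is supported for IBM i in a chassis."""
    return storage in SUPPORTED_STORAGE.get(chassis, set())

print(is_supported("BladeCenter H", "Fibre Channel SAN"))  # True
print(is_supported("BladeCenter S", "Fibre Channel SAN"))  # False: FC storage
                                                           # not supported in BCS
```

Encoding the matrix this way makes the one asymmetric rule easy to remember: Fibre Channel storage appears only under BladeCenter H.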

4.1.1 Storage concepts for JS12 and JS22 in BladeCenter H

Figure 9 Overview of storage, optical and tape virtualization for IBM i on JS12 and JS22 in BladeCenter H

VIOS accesses Fibre Channel storage through the combination form factor horizontal (CFFh) expansion card on the blade. The adapter has 2 x 4Gb Fibre Channel ports and 2 x 1Gb Ethernet ports. Before reaching the external SAN, disk I/O operations first travel through the BladeCenter midplane, then through a Multi-switch Interconnect Module (MSIM) and a SAN I/O module inside the MSIM. The MSIM resides in slots 7 and 8, or 9 and 10, in the BladeCenter H chassis. The MSIM allows the standard, or vertical, SAN module inside it to connect to a high-speed, or horizontal, CFFh card on the blade. With the 2-port CFFh card, two MSIMs with one SAN module in each are supported in the BladeCenter H for redundancy in the SAN connection.

When configuring Fibre Channel LUNs for IBM i (through VIOS) in this environment, the host connection on the SAN system must include the world-wide port name (WWPN) of one or both ports on the CFFh card. If the POWER blade is inserted in the chassis, the WWPNs can be observed in the AMM using the following steps:
- Log into the AMM browser UI with an administrator ID
- Click Hardware VPD
- Locate the blade with which you are working
- The CFFh card will appear as a High Speed Expansion Card with a description of CFFH_EF HSEC. Click it
- Click the Ports tab
- Under High Speed Expansion Card Unique IDs, WWN1 is the WWPN of the first FC port and WWN2 is the WWPN of the second FC port. Record these values for later use during SAN zoning.

When configuring LUNs for IBM i (virtualized by VIOS as vSCSI LUNs), they should be created as 512-byte AIX LUNs, not as 520-byte IBM i LUNs. This implies that an AIX Host kit is installed on the SAN. VIOS accesses the 512-byte LUNs as described above and then virtualizes them to IBM i through a Virtual SCSI connection between the two partitions.
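When recording WWN1 and WWN2 from the Hardware VPD page for later SAN zoning, it helps to normalize them to one format, because the AMM and SAN tools may display WWPNs with or without separators. A small helper sketch; the sample value is made up, not a real adapter's WWPN:

```python
def normalize_wwpn(raw: str) -> str:
    """Normalize a WWPN to lowercase, colon-separated pairs of hex digits."""
    digits = raw.replace(":", "").replace("-", "").strip().lower()
    if len(digits) != 16 or any(c not in "0123456789abcdef" for c in digits):
        raise ValueError(f"not a 16-hex-digit WWPN: {raw!r}")
    return ":".join(digits[i:i + 2] for i in range(0, 16, 2))

# Example value for illustration only.
print(normalize_wwpn("10000000C9A1B2C3"))  # 10:00:00:00:c9:a1:b2:c3
```

Keeping every recorded WWPN in one canonical form avoids zoning mistakes caused by comparing the same port name written two different ways.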
The Virtual SCSI server adapter in VIOS and Virtual SCSI client adapter in IBM i are created automatically when the LUNs are assigned to IBM i in IVM. For SDMC, you must manually

create them. The Virtual SCSI client adapter driver allows IBM i to access 512-byte virtual disks. For each 4-kilobyte memory page, nine 512-byte sectors are used instead of eight; the ninth sector is used to store the 8-byte headers from the preceding eight sectors.

VIOS accesses SAS storage (DS3200/3500/3700) using the CFFv SAS expansion adapter. The CFFv card requires at least one SAS I/O module in bay 3, but can use two for redundancy, in bays 3 and 4. Similar to Fibre Channel, the host connection on the SAS storage subsystem must include the SAS IDs of the two ports on the expansion adapter. To find those IDs in the AMM, you need to perform the following steps:
- Log into the AMM browser UI with an administrator ID
- Click Hardware VPD
- Locate the blade with which you are working
- The CFFv SAS card will appear as an Expansion Card with a description of SAS Expansion Option. Click it
- Click the Ports tab
- Under Expansion Card Unique IDs, WWN1 is the SAS ID of the first port and WWN2 is the SAS ID of the second port. Record these values for later use during SAN zoning.

There is at least one Virtual SCSI connection between VIOS and each IBM i partition, which is also used for IBM i access to the USB DVD-ROM drive in the chassis. The IVM web interface creates a single Virtual SCSI client adapter for each IBM i partition. For SDMC, you must manually create it. The Virtual SCSI connection allows a maximum of sixteen disks and sixteen optical devices in IBM i. This means that by default, a maximum of sixteen LUNs can be virtualized by VIOS per IBM i partition using only the IVM Web interface. Additional Virtual SCSI client adapters can be created in an IBM i partition using the VIOS command line. For SDMC, you can manually create more adapter pairs. Note that even if only sixteen LUNs are assigned to an IBM i partition, each LUN does not necessarily represent a single physical disk arm.
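The nine-sectors-per-page layout described above means only about 8/9 of a 512-byte LUN's raw capacity holds IBM i data. A rough sketch of that arithmetic, for sizing intuition only; real usable capacity also depends on other configuration overheads:

```python
def usable_ibmi_capacity(lun_bytes: int) -> int:
    """Approximate IBM i-usable bytes on a 512-byte-sector virtual disk:
    each 4 KB page of data consumes nine 512-byte sectors (eight data
    sectors plus one header sector), so roughly 8/9 of the LUN is data."""
    return lun_bytes * 8 // 9

gib = 1024 ** 3
lun = 100 * gib                          # a hypothetical 100 GiB LUN
print(usable_ibmi_capacity(lun) / gib)   # roughly 88.9 GiB of data capacity
```

This is one reason a LUN sized from raw capacity alone comes up short when the same amount of IBM i data is migrated onto 512-byte virtual disks.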
IBM i (through VIOS) takes advantage of the SAN system's ability to create a LUN using a RAID rank/array with multiple physical drives (DDMs). Each virtualized LUN appears to IBM i as a separate disk drive.

4.1.2 Storage concepts for JS23, JS43 and PS7xx in BladeCenter H

Figure 10 Overview of storage, optical and tape virtualization for IBM i on JS23, JS43, PS700, PS701 and PS7xx in BladeCenter H.

Both Fibre Channel and SAS storage concepts for JS23, JS43 and PS7xx blades in BladeCenter H are similar to those for JS12 and JS22, with the following differences:
- To connect to Fibre Channel storage, there are several Combination I/O (CIOv) and one CFFh expansion adapter options. CIOv and CFFh Fibre Channel cards can be used together on the same blade to provide adapter redundancy.
- CIOv adapters connect to I/O modules in bays 3 and 4
- CIOv Fibre Channel cards appear as Expansion Card with a description of Fibre Channel EC under Hardware VPD in the AMM
- CIOv Fibre Channel adapters' WWPNs are identified as WWN1 and WWN2 on the Ports tab of the Hardware VPD page for the adapter
- To connect to SAS storage, JS23, JS43 and PS7xx use the CIOv SAS pass-through adapter. As with Fibre Channel CIOv cards, it connects through I/O modules in bays 3 and 4
- The CIOv SAS pass-through card appears as Expansion Card with a description of SAS Conn Card under Hardware VPD in the AMM. Note that the SAS IDs of the SAS ports are not shown on the AMM for the PS703 and PS704 blades. Refer to the Configuring SAS SAN Storage using a SAS Connectivity Module section for the steps to determine these IDs.

Note: Ensure that you look at the combinations of adapters and switch modules for all of the blades in your chassis. This is especially important when adding a blade to an existing chassis.

4.2 Storage concepts for POWER blade in BladeCenter S

As with BladeCenter H, IBM i does not have physical access to storage when running in the BladeCenter S. In this case, storage is provided in the chassis itself in the form of one or two Disk Storage Modules (DSMs), each containing up to six SAS drives, and/or SAS-attached external storage in the IBM DS3200/3500/3700. While SATA drives can also be placed in the DSMs and are supported for IBM i, they are not recommended because of their performance and reliability characteristics.
The minimum SAS configuration in the BladeCenter S is one DSM with one SAS drive, or one SAS Connectivity Module and a DS3200/3500/3700. IBM i supports both types of SAS I/O modules that can be placed in BladeCenter S: the SAS Connectivity Module (switch functionality only) and the SAS RAID Controller Module (switch and RAID functionality). However, the two types of SAS modules allow for different storage support:
- The SAS Connectivity Module does not support hardware RAID for the drives in the chassis and supports attachment to DS3200/3500/3700, which provides its own RAID functions.
  o A Disk Storage Module (DSM) is required to be installed in the chassis, even if there are no disk drives installed in it.
- The SAS RAID Controller Module supports hardware RAID for the drives in the chassis, but does not support attachment to DS3200/3500/3700.

Note that whether RAID functionality is provided by DS3200/3500/3700 through the non-RAID SAS module or by the RAID SAS module, it is independent of the SAS expansion adapter on the blade. Neither the CFFv SAS expansion card nor the CIOv SAS expansion card supports RAID itself. Neither card has any read or write cache. A Disk Storage Module (DSM) is required to be installed in the chassis in order to virtualize a SAS tape to a client partition.

4.2.1 Storage concepts for JS12 and JS22 in BladeCenter S

Figure 11: Overview of storage, optical and tape virtualization for IBM i on JS12 and JS22 in BladeCenter S

JS12 and JS22 use the CFFv SAS expansion adapter to access the following storage resources:
- SAS drives in the BladeCenter S and/or LUNs from DS3200/3500/3700, if there is at least one non-RAID SAS module present in the chassis
- LUNs from one or more RAID arrays internal to the chassis, which require both RAID SAS modules to be present

As soon as SAS drives or LUNs have been assigned to the POWER blade, they become available in VIOS as hdiskX devices. VIOS then virtualizes each hdiskX disk directly to the IBM i client partition(s), exactly as with LUNs in the BladeCenter H. Each virtualized SAS drive or LUN is recognized and used in IBM i as a DDxx physical drive. If both internal drives are ordered on the JS12, they are recognized by VIOS as hdisk0 and hdisk1. The internal drives on the blade are used for VIOS and are not virtualized to IBM i client LPARs. Access to the DVD-ROM drive in the BladeCenter S is also provided by VIOS, as with the BladeCenter H.

4.2.2 Storage concepts for JS23, JS43 and PS7xx in BladeCenter S

BladeCenter JS23, JS43 and PS7xx follow the same storage concepts as the JS12 and JS22 in BladeCenter S, with the exception of the CFFv expansion adapter; instead, they use the CIOv expansion card. Figure 12 presents an overview of storage, optical and tape virtualization for IBM i on JS23, JS43 and PS7xx in BladeCenter S.

Figure 12: Overview of storage, optical and tape virtualization for IBM i on JS23, JS43 and PS7xx in BladeCenter S

4.3 Fibre Channel Configuration

4.3.1 Best practices for BladeCenter H and Fibre Channel storage

When configuring LUNs for IBM i (virtualized by VIOS), follow the best practices outlined in chapter 18 of the latest Performance Capabilities Reference manual, available here:

Note that some of its recommendations apply only to IBM i using virtual storage outside of the blade environment. In addition to the guidelines in the Performance Capabilities Reference manual, follow these additional recommendations:
- Use Fibre Channel (FC) or SAS disk drives (and not SATA or FATA) to create the RAID ranks/arrays for production IBM i workloads
- Use 15K RPM drives for medium and heavy I/O IBM i workloads, and 10K RPM drives for low I/O workloads
- When creating a host connection to the WWPN of the Fibre Channel card on the blade, specify at most two specific host ports. Do not create the connection so that the Fibre Channel adapter can connect to all host ports on the storage subsystem, which is the default for some subsystems.
- Properly zone the switches between the adapter ports and the SAN host ports

N-Port ID Virtualization (NPIV)

VIOS uses N_Port ID Virtualization (NPIV) to allow IBM i direct access to a SAN through an NPIV-capable adapter owned by VIOS. NPIV is a FC technology that enables a single port on a FC adapter to be presented to the SAN as a number of independent ports with different WWPNs. NPIV-capable adapters on IBM Power servers and blades allow up to 256 virtual FC ports to be assigned to a single physical FC port. On Power servers and blades, VIOS always owns and manages the FC adapter. To leverage NPIV, an IBM i LPAR must have a virtual FC client adapter created, which connects to a virtual FC server adapter in VIOS. However, the virtual FC client adapter does not allow IBM i to access LUNs already assigned to VIOS in the SAN.
Instead, the virtual FC server adapter in

VIOS is mapped to a FC port on the physical NPIV-capable adapter. This allows the client virtual FC adapter in IBM i direct access to the physical port on the FC adapter, with VIOS having a pass-through role, unlike with vSCSI. The logical drives or LUNs in the SAN for IBM i use are not mapped to the WWPNs of the physical ports on the NPIV adapter, and they do not become available in VIOS first. Instead, when the virtual FC client adapter in the IBM i LPAR is created, two virtual WWPNs are generated by the PowerVM Hypervisor. The SAN LUNs are zoned directly to the first of the two WWPNs on the virtual FC client adapter in IBM i. The second WWPN is used to facilitate Live Partition Mobility. The PowerVM Hypervisor on a Power server or blade has the default capability to create 32,000 virtual WWPNs. When virtual FC client adapters are deleted, WWPNs are not reused. If all of the default 32,000 WWPNs are used, the client must obtain an enablement code from IBM, which allows the creation of a new set of 32,000 WWPNs.

Support statement for the U3 Storage Drawer LTO-5/6 Tape device

This storage drawer supports an LTO-5/6 Fibre Channel attached tape device. This device is supported using NPIV attachment through VIOS to an IBM i client partition. Continue reading in this section for more NPIV configuration details.

Support statements and requirements for FC tape libraries

There are three main hardware components of the NPIV-based SAN support: an 8 Gb Fibre Channel NPIV-capable adapter on the POWER blade, an NPIV-capable FC switch module in BladeCenter H and a supported FC-connected tape media library. NPIV is not supported in BladeCenter S. Please refer to the combination of the BladeCenter Interoperability Guide (BIG) 947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR and the IBM System Storage Interoperation Center (SSIC): to see what combinations of adapters, blade servers and operating systems are supported.
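Because each virtual FC client adapter consumes a pair of generated WWPNs and deleted adapters' WWPNs are never reused, the default 32,000-WWPN pool bounds the number of client adapters that can ever be created before an enablement code is needed. A sketch of that bookkeeping; the function name and accounting are illustrative, not a hypervisor API:

```python
DEFAULT_WWPN_POOL = 32_000    # default hypervisor pool, per the text above
WWPNS_PER_CLIENT_ADAPTER = 2  # each virtual FC client adapter gets a pair

def adapters_remaining(adapters_ever_created: int) -> int:
    """How many more virtual FC client adapters the default WWPN pool can
    cover. Deleted adapters still count, since WWPNs are not reused."""
    used = adapters_ever_created * WWPNS_PER_CLIENT_ADAPTER
    return max(0, (DEFAULT_WWPN_POOL - used) // WWPNS_PER_CLIENT_ADAPTER)

print(adapters_remaining(0))      # 16000 adapters from a fresh pool
print(adapters_remaining(15999))  # 1 adapter left
print(adapters_remaining(16000))  # 0: an enablement code is now required
```

The practical point is that repeatedly creating and deleting virtual FC client adapters slowly drains the pool, even though the configured adapter count never grows.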
FC Intelligent Pass-through Modules (IPMs) are not supported for this solution. The FC switch module must be in full-fabric mode, which is designated with the above feature codes for different numbers of licensed ports. In most cases, the IPM shares the same hardware with the 10- and 20-port full-fabric switch; therefore, the IPM can be upgraded to one of the supported FC switch modules above by purchasing an additional license.

Note that while all supported FC adapters and most supported FC switch modules operate at 8Gb/s, the rest of the SAN does not have to operate at that speed. Additional FC switches outside of the chassis can operate at 4Gb/s or 2Gb/s. While this difference in throughput will have an effect on performance, it will not prevent NPIV from working. Similarly, only the first FC switch connected to the NPIV adapter must be NPIV-enabled. For this solution, that first FC switch is one of the supported switch modules above.

For the current list of supported tape libraries and associated operating system levels, please refer to the developerWorks web page for IBM Removable Media on IBM i: https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/IBM%20Removable%20Media%20on%20IBM%20i

NPIV on a POWER blade minimum requirements

The following are the minimum hardware, firmware and software requirements for NPIV support for IBM i on a POWER blade:

- One supported FC SAN (refer to the next section)
- One supported FC I/O module
  o If using the CFFh adapter on the POWER blade, one #3239 Multi-switch Interconnect Module is also required.
  o Verify that the switch module firmware level is current.
- One supported FC expansion adapter per blade
  o Verify that the adapter firmware level is current.
  o If using the CFFh adapter on the POWER blade, one #3239 Multi-switch Interconnect Module is also required.
- FC cables to connect the FC I/O module in the chassis to the tape library
- For the current list of supported tape libraries and associated operating system levels, please refer to the developerWorks web page for IBM Removable Media on IBM i: https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/IBM%20Removable%20Media%20on%20IBM%20i
  o POWER blade service processor firmware version can be verified with the lsfware command on the VIOS command line.
  o VIOS levels can be verified with the ioslevel command on the VIOS command line.

NPIV Supported SANs

IBM i hosted by VIOS supports NPIV attachment of IBM System Storage DS8000 logical drives. These logical drives must be formatted as IBM i 520-byte sector drives. IBM announced the ability to connect the IBM DS5100/5300 storage subsystem through NPIV to IBM i on Power servers in April 2011; that solution is now supported for IBM i on POWER blades.

Fibre Channel over Converged Enhanced Ethernet (FCoCEE)

In October 2009, IBM announced the availability of FCoCEE for IBM i, AIX and Linux workloads on POWER blades. FCoCEE is a new industry standard that allows a single converged adapter in a server to send and receive both Fibre Channel (FC) and Ethernet traffic over a 10-Gb Ethernet (10GbE) fabric.
The benefits of this technology are:
- Fewer adapters per server
- Fewer switches and therefore lower energy consumption in the datacenter
- A single converged fabric for both FC and Ethernet traffic, enabling lower infrastructure costs and simplified management

FCoCEE works by encapsulating a FC frame in a new type of Ethernet frame. The standard also allows for quality of service (QoS) prioritization for both FC and Ethernet traffic. For a complete general overview of FCoCEE, consult the Fibre Channel Industry Association's website. All POWER blades support FCoCEE through a Converged Network Adapter (CNA). Refer to the BladeCenter Interoperability Guide (BIG) for the supported adapter and switch combinations at:

For IBM i, the CNA is owned by VIOS, as with other blade server expansion adapters. VIOS then provides disk and network virtualization to any IBM i, AIX and Linux client LPARs through the CNA. Note that the converged adapter does not have both FC and Ethernet ports. Instead, the adapter sends and receives both 8Gb FC and 10Gb Ethernet traffic through its two 10GbE ports using FCoCEE frames. Traffic from the CNA is routed to either:

- A 10GbE pass-through module in I/O bays 7 and/or 9, and from there to a separate FCoCEE-capable top-of-the-rack (TOR) switch outside the BladeCenter H. The TOR switch is capable of routing both types of traffic (10GbE and 8Gb FC) to separate networks connected to it. Figure 13 illustrates this FCoCEE configuration for POWER blades in BladeCenter H.
- Or to an FCoE-capable switch module in I/O bay 7 and/or 9. This switch would work with FC switch(es) in I/O bay 5 and/or 6 to send the FC data to the SAN fabric. The same FCoE-capable switch would send 10 Gb Ethernet packets out to other external switch(es). No figure is provided for this combination.
- Or to an FCoE-capable switch module in I/O bay 7 and/or 9 that has both FC and 10 Gb ports, to send the FC data to the SAN fabric and to send 10 Gb Ethernet packets out to other external switch(es). No figure is provided for this combination.

Figure 13: One example of a FCoCEE configuration for POWER blades in BladeCenter H

Originally, POWER blades supported only Virtual SCSI (VSCSI) disk virtualization through FCoCEE. N_Port ID Virtualization (NPIV) through FCoCEE is supported as of 4Q 2010. 10GbE network connectivity for POWER blades is supported. FCoCEE is not supported in BladeCenter S for any blade servers, including POWER blades.

Minimum requirements for FCoCEE

The following minimum hardware and software requirements must be met in order to use FCoCEE for IBM i on POWER blades:
- IBM BladeCenter JS12, JS22, JS23, JS43 or PS7xx
- One 10 Gb CNA for IBM BladeCenter per blade
- One 10Gb Ethernet pass-through module for IBM BladeCenter (P/N 46M6181) in combination with one supported TOR FCoCEE switch, OR a 10 Gb FCoE-capable switch in I/O bay 7 or 9 in combination with a FC switch in I/O bay 5 or 6, OR a 10 Gb FCoE-capable switch in I/O bay 7 or 9 that handles both FC and Ethernet
- Power blade service processor firmware 350_xxx from October 2009 or later
- VIOS 2.2 or higher to support IBM i
NOTE: The PS703 and PS704 require VIOS FP24-SP02.

- IBM i or higher

Note that the QLogic CNA can also be used solely as a 10GbE expansion adapter with no FC frames encapsulated in the 10GbE traffic. In that case, the CNA can connect to the 10Gb Ethernet Pass-through Module for IBM BladeCenter (P/N 46M6181) in I/O bay 7 or 9 and then to an external Ethernet switch; or to the BNT Virtual Fabric 10Gb Switch Module for IBM BladeCenter (P/N 46C7191) in I/O bay 7 or 9. Refer to the BladeCenter Interoperability Guide (BIG), specifically the FCoE Configuration Matrix for JS/PS Blades table, for current configuration options at: 947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR

Implementing FCoCEE

Configuration of virtual disk and networking for IBM i on blade through FCoCEE is similar to that for regular Fibre Channel-attached storage and Ethernet network adapters. As described earlier, the TOR FCoCEE switch provides standard 8Gb FC connectivity to supported storage subsystems. The host ports used on the external storage and the 10GbE ports on the QLogic CNA must be in the same zone, which is created on the TOR switch. The storage subsystem is then configured by creating AIX/VIOS LUNs and mapping them to the WWPNs on the CNA. To locate the WWPNs on the CNA, follow the same steps as for any other blade FC expansion adapter:
- Open a browser session to the AMM and sign in with an administrator user ID
- Click Hardware VPD
- Locate the correct blade by slot number and click the high-speed expansion card designated Ethernet HSEC
- Click Ports
- The CNA's WWPNs are displayed under High Speed Expansion Card Unique IDs, whereas its MAC addresses are displayed under High Speed Expansion Card MAC Addresses

As soon as the LUNs from the storage subsystem are available in VIOS, they are assigned to an IBM i LPAR using IVM/SDMC. IBM recommends using the best-practice FC storage settings in VIOS described in the Best practices for BladeCenter H and Fibre Channel storage section.
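The same information can usually be gathered from the VIOS command line instead of the AMM screens. A minimal sketch, assuming a signed-on padmin session; the device names fcs0/fcs1 are illustrative, and the exact VPD field labels can vary by adapter:

```shell
# Verify the installed VIOS level and blade firmware level first:
ioslevel
lsfware
# List the FC devices presented by the CNA:
lsdev -type adapter | grep fcs
# Display vital product data for one port; the "Network Address"
# field holds that port's WWPN:
lsdev -dev fcs0 -vpd | grep "Network Address"
```

The WWPNs reported here should match the High Speed Expansion Card Unique IDs shown in the AMM Hardware VPD pages.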
The 10GbE FCoCEE ports on the CNA report in VIOS as both two fcsX (FC port) and two entX (Ethernet port) devices, similar to the following:

fcs0 Available 10Gb FCoE PCIe Blade Expansion Card ( f01)
fcs1 Available 10Gb FCoE PCIe Blade Expansion Card ( f01)
ent2 Available 10 Gb Ethernet PCI Express Dual Port Adapter ( )
ent3 Available 10 Gb Ethernet PCI Express Dual Port Adapter ( )

The Ethernet ports can be virtualized to an IBM i client LPAR using the same method as for Host Ethernet Adapter (HEA) or other Ethernet ports, as described in the Configure networking in VIOS (if necessary) or Configure the Virtual Ethernet bridge for IBM i LAN console using IVM sections.

4.4 Create LUNs for the IBM i partition(s) in BladeCenter H

To create LUNs for IBM i (virtualized by VIOS) on DS3200/3400/3500, follow the instructions in chapter 8 of the IBM System Storage DS3000: Introduction and Implementation Guide

(SG247065) from IBM Redbooks, available at:

To create LUNs for IBM i (virtualized by VIOS) on DS3950, DS4700, DS4800, DS5020, DS5100 or DS5300, follow the instructions in chapter 3 of the IBM Midrange System Storage Implementation and Best Practices Guide (SG246363) from IBM Redbooks, available at:

To create LUNs for IBM i (virtualized by VIOS) on DS8000, follow the instructions in section 3 of the IBM System Storage DS8000 Series: Architecture and Implementation (SG246786) from IBM Redbooks, available at:

To create LUNs for IBM i (virtualized by VIOS) on the SAN Volume Controller (SVC), follow the instructions in section 10 of Implementing the IBM System Storage SAN Volume Controller V4.3 (SG246423) from IBM Redbooks, available at:

To create LUNs for IBM i (virtualized by VIOS) on the XIV storage subsystem, follow the instructions in chapter 4 of the IBM XIV Storage System: Architecture, Implementation, and Usage (SG247659) from IBM Redbooks, available at:

To create LUNs for IBM i (virtualized by VIOS) on the Storwize V7000 storage subsystem, follow the instructions in chapter 5 of Implementing the IBM Storwize V7000 from IBM Redbooks, available at:

To create LUNs for IBM i (virtualized by VIOS) on the Storwize V3700 storage subsystem, follow the instructions in chapter 5 of Implementing the IBM Storwize V3700, available at:

4.5 Multi-path I/O (MPIO)

MPIO refers to the capability of an operating system to use two separate I/O paths to access storage, typically Fibre Channel or SAS. For IBM i on blade, that capability resides with VIOS, because IBM i is not directly accessing the SAN. Additionally, IBM i does not currently support the ability to have two paths to the virtual disk units (LUNs) in VIOS from a virtual client/server perspective. JS12 and JS22 can achieve a level of redundancy with FC storage by using both ports on the CFFh adapter and a pair of MSIMs and FC switch modules.
The I/O redundancy from the two ports on the adapter can be extended by connecting to separate external FC switches or directly to separate ports on the storage subsystem. Similarly, both ports on the CFFv SAS adapter and two SAS modules can be used to achieve a level of redundancy with SAS storage. JS23, JS43 and PS7xx blades support this level of MPIO by using both ports on a CFFh or CIOv adapter. These blade servers can also achieve adapter-level redundancy with Fibre Channel storage by employing both a CFFh and a CIOv card simultaneously. To complete such a configuration, two MSIMs and two FC modules in I/O bays 8 and 10 would be required, as well as two FC modules in bays 3 and 4. The JS43, PS702 and PS704 are double-wide blades that can support redundant adapter installation. SAS MPIO is available in the BC-S chassis using CFFv/CIOv adapter(s) and SAS switch modules in I/O bays 3 and 4.

4.5.1 MPIO drivers

VIOS includes a basic MPIO driver, which has been the default instead of the RDAC driver since November. The MPIO driver can be used with any storage subsystem that VIOS supports and is included in a default install. In this case, configuration is required only on the storage subsystem in order to connect a single set of LUNs to both of the ports on a FC adapter owned by VIOS. VIOS also supports the Subsystem Device Driver Path Control Module (SDDPCM) for certain storage subsystems. Examples of supported subsystems include the Storwize V7000, SAN Volume Controller (SVC) and IBM DS8000. To find out whether a particular storage system supports SDDPCM for VIOS, refer to its interoperability matrix on the SDDPCM Web site: Note that there are separate support statements for AIX and VIOS. If SDDPCM is supported on your storage subsystem for VIOS, download and install the driver following the instructions in the Multipath Subsystem Device Driver User's Guide at the same location. Refer to the System Storage Interoperation Center (SSIC) for the supported MPIO drivers at:

4.6 SAS Configuration

Best practices for BladeCenter S and SAS storage

For performance information on IBM i in BladeCenter S, refer to section 18 of the latest Performance Capabilities Reference manual, at:

Note: IBM i is architected to perform better with more concurrent I/Os to more disk drives.
Do not architect your solution based just on the total storage size available.

Best practices when using the SAS Connectivity Module

Because the SAS Connectivity Module does not support RAID, there are two possibilities for disk protection once the SAS drives in the chassis have been assigned to the POWER blade and are available in VIOS:
- Use Logical Volume Manager (LVM) mirroring in VIOS: create a volume group from the available SAS drives, then create logical volumes and present those to IBM i
- Directly virtualize each SAS drive to IBM i and use mirroring in IBM i

The recommendation is to use mirroring in IBM i. Using logical volumes extends the path of each I/O request in VIOS by involving the LVM layer. Mirroring in IBM i allows the use of existing IBM i disk management skills. For a higher level of redundancy, it is strongly recommended to assign the same number of SAS drives in each DSM and mirror between them in IBM i. SAS drives in the BladeCenter S are assigned to blades by changing the configuration of the SAS I/O module. When using LUNs from DS3200/3400/3500, they already benefit from RAID protection in the external storage subsystem. Each LUN should be virtualized directly to IBM i in VIOS.
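The recommended approach, direct virtualization of each drive or LUN, can be sketched on the VIOS command line. This is a minimal illustration, assuming hypothetical device names: hdisk2 is a SAS drive assigned to this blade, and vhost0 is the virtual SCSI server adapter backing the IBM i LPAR:

```shell
# Identify the physical SAS drives visible to VIOS:
lsdev -type disk
# Find the vhost (virtual SCSI server) adapter for the IBM i LPAR:
lsmap -all
# Map the drive directly to the IBM i partition, bypassing the
# LVM (logical volume) layer entirely:
mkvdev -vdev hdisk2 -vadapter vhost0 -dev ibmi_disk1
```

Repeat the mkvdev mapping for each drive; the mirroring itself is then configured inside IBM i, per the recommendation above.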

To create LUNs for IBM i (virtualized by VIOS) on DS3200/3400/3500, follow the instructions in chapter 8 of the IBM System Storage DS3000: Introduction and Implementation Guide (SG247065) from IBM Redbooks at:

Best practices when using the SAS RAID Controller Modules

The recommendation to directly virtualize LUNs to IBM i instead of using storage pools in VIOS applies in this case as well. Furthermore, it is strongly recommended that LUNs for IBM i on a POWER blade be created on a separate RAID array in the chassis. The goal is to avoid disk arm contention between IBM i and other production workloads on the same or other blades in the BladeCenter S. While this approach lessens the flexibility of RAID array and LUN creation using the RAID SAS module, it helps ensure the necessary response time and throughput demanded by IBM i production workloads. As with other storage subsystems, the number of physical drives in the RAID array also plays a significant role, with a higher number of drives improving performance.

Note: Use of the SAS RAID Controller Modules is restricted to the disks in the BC-S chassis. You cannot connect an external SAN with these modules. You can refer to the IBM SAS RAID Controller Module Installation and User Guide at: er.io_sasraid.doc/rssmiug.pdf

Configuring storage in BladeCenter S using the SAS Connectivity Module

SAS I/O module configurations

The configuration of the SAS I/O module(s) in the BladeCenter S determines which SAS drives in the DSM(s) are assigned to which blades. This configuration is kept on non-volatile storage in the module(s). If two modules are present, it is necessary to change the SAS drive assignment in only one of them; the changes are replicated to the second one. Eight pre-defined configurations and four user-defined configurations are stored on a SAS I/O module. To use a pre-defined configuration, it has to be activated by interacting with the module.
User-defined configurations are blank until explicitly created. The following tables summarize the pre-defined configurations for a single SAS I/O module and for two SAS I/O modules, respectively.

Pre-defined configurations for a single SAS I/O module:

Pre-defined configurations for two SAS I/O modules:

Notice that the same four configurations of drives and blades are used for both a single SAS I/O module and two modules. Any of the four configurations can be used for IBM i in BladeCenter S. However, keep in mind the further mirroring required in IBM i. Therefore, pre-defined configuration number 02 or number 03 will likely not be applicable to the majority of IBM i in BladeCenter S implementations, because each provides only one mirrored pair of drives in IBM i. The choice among the remaining configurations depends on the number of blades (Power and x86) in the chassis. For example, configuration number 04 or number 05 provides the greatest number of arms for a single IBM i partition on the blade (12); however, it does not leave any drives in the chassis for other blades. One approach to addressing the disk requirement for x86 blades in this case is to use the two supported embedded drives on the x86 blade, provided they allow for sufficient disk performance for the x86 application. If ordered, the two embedded SAS drives on the POWER blade are used to install and mirror VIOS.

Tape virtualization does not impact the choice of a pre-defined zone configuration: all pre-defined configurations assign all external ports to all of the blades. Therefore, an administrator can connect a tape drive to any external port on a SAS I/O module and VIOS will recognize and virtualize the drive to IBM i, provided that the blade has a CFFv or CIOv SAS expansion adapter. The same tape drive can be shared across multiple blade servers, including x86 blades, one at a time.

Activate a pre-defined SAS I/O module configuration

If one of the pre-defined drive configurations fits the storage requirements of the IBM i in BladeCenter S implementation, the only required action to assign drives to the blade is to activate that configuration.
The simplest method is to use an already familiar interface, a browser session to the AMM:
- Start a browser session to the AMM of the BladeCenter S and sign in with USERID or another administrator ID
- Click Configuration under Storage Tasks
- Click the SAS I/O module you want to configure. If two modules are present, it is necessary to activate the selected configuration on only one
- Show all possible configurations
- Click the option button next to the pre-defined configuration you want to activate
- Click the Activate Selected Configuration button at the bottom of the right screen pane

The selected number of SAS drives is now available to VIOS on the POWER blade to virtualize to IBM i. There are three other methods to activate a pre-defined configuration:
- A Telnet CLI to the SAS I/O module

- A browser session directly to the module
- The IBM Storage Configuration Manager GUI from a PC

However, using the AMM is the most straightforward way to simply activate a pre-defined configuration. The AMM does not have the capability to create a custom user configuration for the SAS drives in the chassis.

Create a custom SAS I/O module configuration

If none of the pre-defined configurations meets the storage requirements of the IBM i in BladeCenter S implementation, a custom drive configuration must be created by changing one of the four available user-defined configuration templates on the SAS I/O module(s). Two interfaces are available for creating a custom configuration:
- A Telnet CLI to the SAS I/O module
- The Storage Configuration Manager GUI from a PC

It is recommended to use the Storage Configuration Manager (SCM), unless you are already familiar with the SAS I/O module command line. SCM can be downloaded following the steps in the General Download instructions for Fix Central section. Click the Others link and follow the download steps. To install Storage Configuration Manager (SCM) and create a custom SAS I/O module configuration, follow the instructions in chapter 4.4 of the Redpiece Implementing the IBM BladeCenter S Chassis (REDP-4357), available here:

When installing SCM, choose to install only the SAS Module in BCS option. Because custom SAS zone configurations are created from a blank slate, special care must be taken if VIOS is going to virtualize tape to IBM i. All blades that require access to the tape drive must specifically be configured to access the external port on the I/O module to which the tape drive is connected.
Note: if the SAS I/O module configuration is changed after VIOS has been installed on a POWER blade, the cfgdev command must be run on the VIOS command line, or from IVM use Hardware Inventory, to detect any changes in the drive configuration.

Configuring SAS SAN Storage using a SAS Connectivity Module

The SAS Connectivity Module supports connections to a DS3200/3500/3700 SAS SAN (the SAS RAID Controller Module does not support this). As part of your SAS zoning in the prior sections, you need to select or configure a zone that includes the port(s) on the SAS Connectivity Module that are cabled to the SAN's host port. Any blade that needs to access the SAN needs access to those port(s). Additionally, the SAN host configuration needs to know the SAS adapter port IDs (similar to a WWPN) to map the storage to a specific blade server. For a JSxx, PS700, PS701 or PS702 blade, use the AMM web interface to see the Hardware vital product data (VPD) of the blade slot. There is a Ports tab that contains the SAS IDs. For a PS703 or a PS704, you have to access the SAS Connectivity Module's web interface through the AMM: expand I/O Module Tasks -> Configuration -> select the I/O bay (usually 3, then 4) -> Advanced Options -> Start Web Session -> Start Session. Sign on to the web session with USERID as the user ID and PASSW0RD (with a zero, not the letter o) as the password. Select Monitor SAS Module and you should see a list of blades by

slot number; the Address field is the SAS ID, so you can use that ID to associate it as a host in the SAN.

Figure 14: Sample screen shot of SAS addresses

Repeat this process for I/O bay 4, if in use, to get the SAS ID for the second port on the adapter.

Configuring storage in BladeCenter S using the SAS RAID Controller Module

The SAS RAID Controller Module, also known as the RAIDed SAS Switch Module (RSSM), provides RAID support for IBM i when using only the drives in a BladeCenter S. Before continuing with the configuration steps, review the best practices for this environment in the Best practices when using the SAS RAID Controller Modules section. There are two general steps involved in assigning RAID storage in the chassis to a POWER blade; as soon as the storage is available to the blade and VIOS, the LUNs are virtualized to IBM i as described in the Create the IBM i partition using IVM section. The two high-level steps are:
- Use the correct SAS switch zoning configuration so that the POWER blade is allowed access to the RAID subsystem in the RSSM
- Create the appropriate configuration in the RAID subsystem and assign the new LUNs to the POWER blade

SAS zoning for RSSM

With a new BladeCenter S, the correct SAS zone to allow all blades access to the RAID subsystem in the RSSM is already in place: the default pre-defined zone configuration gives all six blade slots access to the RAID controller and to all external SAS ports. As with the non-RAID SAS switch modules (NSSMs), the SAS zone configuration is replicated between the two RSSMs. If a change is required, it needs to be made only to one of the two RSSMs.
To check whether the default zone configuration is active on the RSSMs, use the AMM browser interface:
- Log into the AMM with an administrator ID
- Click Storage Tasks and then Configuration
- Ensure that the Pre-defined Config 10 option is active on both RSSMs

A user-defined configuration can also be used to limit the blades that can access the RAID subsystem or a certain external SAS port. As with NSSM, the Storage Configuration Manager (SCM) is used to manage zoning in the RSSM. Additionally, SCM is the interface used to create the RAID configuration. Refer to the next section on using SCM with RSSM.

Configuring RSSM with Storage Configuration Manager

Follow the steps in the General Download instructions for Fix Central section and then return here. From the list of links for download, click Other. Download the latest version of SCM from this page. Install the interface using the instructions in chapter 4.4 of Implementing the IBM BladeCenter S Chassis (REDP-4357), available at:

Make sure to perform a full installation with all the management options. As soon as SCM is started, perform the following steps to add your RSSMs to the interface:
- Sign in with a local Windows ID and password. You are not signing onto the RSSMs yet.
- Expand BC-S SAS RAID Module and then Health
- Click All Resources
- Click Add SAS RAID Modules
- Enter the IP address of the SAS switch component of the RSSM in I/O bay 3
  o Both the SAS switch and the RAID subsystem in both RSSMs must have external IP addresses assigned. Refer to the Assigning an IP address to I/O modules section for instructions
- Enter an ID and password for both the SAS switch and the RAID subsystem of the RSSM in I/O bay 3
  o The default user name and password for the SAS switch are USERID and PASSW0RD, respectively.
  o The default user name and password for the RAID subsystem are USERID1 and PASSW0RD, respectively.
- Optionally, enter a nickname for these RSSMs
- Click OK

In most cases, it is not necessary to create a custom SAS zone configuration. If you do need to create one, use the following steps (otherwise, skip to the next set of bullets):
- Use the option to select the SAS switch in bay 3
- From the list, select Configuration, then SAS Zoning
- Follow the instructions in chapter 4.4 of the Implementing the IBM BladeCenter S Chassis (REDP-4357) Redpiece, available at:

To create the RAID array and LUN configuration, perform the following steps:
- On the All Resources screen, select the RAID subsystem
- From the list, select Configuration, then Storage
- Click the Storage Pools tab. Storage pools are the RAID arrays you can create in the chassis using the RSSM
- Click Create Storage Pool
- Select Manually choose drives (advanced) and click Next
- Select the correct SAS drives (preferably in both DSMs) and RAID level for this storage pool and then click Next
- Create the correct number of LUNs (volumes) with the correct capacity and add them to the list of new volumes.
- Then click Next
- All blades that contain a CFFv or CIOv SAS expansion card and are allowed to access the RAID subsystem are identified in the Select Hosts area
  o If your blade is not identified, click Discover Hosts
  o If the blade still does not appear, check the SAS zoning configuration and also check whether the SAS adapter on the blade is installed and operational
- Click your blade in the Select Hosts area
- Select all the LUNs that should be attached to that blade in the Select Volumes area

- Click Map Volumes. This action will map the selected LUNs to both ports on the SAS expansion card. Note: The maximum number of LUNs per blade adapter port is eight.
- Click Next and then click Finish

If VIOS is already running on the blade, use the cfgdev command on the VIOS command line to recognize the new LUNs.

4.7 Using the DVD drive in the BladeCenter with VIOS and IBM i

VIOS virtualizes the DVD drive in the media tray of the BladeCenter to IBM i. When the media tray is assigned to the blade using the steps described earlier in this paper, the DVD drive becomes physically available only to VIOS. It is then assigned to an IBM i partition using the IVM GUI as described earlier (cd0 is the name of the physical DVD drive) or through the virtual storage management interface using an HMC or SDMC. When the DVD drive must be reassigned to another IBM i LPAR on the same blade, the OPTxx device in IBM i must first be varied off. Then the cd0 device can be assigned to the second LPAR using the same method described in the Moving the BladeCenter DVD drive to another blade using IVM section. When the DVD drive must be reassigned to another IBM i LPAR on a different blade, or after it has been used by a different blade and must be assigned back to the same LPAR, you need to perform the following steps:
- Vary off the OPTxx device in IBM i (if it was assigned to a POWER blade and an IBM i LPAR)
- Use the AMM browser interface to assign the media tray to the correct blade
- Telnet to VIOS on the correct blade and sign in as padmin
- Type cfgdev and press Enter
- Assign the DVD drive in the media tray (cd0) to the correct IBM i partition on the blade, as described in the Moving the BladeCenter DVD drive to another blade using IVM section. Note that the HMC and SDMC do not support moving a tape device.
- Use the VIOS CLI following the steps in the Moving a tape device to another virtual server using the VIOS CLI section.
- Vary on the OPTxx device in IBM i

Note that you must run the cfgdev command every time the DVD drive is assigned to a POWER blade, in order for VIOS to recognize that the device is present again.

USB 2.0 access

VIOS uses a USB 2.0 driver for physical access to the DVD drive. As a result, all client LPARs using the DVD drive as a virtual optical device from VIOS can perform faster read and write operations. One benefit of this enhancement is a shorter install duration for the IBM i Licensed Internal Code (LIC) and operating environment, as well as for Program Temporary Fixes (PTFs).

Writing to DVD-RAM media in BladeCenter H

In October 2009, VIOS gained the capability to write to DVD-RAM media when using feature code 4154, UltraSlim Enhanced SATA DVD-RAM Drive, in BladeCenter H or BladeCenter S. For IBM i as a client of VIOS, all DVD-RAM operations that are supported by IBM i are available when using this drive. This improvement allows IBM i to perform small saves and restores (up to 4.7 GB) using this DVD drive.
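The DVD reassignment steps above can be sketched on the VIOS command line; this assumes the IVM GUI alternative is not used, and vhost1 is a hypothetical virtual SCSI server adapter name for the target IBM i partition:

```shell
# After the AMM has moved the media tray to this blade,
# rescan so VIOS detects the DVD drive again:
cfgdev
# Confirm the optical device is present (cd0 should show Available):
lsdev -type optical
# Map the DVD drive to the IBM i partition's virtual SCSI
# server adapter:
mkvdev -vdev cd0 -vadapter vhost1
```

Remember to vary off the OPTxx device in IBM i before moving the drive, and vary it back on afterward, as described above.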

5 VIOS and IBM i installation and configuration

5.1 Obtain the VIOS and IBM i installation media and fixes

VIOS is part of IBM PowerVM Editions (formerly known as Advanced Power Virtualization) and is required in the IBM i on POWER blade environment. Work with your local sales channel to ensure that PowerVM (Standard or Enterprise Edition) and the latest fix pack are part of the POWER blade order. Refer to the BladeCenter Interoperability Guide (BIG) for current configuration options at:

Work with your local sales channel to obtain the IBM i install media and the latest PTF package. You can refer to the supported environments page to verify that you have the minimum supported release of IBM i at:

IBM i PTFs can also be obtained from IBM Fix Central:

5.2 Install PuTTY (optional)

As mentioned above, IVM/SDMC is used for both LPAR and virtual resource management in this environment. IVM/SDMC requires networking to be configured in VIOS. To install and configure VIOS, and later to save or restore it, a Telnet session to the AMM or to VIOS is required. Use the PuTTY application any time a Telnet session is mentioned in this paper. PuTTY provides better terminal functionality than the Telnet client included with Windows and can be downloaded at no cost from the following URL:

Install PuTTY on the same PC used for LAN console and the browser connection to IVM.

5.3 Install SDMC (optional)

SDMC can be used in place of IVM to provide virtualization management, including the management of Virtual Servers or LPARs, including those on a Power blade in the IBM BladeCenter. The SDMC provides similar functionality to IBM Systems Director Express Edition for Power and can be installed as a hardware appliance or a software appliance. NOTE: Before you choose to use SDMC to manage your server, be aware that the steps to revert back to using IVM to manage your server are quite involved and could result in partition definition loss.
When SDMC is ordered as a hardware appliance, it comes pre-installed on IBM x86 hardware. It consists of a virtual image (guest) that resides on a Linux host. Installation DVDs are shipped with the SDMC in the event you need to reinstall the appliance. When SDMC is used as a software appliance, it is supported only when installed on IBM x86 hardware. The software appliance also consists of a host/guest system, and the guest configuration must be installed as a virtual machine with a selected kernel-based virtual machine (KVM) or VMware vSphere hypervisor. If using SDMC as a software appliance, complete the install using the guide in chapter 2 of the IBM Systems Director Management Console Introduction and Overview Redbook at:

5.4 SDMC Discovery of POWER blades (optional)
SDMC supports the discovery of POWER blades. Discovery is the process in which the Systems Director support in the SDMC is used to identify and establish connections with network-level resources that it can manage. SDMC discovery of servers and blades is a two-step process: system discovery and request access. After a system has been discovered by the SDMC, it is displayed under Hosts on the Resources tab of the SDMC welcome page. Requesting access is an authentication step to ensure you have correct authorization to access a system. Follow the steps in Section of the IBM Systems Director Management Console Introduction and Overview Redbook at:

to discover blade resources and request access to them.

5.5 HMC Discovery of POWER blades
The HMC and the Flexible Service Processor (FSP) on the Power blade need to be configured on the same subnet, or on subnets that are routable to one another. See the Prerequisite Flexible Service Processor (FSP) IP configuration for HMC/SDMC access section for more details. Once the HMC and FSP can communicate with one another, you can discover the Power blade by following this sequence from the HMC interface:
- Left-click the Systems Management link in the left-hand navigation pane.
- Select the checkbox to the left of the word Servers in the right-hand pane.
- In the bottom right-hand pane, expand Connections and click Add Managed System.
- In the new pane, enter the IP address of the FSP and its password. Click OK.
The HMC will attempt to discover the FSP and access it. If successful, the FSP will be added to the list of Servers. Click the word Servers to see the added FSP/blade.

5.6 Planning for and Installation of VIOS

Memory recommendations for VIOS and the Power Hypervisor
A minimum of 4 GB of memory is recommended for VIOS with an IBM i client. You should plan on leaving 512 MB of memory unassigned for use by the system firmware (the POWER Hypervisor).
To observe the amount of memory the Hypervisor is currently using, click View/Modify Partitions in IVM and check the Reserved firmware memory value. Note: There is no equivalent way to view the amount of memory used by the POWER Hypervisor through the SDMC.

Processor recommendations for VIOS
For most IBM i on blade configurations, it is recommended to configure VIOS with at least 0.5 processing units and one assigned (or desired) virtual processor. If VIOS is hosting three or more IBM i or other partitions, consider changing its processor allocation to one dedicated processor. Note that you must reboot VIOS to change from shared to dedicated processors.

Note: The Workload Estimator (WLE) can be used to estimate the resources needed by a VIOS that is hosting multiple workloads, by using the following link: 304.ibm.com/systems/support/tools/estimator/index.html

5.7 Using the media tray to install VIOS
VIOS can be installed from a DVD image. VIOS can also be installed from a Network Installation Manager (NIM) server, but that is not described in this paper. If you are going to install VIOS from DVD, you must assign the media tray in the BladeCenter to the new POWER blade. You can either press the MT button on the front of the blade itself or use the AMM browser interface:
- Click Remote Control under Blade Tasks
- Click the Start Remote Control button
- Select the POWER blade you want to install from the Media Tray list
- Close the Remote Control web interface
- Place the VIOS DVD in the media tray

5.8 Preinstalled VIOS or non-HMC/SDMC install of VIOS
The instructions in this section explain how to install VIOS on the POWER blade. VIOS may be preinstalled on the internal SAS drive of your blade. In that case, you can skip to the Configure networking in VIOS (if necessary) section. However, you should use the instructions in the Opening a console for VIOS using Serial-over-LAN (SOL) and Powering on the blade from the AMM sections to open a VIOS console and power on the blade. After approximately 5 minutes, if a VIOS login prompt appears on the console, the install was performed by IBM. If VIOS was not preinstalled, proceed with the following instructions.

The recommended location for the VIOS install differs by blade. The JS12, JS23 and PS700 provide the option of two integrated SAS drives on the blade. It is recommended to order both integrated drives on the JS12, install VIOS on the first SAS drive and mirror the entire install to the second drive, but this is not a requirement. A SAN LUN can also be used for VIOS.
In the case of using the JS12 in the BladeCenter S, this approach allows more of the drives in the chassis to be assigned to IBM i. The JS22 supports only a single SAS drive. If using the JS22 in the BladeCenter H, it is recommended to install VIOS on LUNs on the SAN and use two MSIMs and two Fibre Channel I/O modules for redundancy.

After the POWER blade is installed in the BladeCenter, only its Service Processor (SP) is powered on. It communicates with the AMM over an internal management network in the BladeCenter through the Ethernet switch in IO Bay 1. The AMM also provides an ASCII console to the first partition on the POWER blade (VIOS) using a Serial-over-LAN (SOL) connection. Refer to the next section and the Opening a console for VIOS using the AMM remote control interface section for more information.

Opening a console for VIOS using Serial-over-LAN (SOL) when using IVM
To open a console for the VIOS install, start a Telnet session to the AMM's IP address. Log in with the same AMM user ID as for the browser interface.

Type env -T blade[x] and press Enter, where x is your blade slot number in the BladeCenter. Note that the brackets are part of the command. Then type console. The console will appear inactive until the blade is powered on and Partition Firmware (PFW) on it starts.

NOTE: For SOL to function with an Intelligent Copper Pass-thru Module (ICPM):
o The ICPM must be installed in IO Bay 1.
o The external port corresponding to the blade server must have an Ethernet cable attached to it with a link to an upstream switch. For example, to support SOL for the blade server in bay 2, you must plug an Ethernet cable into RJ-45 port 2 of the ICPM and have this Ethernet cable connected to an upstream switch.

Note that the SOL console for VIOS does not need to be restarted if the VIOS partition is rebooted. However, the console will time out and disconnect after 5 minutes of inactivity.

Opening a console for VIOS using the AMM remote control interface
The AMM includes a Remote Control function that can be used for some Power blades, with the following restrictions:
- The Remote Control function of the AMM can provide a console for VIOS on non-PS7xx blades only (the PS7xx blades do not have the required hardware component for remote control installed on them).
- The Remote Control console must be used if the BladeCenter is using a Copper Pass-thru Module (CPM). The CPM was an earlier version of the Intelligent Copper Pass-thru Module (ICPM).
o The CPM cannot be used with PS7xx blades (see the first bullet above); use the SOL console mentioned in the prior section.

To open a console to the blade through Remote Control on the AMM:
- Connect to the AMM with the browser interface and sign in with an administrator ID
- Click Remote Control under Blade Tasks
- Click the Start Remote Control button
This starts a Java application, which is dependent upon the level of Java on your PC. The levels currently supported are listed on the Remote Control pane.
- From the KVM list, select the correct POWER blade
- To type into the Remote Control console, click anywhere in the actual console window. To exit the data-entry mode, press the left Alt key

5.9 Creating a VIOS partition using the HMC
POWER blades that are going to host IBM i and be managed by an HMC require VIOS to be the first partition on the blade. The first install you do on the blade is for VIOS. First follow the steps in the Prerequisite Flexible Service Processor (FSP) IP configuration for HMC/SDMC access section. Then, on the HMC, ensure that you have entered the activation code that allows VIOS partition configuration:
- Click the Managed Server list and select the POWER blade by clicking the checkbox to the left of the blade server name.
- Look under Configuration, expand Create partition and select Virtual IO Server. If you do not see VIOS as an option, then you have not entered your PowerVM activation code under the Capacity on Demand interface. Locate the code and enter it.
- Configure the CPU and memory requirements based on the prior sections of this document.

5.10 Creating a VIOS virtual server using SDMC
POWER blades that are going to host IBM i and are being managed by SDMC require VIOS to be the first partition on the blade. The first install you do on the blade is for VIOS. With IVM, as soon as VIOS has its IP address configured, you can start a browser to that IP address and use IVM to manage the already installed VIOS partition. With SDMC you need to choose which virtual server is for VIOS (you may have two for redundancy) and create a virtual server for VIOS to be installed into. Refer to the Create the IBM i Virtual Server using SDMC section, but use the procedure to create a VIOS virtual server first. Then follow the steps in the Prerequisite Flexible Service Processor (FSP) IP configuration for HMC/SDMC access section, followed by the Opening a VIOS console using SDMC section, and then either the Installing VIOS from DVD section or the Installing VIOS from NIM section to proceed with the VIOS install. Finish with the Completing the install section and the sections after it.

Create the IBM i client partition using HMC
The IBM i client partition configuration as a client of VIOS is the same as that for a client of an IBM i 6.1 host partition. Refer to the Creating an IBM i logical partition that uses IBM i virtual I/O resources using the HMC topic in the Logical Partitioning Guide at:

Prerequisite Flexible Service Processor (FSP) IP configuration for HMC/SDMC access
The FSP on the Power blade will need to have an IP address assigned that is on the same IP subnet as the HMC in order for the HMC to communicate with and manage the partitions on that blade server. To do this configuration, go to the AMM browser interface:
- Expand Blade Tasks and click Configuration.
- Click the Management Network tab at the top of the screen.
- Select the Power blade by clicking its name.
- Set a static IPv4 address and click Save. This will set the IP address on the FSP.
Note: making this change will allow access to the FSP through the switch in IO Bay 1, but it may break the VIOS TCP connection that will be configured using the same port on the blade (in VIOS terms, this is ent0). If this occurs, ensure that you configure the VIOS TCP connection by configuring an LHEA on the VIOS partition profile and then using that LHEA port to configure VIOS's TCP/IP.

Opening a VIOS console using the HMC
Another way to provide a console for VIOS is through the HMC. In order to access the console this way over the first Ethernet port, you must first disable Serial over LAN (SOL) in the AMM by selecting Blade Tasks -> Configuration -> Serial Over LAN. Then select the blade you want to use, select Disable Serial Over LAN from the list and click Perform action. To start a console in the HMC:
- Select the VIOS partition from the Resources list for the managed server.
- Click Operations -> Console Window -> Open Terminal Console. The console runs as a Java application.

Opening a VIOS console using SDMC
Another way to provide a console for VIOS is through the SDMC. In order to access the console this way over the first Ethernet port, you must first disable Serial over LAN (SOL) in the AMM by selecting Blade Tasks -> Configuration -> Serial Over LAN. Then select the blade you want to use, select Disable Serial Over LAN from the list and click Perform action.
To start a console in SDMC:
- Select the virtual server from the Resources list
- Right-click the virtual server and click Operations -> Console Window -> Open Terminal Console

5.12 Powering on the blade from the AMM
- In your browser session to the AMM, click Power/Restart under Blade Tasks in the left-hand menu
- Select the checkbox next to the blade you want to power on, and then click the Power On Blade link below
- You can monitor the SP and VIOS reference codes by clicking Blade Service Data under Service Tools
- Select the blade you are installing, and then click the Refresh button to update the reference codes shown

5.13 Powering on the blade from the HMC
- From the Managed Server list, select the blade and select Operations -> Power On

5.14 Powering on the blade from the SDMC
- On the Resources tab of the SDMC, select the blade from the list of hosts
- Select Actions -> [blade] -> Operations -> Power On

5.15 System Reference Codes during boot
View the blade's system reference codes from any of the following:
o The SDMC Welcome page -> select the host for the POWER blade, right-click and select Service and Support -> Reference Code history
o The AMM -> Service Tasks -> choose the blade -> Service Information -> SRCs
o The HMC -> select the Servers link; the SRC information is shown as a column in the pane

5.16 Accessing the SMS menus
On the console for VIOS, as soon as the PFW screen starts to load (refer to Figure 15), press 1 quickly to enter the System Management Services (SMS) menu. If you miss the opportunity to press 1, PFW will attempt to boot the partition from the default boot device. If VIOS is not yet installed, the boot will fail and the SMS menu will eventually be displayed.

Figure 15: Partition Firmware screen

5.17 Installing VIOS from DVD
On the main SMS menu:
- Click 5. Select Boot Options
- Click 1. Select Install/Boot Device
- Click 3. CD/DVD
- Click 6. USB
- Select Option 1 to choose the USB CD-ROM
- Click 2. Normal mode
- Click 1. Yes to exit the SMS menus and start the VIOS install. Allow approximately 5 minutes for the install program to load from the media.

The AMM remote control interface also supports a remote media option that can mount the DVD drive of a PC that is accessing the AMM browser interface, or an .iso file on that PC's disk drive. Typically this device shows up as cd1 behind the media tray cd0 device. Another option is a USB-attached DVD drive plugged into the media tray.

5.18 Installing VIOS from NIM
- On the main SMS menu, click the menu option 2. Setup Remote IPL (Initial Program Load)
- Select the port to use (for most scenarios, port 1)
- Click IPv4
- Select 1. BOOTP as the Network Service
- Select 1. IP Parameters to fill in the necessary data. Fill in the following information for the IP Parameters by selecting the item number and typing in the data:
o Client IP Address: the IP address you have chosen for this VIOS partition
o Server IP Address: the NIM server's IP address
o Gateway IP Address: the gateway IP for this VIOS partition
o Subnet Mask: the correct subnet mask for the network segment of which VIOS is going to be a part
- After entering the necessary information correctly, press the Escape key to go back a page and click 3. Ping Test
- Click 1. Execute Ping Test to test the connection. If the ping test is successful, press the M key to go back to the SMS main menu and continue the install.
(If the ping test is not successful, check the network settings on the BladeCenter.)
- On the SMS main menu, click 5. Select Boot Options
- Click 1. Select Install/Boot Device
- Click 6. Network for the Device Type
- Click 1. BOOTP as the Network Service
- Select the port that you just configured above (for most scenarios, port 1)
- Click 2. Normal Mode Boot
- Click 1. Yes to start the install (this will exit the SMS menus and the install image will start downloading from the NIM server)

5.19 Completing the install
This section applies to installing from DVD or NIM:
- As soon as the first "Welcome to the Virtual I/O Server" screen has disappeared, enter 1 at the prompt to confirm this terminal as the console and press Enter (the number will not appear on the screen)
- Enter the correct number option for the language you want to use during the install
- On the Welcome to BOS menu, click Change/Show Installation Settings and Install
- Click option 2 to verify that the correct disk unit is selected for installation. The SAS drive internal to the blade is detected as hdisk0, whereas any LUNs attached to VIOS are detected as hdisk1, hdisk2 and so on
- Click option 0 twice to begin the installation
- After the installation has completed and VIOS has been rebooted, log in with the default administrator user ID, padmin. You will be prompted to enter and verify a password for the padmin user ID
- Press Enter at the prompt to accept licensing, then enter license -accept to accept the VIOS license agreement.

An issue that you may run into has to do with VIOS and the root and /usr file system sizes. The problem symptom is that a command simply hangs, with no feedback from the system. A possible workaround is to increase the size of the root file system. From the VIOS command line interface, enter:
o df /
Expect to see output something like this (values will vary):
Filesystem    512-blocks    Free    %Used    Iused    %Iused    Mounted on
/dev/hd4      ...           ...     ...      ...      ...       /
If the Free amount is small, use chfs -a size=###M / to increase the amount of space and, hopefully, avoid this in the future.

Mirroring of VIOS
Because VIOS provides storage, optical and networking I/O resources to IBM i, any IBM i LPAR on the blade depends on VIOS to be operational. Therefore, it is strongly recommended to employ disk protection for the VIOS installation.

When IBM i is implemented on a POWER blade where there is a single internal SAS drive on the blade itself, that drive can be used for the VIOS installation. With a JS22 in a BladeCenter H, the VIOS installation can then be mirrored to a LUN on the SAN. Alternatively, you can forgo the internal drive on the blade and install VIOS completely on the SAN, leveraging its RAID disk protection. If VIOS is installed entirely on the SAN, it is strongly recommended to use MPIO to access the LUNs, as discussed in the Multi-path I/O (MPIO) section. With a JS22 in a BladeCenter S, VIOS can again be installed on the internal drive on the blade and mirrored to a SAS drive in the chassis, or installed on a SAS drive in one DSM and mirrored to a drive in the second DSM. In this case, it is recommended to install VIOS on the drive on the blade and mirror to a drive in the chassis, in order to use as few drives in the chassis as possible just for VIOS.

When IBM i is implemented on a POWER blade where there are two drives available on the blade, those can be used for the VIOS installation.
In a BladeCenter H, VIOS can be installed on one drive on the blade and mirrored to the second, or installed entirely on the SAN. As with the JS22, if VIOS is installed completely on the SAN, MPIO should be used. In a BladeCenter S, it is recommended to install VIOS on one drive on the blade and mirror to the second. This allows the drives in the chassis to be used for IBM i.

Note that if mirroring is used, the correct method to achieve a redundant VIOS installation, regardless of the type and location of disks used, is to install VIOS on one drive and then mirror to the second. The incorrect method is to install VIOS on both drives at the same time. This latter method does not result in two copies of all VIOS files and two bootable drives. Note also that mirroring VIOS does not protect any IBM i or other client partition data; it protects only the system volume group, or storage pool, rootvg.

After installing VIOS, perform the following steps to configure mirroring:
- Use the SOL console to VIOS and log in with padmin as the user ID
- Identify a second available hdisk for the mirrored pair, such as hdisk1
- Add that hdisk to rootvg with the command chsp -add hdisk1. The command assumes rootvg if a storage pool is not specified
- Enable mirroring using the command mirrorios -f -defer hdisk1. When possible, reboot VIOS to complete the mirroring. For an immediate reboot, omit the -defer option
The mirrorios command accomplishes three tasks: it creates a copy of all logical volumes (except for the system dump volume) in rootvg on the second hdisk, makes the second hdisk bootable and changes the VIOS bootlist to include it. After mirroring is complete, you can verify that all logical volumes in rootvg have a copy on both hdisks with lsvg -lv rootvg. Check for the number 2 in the PVs column.
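The lsvg -lv check described above can be scripted. A minimal sketch, assuming a hypothetical lsvg -lv rootvg listing in the usual AIX column layout (on VIOS you would pipe the real command output into the function):

```shell
# check_mirror: read `lsvg -lv rootvg` output on stdin and report any
# logical volume whose PVs column is not 2 (i.e. not mirrored).
check_mirror() {
  awk 'NR > 2 && $5 != 2 { print "not mirrored: " $1; bad = 1 } END { exit bad }'
}

# Hypothetical sample output; on VIOS: lsvg -lv rootvg | check_mirror
cat <<'EOF' | check_mirror && echo "all logical volumes mirrored"
rootvg:
LV NAME   TYPE     LPs   PPs   PVs  LV STATE      MOUNT POINT
hd5       boot     1     2     2    closed/syncd  N/A
hd4       jfs2     1     2     2    open/syncd    /
hd2       jfs2     8     16    2    open/syncd    /usr
EOF
```

If any volume has only one copy, its name is printed and the function returns a non-zero status.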

You can verify that the VIOS bootlist now includes the second hdisk with bootlist -mode normal -ls. The output should include both hdisks. If both hdisks are not shown on the bootlist, then run bootlist -mode normal hdisk0 hdisk1, or whichever hdisks you want to use.

Configure networking in VIOS (if necessary)
If you installed VIOS from a NIM server, basic networking is already configured. Note: If you previously configured the FSP's IPv4 address, it will be configured through the switch in IO Bay 1, but it takes the same port on the blade that VIOS typically uses (in VIOS terms, this is ent0). If this occurs, ensure that you configure the VIOS TCP connection by configuring an LHEA on the VIOS partition profile and then using that LHEA port to configure VIOS's TCP/IP.

If you installed VIOS from DVD, perform the following steps to configure basic networking in VIOS:
- Log into the VIOS console through the AMM using the padmin user profile
- Use the lsdev command to identify the correct network device. In most cases, the first embedded Integrated Virtual Ethernet (IVE, or HEA) port is used. Note: On the PS703 and PS704, there is no longer an HEA; the built-in Ethernet ports are shown as a 2-port integrated Ethernet PCIe adapter. Also, Ethernet ports on any expansion cards are configured before the built-in ports, so they get lower entX values. In the following commands, use ent instead of hea.
- You can use a more specific command to find just the IVE/HEA ports: lsdev | grep hea. You should see a result similar to:
ent0     Available   Logical Host Ethernet Port (lp-hea)
ent1     Available   Logical Host Ethernet Port (lp-hea)
lhea0    Available   Logical Host Ethernet Adapter (l-hea)
In this example, the first IVE/HEA network port is ent0. The network interface that corresponds to ent0 is en0.
The first IVE/HEA port on your blade may have a different entX device name. To configure the corresponding enX interface:
- Enter cfgassist
- Move down to VIOS TCP/IP Configuration and press Enter
- Select the enX interface that you want to configure and press Enter
- Fill in the host name, IP address, network mask, gateway, domain name and DNS IP address values (as required) and press Enter. You should see the word running in the upper left-hand corner. If successful, this will change to OK. If unsuccessful, error messages are shown. Press F3 or Esc-3 to return to the previous menu and make corrections, or press F10 or Esc-0 to exit to the command line.
- Test the configuration by pinging the gateway or another address that should be accessible. Use the Ctrl-C key combination to end the ping command.
- Write down the name of the physical adapter (such as ent0), as you will need it later to configure the virtual Ethernet bridge (as explained in the Configure the Virtual Ethernet bridge for IBM i LAN console using IVM section). If you forgot to write it down, enter lstcpip -num to find which interface was configured.
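The port-identification step above can also be scripted. A minimal sketch that filters lsdev-style output for the first available lp-hea port (the sample listing is hypothetical; on VIOS you would pipe lsdev | grep hea into the function):

```shell
# first_hea_port: print the device name of the first Available lp-hea
# port found in `lsdev` output read from stdin.
first_hea_port() {
  awk '$2 == "Available" && /lp-hea/ { print $1; exit }'
}

# Hypothetical sample lsdev output; on VIOS: lsdev | first_hea_port
cat <<'EOF' | first_hea_port
ent0   Available   Logical Host Ethernet Port (lp-hea)
ent1   Available   Logical Host Ethernet Port (lp-hea)
lhea0  Available   Logical Host Ethernet Adapter (l-hea)
EOF
# prints: ent0
```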

Several VIOS commands are available to check, remove or change the networking configuration:
- To list the current VIOS network configuration, use the following command: lstcpip -stored
- To check that routing is configured correctly, use the following command: lstcpip -routtable
- To remove the VIOS network configuration and start again (this command should be executed on the SOL console for VIOS that the AMM provides), use the following command: rmtcpip -f -all. Ensure that you use the -f flag, which forces the removal, or the command may hang the session. Note: Entering the above command from a Telnet or PuTTY session will disconnect the session, as the session is using that TCP address as its destination.
To learn about all available options for these commands and the rest of the VIOS network configuration commands, refer to the Network commands section of the VIOS command reference in the IBM Systems Information Center at: mmandslist.htm

Update the system firmware on the SP of the POWER blade (if necessary)
To display the current level of the system firmware, start a browser session to the AMM and click Firmware VPD. The system firmware level is displayed under Blade Firmware Vital Product Data (EA350_103 in this example):

Figure 16: Sample display of Power blade firmware information from the AMM

Use the steps in the General Download instructions for Fix Central section for accessing the Fix Central portal. On the list of links at the top of the page, click BIOS. The system firmware download link will be named similar to Firmware release - IBM BladeCenter JS22. Download the .img firmware file and the README file.
- FTP the firmware update file to VIOS, logging in with the padmin user ID. Use binary mode. Do not change the default FTP upload location (/home/padmin in this case)
- Telnet directly to VIOS and log in with the padmin user profile
Note: This process will immediately take down the VIOS partition and any client partitions.
- Use the load firmware command to update the system firmware: ldfware -file /home/padmin/<update file>
The update process will shut down VIOS, update the service processor and reboot both the service processor and VIOS. Note that it may take up to 10 minutes for the AMM to correctly display the new version of the SP firmware.

System firmware on a Power blade can also be updated through the SDMC, when it is being used as the management console. To view firmware levels and update firmware, select the

target system from the list of hosts on the Resources tab of the welcome page. Then click Release Management -> Power Firmware Management from the Action button. Follow the steps in Section of the IBM Systems Director Management Console Introduction and Overview at:

to view and update Power blade firmware levels using the SDMC.

Update VIOS (if necessary)

Updating VIOS using the VIOS CLI
Updates to the VIOS software are released as fix packs, service packs or, rarely, for important individual fixes, interim fixes (ifixes). A fix pack is cumulative and contains all fixes from previous fix packs. Each fix pack also includes information on the level of VIOS to which it will update the system. To find your current VIOS level, use the following command in a VIOS Telnet session: ioslevel
Then compare your current level with the level documented in the latest fix pack. To find the latest VIOS fix pack, visit
To install a fix pack, service pack or ifix, click it on the VIOS Downloads page above and follow the instructions in the README file for that update.

Updating VIOS using the HMC
The HMC interface does not support updating VIOS. Use the VIOS CLI.

Updating VIOS using SDMC
On the SDMC Welcome tab, expand Hosts and right-click the VIOS virtual server. Click Release Management -> Check for updates. This assumes that the SDMC has internet access to IBM Fix Central. If updates exist, you will be shown an option to install them.
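The ioslevel-to-fix-pack comparison described under Updating VIOS using the VIOS CLI can be done mechanically by sorting the two dotted levels numerically. A minimal sketch (both level strings here are hypothetical examples, not real fix pack levels; on VIOS you would set current=$(ioslevel)):

```shell
# Compare the installed VIOS level with a target fix pack level.
current=2.2.1.4   # hypothetical installed level; on VIOS: current=$(ioslevel)
target=2.2.2.1    # hypothetical fix pack level from the fix pack README
# Numeric sort on each dotted field; the first line is the lower level.
lowest=$(printf '%s\n%s\n' "$current" "$target" |
         sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -n1)
if [ "$current" = "$target" ]; then
  echo "VIOS is already at $target"
elif [ "$lowest" = "$current" ]; then
  echo "update needed: $current -> $target"
else
  echo "installed level $current is newer than $target"
fi
# prints: update needed: 2.2.1.4 -> 2.2.2.1
```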
You need to plan for a VIOS reboot, which implies that the client virtual servers will need to be shut down.

Update the microcode on the I/O expansion cards on the blade (if necessary) using the VIOS CLI
To display the current microcode level of the expansion adapters on the blade:
- Start a Telnet session to VIOS
- Enter lsfware -all
The device names shown can be:
o fcs0 or similar for CFFh and CIOv Fibre Channel adapters
o mptsas0 or similar for the CFFv SAS card
o sissas0 for the embedded SAS controller, which is updated instead of the CIOv pass-through SAS expansion adapter if the latter is present
Notice a result similar to the following (for the QLogic CFFh Fibre Channel card):

Manually downloading the latest available level of expansion adapter microcode:
If your VIOS partition does not have access to an external network, you can use the following steps to get your adapters updated.
- Use the General Download instructions for Fix Central section to access the Fix Central portal for the POWER blade you are working with.
- Locate the correct microcode download link by using the adapter description, such as SAS Expansion Card (CFFv) Firmware for AIX - IBM BladeCenter or QLogic 4 Gb fibre channel expansion card Multiboot v1.46 (AIX package) - IBM BladeCenter. If multiple microcode links are present for a single adapter, select the one designated for AIX.
- Select the firmware for download and follow the process to download it. It should result in a microcode file, such as for the CFFv SAS adapter.

Manually updating the adapter microcode:
- Start an FTP session to VIOS and sign in with the padmin user ID
- Type bin and press Enter
- Type lcd <dir>, where <dir> is the folder where you downloaded the microcode file
- Type put <file>, where <file> is the name of the microcode file
o By default, the file is uploaded to /home/padmin in VIOS
- Start a Telnet session to VIOS and sign in with the padmin user ID
- Type oem_setup_env and press Enter
- Type mv <file> /etc/microcode/, where <file> is the name of the microcode file. Do not forget the trailing /
- Type exit and press Enter
- Type diagmenu and press Enter
- Press Enter
- Use the arrow keys to select Task Selection and press Enter
- Use the arrow keys to select Microcode Tasks and press Enter
- Use the arrow keys to select Download Microcode and press Enter
- Use the arrow keys to select the correct device (such as fcs0 or sissas0), then press Enter.
A plus sign should appear next to the selected device.
- Press F7 or Esc+7, depending on the Telnet client used
- Press Enter several times to complete the microcode update, retaining all the defaults in place

Updating adapter firmware using the HMC
The HMC interface does not support updating adapter firmware. Use the VIOS CLI.

Updating adapter firmware using SDMC
Follow the steps in the Updating VIOS using SDMC section to see options to update adapter firmware as well.

Verify disks for IBM i are reporting in VIOS

With VIOS networking configured, you can start using IVM by opening a browser connection to the IP address of the VIOS partition. Log in with the padmin user profile. Your goal is to verify that the correct number of LUNs or SAS drives for IBM i are reporting in VIOS. If the message "This system does not have PowerVM Enterprise Edition enabled. PowerVM Enterprise Edition enables live partition mobility" appears, click Do not show this message again.
- In IVM, click View/Modify Virtual Storage.
- Click the Physical Volumes tab. Any LUNs or SAS drives report in as disk units hdisk1, hdisk2, and so on.
- If no LUNs or SAS drives are reporting in and you are using a BladeCenter H, check the configuration of the SAN I/O module in the BladeCenter, any external SAN switches, and the SAN system, as well as all physical Fibre Channel connections. If you are using a BladeCenter S, check the configuration of the SAS module(s), as discussed in the Configuring storage in BladeCenter S using the SAS Connectivity Module or Configuring storage in BladeCenter S using the SAS RAID Controller Module sections.

Fibre Channel configuration commands

As soon as the LUNs that will be virtualized to IBM i have reported in VIOS, change their queue depth to improve performance. The queue_depth value tells VIOS how many concurrent I/O commands to send to the SAN. Start a Telnet session to VIOS and log in as padmin. Use the following commands for each LUN (hdisk1 in this example):
- lsdev -attr -dev hdisk#, where # is the hdisk number you want to display.
- chdev -perm -attr queue_depth=8 -dev hdisk1
  o Note: The order of the parameters can be changed, as shown, to facilitate repeating the command and only having to alter the hdisk number.
  o Note: Some low-end SANs might not handle the larger number of concurrent commands well, which can adversely affect performance.
  o For redundant VIOS servers, each server needs to be able to access the same hdisks, so another attribute needs to be set on each hdisk: reserve_policy=no_reserve. Add a space between the attributes in the -attr list on the command.
- lsdev -attr -dev hdisk#, to validate the change.

Another parameter that can improve performance is the number of I/O commands to send to the Fibre Channel adapter. The recommended value is 512. Be sure to change the value for all ports of all of the FC adapters:
- lsdev -attr -dev fcs#, where # is the FC adapter number you want to display.
- chdev -perm -attr num_cmd_elems=512 -dev fcs#
- lsdev -attr -dev fcs#, to validate the change.

To improve reliability, enable dynamic tracking and fast failover for the LUNs virtualized to IBM i. fast_fail should be set only when multiple paths to the disks exist. Do so for all ports of all of the adapters:
- lsdev -attr -dev fscsi#, where # is the FC adapter number you want to display.
- chdev -perm -attr dyntrk=yes fc_err_recov=fast_fail -dev fscsi#
- lsdev -attr -dev fscsi#, to validate the change.
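Applied one device at a time, the commands above are repetitive. The loop below is a dry-run sketch that only prints the chdev invocations for a hypothetical set of devices; hdisk1-hdisk3, fcs0/fcs1, and fscsi0/fscsi1 are assumptions, so substitute the names that lsdev reports on your VIOS:

```shell
#!/bin/sh
# Dry-run sketch: print (do not execute) the tuning commands from this
# section. Device names below are assumptions for illustration; replace
# them with the hdisk/fcs/fscsi numbers that lsdev reports in your VIOS.
for d in hdisk1 hdisk2 hdisk3; do
  # queue depth, plus no_reserve for redundant VIOS configurations
  echo "chdev -perm -attr queue_depth=8 reserve_policy=no_reserve -dev $d"
done
for a in fcs0 fcs1; do
  echo "chdev -perm -attr num_cmd_elems=512 -dev $a"
done
for s in fscsi0 fscsi1; do
  echo "chdev -perm -attr dyntrk=yes fc_err_recov=fast_fail -dev $s"
done
```

Once the device names are verified, the printed lines can be pasted into a padmin session; because -perm is used, the changes take effect after the next VIOS restart.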

Using the -perm option in these commands means that the value is updated only in the VIOS device configuration database (ODM). To make the change effective, reboot VIOS when downtime is available for all client partitions.

5.28 Virtual Ethernet concepts and configuration

Power servers have long supported the ability to communicate between partitions using virtual Ethernet connections managed by the Power hypervisor. Every partition, including VIOS, that needs to communicate with the others creates a virtual Ethernet adapter in its profile and specifies the same port VLAN ID (PVID). This type of adapter is recognized by IBM i as a communications port (CMNxx) with a different type (268C). To do this configuration, follow these steps from the appropriate management interface.

Configuring VIOS Virtual Ethernet using the HMC

To create one, refer to the topic Configuring a virtual Ethernet adapter using the HMC in the Power Systems Logical Partitioning Guide at: Ensure that the Use this adapter for Ethernet bridging check box is selected. Continue with the section Shared Ethernet Adapter (SEA) configuration.

Configuring VIOS Virtual Ethernet using the SDMC

To configure the Virtual Ethernet bridge, start an SDMC session and log in with the sysadmin user ID.
- From the Resources tab of the welcome page, open the list of virtual servers and right-click the VIOS you are configuring.
- Click System Configuration -> Manage Virtual Server and click the Network tab.
- Select the adapter port to configure (typically the first port) and click the Edit button.
- Select the Use this adapter for Ethernet bridging check box in the Shared Ethernet section on this page.
- Continue with the section Shared Ethernet Adapter (SEA) configuration.

Configuring VIOS Virtual Ethernet using IVM

See the steps in the section Configure networking in VIOS (if necessary), then continue with the next section, Shared Ethernet Adapter (SEA) configuration.

Shared Ethernet Adapter (SEA) configuration

VIOS provides virtual networking to client partitions by bridging a physical Ethernet adapter and one or more virtual Ethernet adapters. The virtualization object that provides this Ethernet bridge is called a Shared Ethernet Adapter (SEA). The SEA forwards network packets from any client partitions on a VLAN to the physical LAN through the physical Ethernet adapter. Because the SEA creates a Layer-2 bridge, the original MAC address of the virtual Ethernet adapter in IBM i is used on the physical LAN. The CMNxx communications port that represents the virtual Ethernet adapter in IBM i is configured with an externally routable IP address, and a standard network configuration is used. The physical adapter bridged by the SEA can be any network adapter supported by VIOS, including Integrated Virtual Ethernet (IVE) ports, also known as Host Ethernet Adapter (HEA) ports. Client IBM i partitions/virtual servers may need more than one virtual Ethernet adapter to be bridged by VIOS; for instance, a replication application can drive a large amount of I/O and should have its own virtual Ethernet adapter and SEA defined.

Note: for redundant VIOS servers, follow the link in this section: Redundant VIOS partitions, and specifically note the networking section of that document.
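For reference, an SEA of the kind described here can also be created from the VIOS command line with mkvdev -sea (the command appears later in the link aggregation section). The sketch below is a dry run that only prints the command; the ent numbers and PVID are assumptions, so check lsdev on your blade for the real adapter names:

```shell
#!/bin/sh
# Dry-run sketch: print the VIOS command that would bridge virtual
# Ethernet adapter $virt to physical port $phys. Adapter names and the
# PVID are assumptions for illustration only.
phys=ent0   # physical port (or link aggregate) to bridge
virt=ent2   # virtual Ethernet adapter enabled for bridging
pvid=1      # port VLAN ID shared with the IBM i client partition
echo "mkvdev -sea $phys -vadapter $virt -default $virt -defaultid $pvid"
# The resulting SEA network mappings can then be verified with:
echo "lsmap -all -net"
```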

Configure the Shared Ethernet Adapter (SEA) using the HMC

The enhanced virtualization management functions of the HMC also allow for network virtualization without using the IVM/VIOS command line. Perform the following steps in the HMC to work with the virtual network management interface.
- Select the correct managed server by clicking the selection box to the left of its name.
- In the menu below, expand Configuration, then Virtual Resources.
- Click Virtual Network Management.
- Use the options in the window to create or modify the settings of a VSwitch (referred to as ETHERNET0).
  o Select the VLAN ID that is configured for the IBM i client partition and for the VIOS that will bridge it. When you do this, you are shown the participants on the VLAN.
  o Near the bottom right-hand side of the window, click the Create SEA button. Choose the VIOS(s) where the SEA will be created. Choose the physical adapter to bridge the I/O to. You recorded this value back in the section Configure networking in VIOS (if necessary). For redundant VIOS servers, check the Configure virtual network failover box and select both the Failover VIOS and the Failover physical adapter values using the associated pull-downs. Ensure that a control channel, used for the heartbeat function between the VIOS servers, is selected to be configured.
  o Click OK to create the SEA.

Configure the Virtual Ethernet bridge for IBM i LAN console using IVM

As discussed previously, IBM i does not have direct access to any physical network adapters on the POWER blade. Instead, it uses a Virtual Ethernet adapter to connect to VIOS and any other partitions on the same blade over a Virtual LAN (VLAN). VIOS in turn connects to the external LAN by using either the embedded ports on the blade or the Ethernet ports on the expansion card and an Ethernet I/O module in the BladeCenter. The VLAN is bridged over to the external physical LAN using a Virtual Ethernet bridge in VIOS.

The Virtual Ethernet bridge associates a VLAN with a physical network port in VIOS. This Layer-2 bridge allows externally routable IP addresses to be assigned to the IBM i partition for both LAN console and regular network communications. Separate Virtual Ethernet adapters should be created in the IBM i partition for LAN console and production traffic. This requires bridging a second Virtual LAN to either the second embedded port on the blade or an Ethernet port on the expansion card.

To configure the Virtual Ethernet bridge, start an IVM session and log in with the padmin user ID.
- If you are using a POWER blade server other than a PS703 or PS704:
  o Click View/Modify Host Ethernet Adapters.
  o Select the first IVE/HEA port (its physical location ends with P1 T6), then click Properties.
  o Check the Allow virtual Ethernet bridging check box and then click OK.
- The remaining steps apply to all POWER blade servers. Click View/Modify Virtual Ethernet.
- Click the Virtual Ethernet Bridge tab.
- Select Virtual Ethernet 1 and change its physical adapter to the IVE/HEA port you just modified.

Click Apply.

Configure the Virtual Ethernet bridge using the SDMC

To configure the Virtual Ethernet bridge, start an SDMC session and log in with the sysadmin user ID.
- From the Resources tab of the welcome page, open the list of virtual servers and right-click the VIOS you are configuring.
- Click System Configuration -> Manage Virtual Server and click the Network tab.
- Select the adapter port to configure (typically the first port) and click the Edit button.
- Select the Use this adapter for Ethernet bridging check box in the Shared Ethernet section on this page.
- Click OK to create the adapter.

Configure the VIOS link aggregation

The BladeCenter chassis can contain redundant Ethernet switches. These can be used as aggregated ports for more bandwidth in a stacked mode or for failover if a switch fails. Note that changes to the switches affect the networking of all other blades and should be made with care, in coordination with the network administrator. This requires some separate configuration changes as follows:

First, shut off the redundant internal ports on the second Ethernet switch in I/O bay 2. Use the Chassis Management Module's component IP prompt to determine the switch's IP address. To shut off internal ports on the switch, use an SSH interface to the switch's address and make sure you are in iscli mode (these instructions were written for a BNT switch, but are close to Cisco commands). Then run the following:

en
conf t
interface port INTA1-INTB14
shutdown

On VIOS, determine the physical Ethernet ports that are present:

lsdev | grep ent

This shows a list of physical and virtual Ethernet ports, along with any Shared Ethernet Adapter (SEA). Locate the first two physical ports, typically ent0 and ent1.

Now configure a link aggregation between those first two physical ports: cfgassist -> Devices -> Link Aggregation Adapter -> Add a Link Aggregation Adapter -> then enter ent0,ent1 for the Target ADAPTERS field and hash_mode=src_dst_port for the ATTRIBUTES field. Then press Enter. Or use this command:

mkvdev -lnagg ent0,ent1 -attr hash_mode=src_dst_port

This creates another ent# for your link aggregation adapter. Now make a Shared Ethernet Adapter (SEA) that bridges the virtual Ethernet adapters (ent4 and ent5) to the link aggregation adapter (the ent# created above):

mkvdev -sea ent# -vadapter ent4,ent5 -default ent4 -defaultid 4091

This creates another ent# for your SEA. Now assign your IP address to the SEA: cfgassist -> VIOS TCP configuration -> choose the en# of the link aggregation port -> enter the field parameters for your IP configuration and press Enter. Or use this command:

mktcpip -hostname <VIOS hostname> -inetaddr <VIOS IP address> -interface en# -netmask <subnet mask> -gateway <gateway IP address> -nsrvaddr <DNS server IP address> -nsrvdomain <domain name> -start

Test the IP address by pinging another address on the same subnet. To turn the internal ports back on:

en
conf t
interface port INTA1-INTB14
no shutdown

Note: By default, only one of a pair of redundant VIOS servers handles all of the Ethernet traffic for the client virtual servers. This is configured with the ha_mode=auto attribute on the Shared Ethernet Adapter (SEA) command. To better utilize the Ethernet adapters owned by each VIOS server, ha_mode=sharing was implemented. The virtual Ethernet traffic handling is negotiated between the VIOS servers as additional LANs are configured, but there is a catch: you have to enable IEEE 802.1Q VLAN tagging on each virtual Ethernet LAN for this to work. A different port VLAN ID (PVID) is not enough. You do not have to specify an explicit VLAN tag; just enable the tagging support on each virtual Ethernet adapter.

Note: The Edit option will not allow you to set bridging on an active adapter. You must shut down VIOS or add a new adapter.

6 IBM i install and configuration

Create the IBM i partition using IVM

Using IVM to create an IBM i partition is similar to using the HMC; however, fewer steps are necessary. IVM uses a number of defaults that simplify partition creation. For example, because IBM i partitions cannot own physical hardware on an IVM-managed system (such as a POWER blade), those screens are omitted from the creation wizard. IVM defaults the load source and alternate initial program load (IPL) adapters to the VSCSI client adapter in the IBM i partition, and the console adapter to the first virtual Ethernet adapter. If you plan to use separate virtual Ethernet adapters for LAN console and production traffic and want to use the second virtual Ethernet adapter for LAN console, you can make the change in the partition properties.

If you choose to use shared processors for IBM i, IVM defaults to assigning 0.1 times the number of processors you select as shared processor units and the whole number of processors you select as virtual processors. For example, if you select four shared processors, IVM initially assigns 0.4 processing units and 4 virtual processors to the partition. Also note that, by default, shared processor partitions are configured as uncapped.

When assigning memory during partition creation, you are selecting the assigned or desired amount for the partition. IVM automatically assigns minimum and maximum values. The default processor and memory configuration can be changed by working with the partition properties after creation. The minimum recommended amount of memory for an IBM i client partition on the POWER blade is 1 GB. The actual memory and processor values should be sized individually for each IBM i workload using the Workload Estimator, available at 304.ibm.com/systems/support/tools/estimator/index.html.

To create an IBM i partition:
- Click View/Modify Partitions.
- Click Create Partition.
- Partition ID defaults to the next available partition number. Enter a name for the partition.

- Select Environment: IBM i and click Next.
- Set the assigned (desired) memory value and click Next.
- Set the desired processor configuration and click Next.
- Set the first virtual Ethernet adapter to VLAN1, which you previously configured for bridging, and click Next.
- Select Assign existing virtual disks and physical volumes and click Next.
- Select the LUNs or SAS disks you configured for IBM i from the list of Available Physical Volumes. If you are using a BladeCenter S and need to configure IBM i disk mirroring after installation, refer to the section Configure mirroring in IBM i (if necessary). Click Next.
- Select the USB physical optical drive (cd0) in the top portion of the screen to virtualize the DVD-ROM drive in the BladeCenter to IBM i (this assumes the media tray is assigned to this blade) and click Next.
- Review the summary and click Finish.

Create the IBM i client partition using HMC

The IBM i client partition configuration as a client of VIOS is the same as that for a client of an IBM i 6.1 host partition. Refer to the topic Creating an IBM i logical partition that uses IBM i virtual I/O resources using the HMC in the Logical Partitioning Guide at:

Create the IBM i Virtual Server using SDMC

If IVM was used earlier to create the IBM i partitions, you can skip this step. If you are using the SDMC, perform the following steps to create the IBM i virtual server.
- On the welcome page, click Hosts.
- Right-click the host on which you want to create the IBM i partition (virtual server) and click System Configuration -> Create Virtual Server.
- The virtual server ID defaults to the next available virtual server number. Enter a name for the virtual server.
- Select Environment: IBM i and click Next.
- Set the assigned memory value in GB and click Next.
- Set the desired processor configuration and click Next.
- Click the check box for the first virtual Ethernet adapter and click the Edit button. Set the port virtual Ethernet (VLAN) to 1, which should be the default. Then check the Use this adapter for Ethernet bridging check box, set the priority to 1, and click OK. Click Next.
- Select No, I want to manage the virtual storage adapters for this Virtual Server and click Next.
- Specify the maximum number of virtual adapters you want to use for this virtual server and click Create Adapter. Specify the adapter ID and type (SCSI client or Fibre Channel). Then specify the VIOS you are connecting to and a Connecting Adapter ID for the server side and click OK. Click Next.
- Choose the adapters you want to use for physical I/O and click Next.
- Select the load source, alternate restart device (if any), and the console, and click Next.
- Review the summary and click Finish.
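Whichever interface created the partition, it can be confirmed afterwards from the VIOS command line with lssyscfg (the same command is used again later in this document). A dry-run sketch that only prints the command; the field list follows the -F syntax:

```shell
#!/bin/sh
# Dry-run sketch: print the VIOS command that lists every partition with
# its ID and state, useful to confirm the IBM i partition just created.
cmd='lssyscfg -r lpar -F "name,lpar_id,state"'
echo "$cmd"
```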

6.1.4 Increasing the number of virtual adapters in the IBM i partition

The default number of virtual adapters for a client partition is 10. VIOS uses slots 0 to 10, so these are reserved. The overhead of increasing this value is approximately 1 KB of hypervisor memory per additional adapter.

Using the VIOS CLI

If there is a need for more than 10 slots, you can use the chsyscfg command to increase this value:

chsyscfg -r prof -i "lpar_name=<partition3>,max_virtual_slots=<300>"

The values in angle brackets are variables; substitute your partition name and the desired maximum.

Using the HMC

If there is a need for more than 10 slots, you can use the HMC GUI to increase this value by performing the following steps.
- On the Managed Systems link of the navigation pane, click the POWER blade.
- Select the virtual server for IBM i.
- Click Configuration -> Manage Profile.
- Select the profile name associated with IBM i by clicking on the name.
- Click the Virtual Adapters tab.
- Change the value in the Maximum Virtual Adapters field and click OK to accept the change.
Note: the change does not take effect until the profile is re-activated.

Using the SDMC

If there is a need for more than 10 slots, you can use the SDMC GUI to increase this value by performing the following steps.
- On the Resources link of the welcome page, click Virtual Servers.
- Select the virtual server for IBM i, right-click the name, and click System Configuration -> Manage Profile.
- Select the profile name associated with IBM i by clicking on the name.
- Click the Virtual Adapters tab.
- Change the value in the Maximum Virtual Adapters field and click OK to accept the change.
Note: the change does not take effect until the profile is re-activated.

6.2 Creating multiple Virtual SCSI adapters per IBM i partition

With the availability of VIOS 2.1 in November 2008, it is possible to create multiple Virtual SCSI client adapters per IBM i partition on a POWER blade. POWER blades can be IVM-managed or SDMC-managed. This allows for increased flexibility in configuring storage and optical devices for IBM i in the blade environment:
- More than 16 disk and 16 optical devices can be virtualized by VIOS per IBM i partition.
- Disk and optical devices can be configured on separate Virtual SCSI adapters.

The IVM web browser interface creates a single Virtual SCSI client adapter per client partition and a corresponding Virtual SCSI server adapter in VIOS. With SDMC, you can manually create more adapter pairs. The exception for IVM is for tape virtualization, as described in the section Multiple Virtual SCSI adapters and virtual tape using IVM.
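Whichever interface is used, each new adapter pair needs a free slot number on both the VIOS side and the IBM i side. As a hypothetical helper, the sketch below picks the lowest unused slot from a list like the one produced by lshwres -r virtualio --rsubtype slot --level slot; the used list here is invented for illustration:

```shell
#!/bin/sh
# Hypothetical helper: find the lowest virtual slot number that is not in
# the "used" list. "used" mimics slot numbers parsed from lshwres output;
# the values are an assumption for illustration only.
used="0 1 2 3 4 10"
next=0
# grep for the candidate slot, padded with spaces so 1 does not match 10
while echo " $used " | grep -q " $next "; do
  next=$((next + 1))
done
echo "next free virtual slot: $next"
```

On a real system, the used list would be built from the lshwres output for the partitions in question.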

6.2.1 Creating multiple Virtual SCSI adapters using the VIOS CLI

To create additional Virtual SCSI client adapters under IVM, you must use the VIOS command line:
- Log into VIOS with padmin or another administrator user ID. VIOS always has partition ID 1 when IVM is used, and by default carries the serial number of the blade as a name. To display the current names and IDs of all existing partitions, use the following command:
  lssyscfg -r lpar -F "name,lpar_id"
- If the IBM i partition is not activated, refer to the following example, which adds a new Virtual SCSI client adapter in slot 5 of IBM i partition test, connecting to a server adapter in the next available slot (chosen automatically by IVM) in the partition named VIOS:
  chsyscfg -r prof -i "name=test,virtual_scsi_adapters+=5/client/1/vios//1"
  o The corresponding server adapter in VIOS is created automatically by IVM.
- If the IBM i partition is running, refer to the following example, which creates a new Virtual SCSI client adapter in slot 5 of IBM i partition test, connecting to a server adapter in the next available slot (chosen automatically by IVM) in partition VIOS:
  chhwres -r virtualio --rsubtype scsi -p test -o a -s 5 -a "adapter_type=client"
  o The corresponding server adapter in VIOS is created automatically by IVM.

Notice that there are three variables in the previous commands: the name of the IBM i partition, the new slot for the Virtual SCSI client adapter, and the name of the VIOS partition. To display which virtual slots have already been used, by partition and adapter, use:

lshwres -r virtualio --rsubtype slot --level slot (notice the double dashes)

The type of adapter is shown in the middle of each line. The slot numbers are shown at the beginning of each line.

Creating multiple Virtual SCSI adapters using the HMC

When using an HMC to manage POWER blades, you can create virtual SCSI adapters to create the virtual SCSI connection between VIOS and the IBM i client partition.
In this section, you will see how to create a server adapter and client adapter pair. You need to determine the next adapter slot that is available for both the VIOS and the IBM i partitions. The slot numbers are used to tie the adapters together. Use the current configuration for the VIOS and IBM i partitions to note the next available virtual adapter slot number.

Create Server SCSI Adapter

Perform the following steps to create a server SCSI adapter.
- Log into the HMC with hscroot or another user ID.
- Click the Managed Systems link of the navigation pane.
- Select the blade from the list of hosts and select the checkbox next to the VIOS you are configuring.
- Click Configuration -> Manage Profiles.
- Select a profile and select Actions -> Edit to edit the profile.

- Click the Virtual Adapters tab and then click Actions -> Create Virtual Adapter -> SCSI Adapter.
- Specify the next available (determined above) Server Adapter slot number and click Only selected client partition can connect. Do NOT use Any Partition can connect; that will not work.
- Use the list to select the IBM i client partition you want to use.
- Specify the next available (determined above) Client Adapter slot number and click OK to create the server Virtual SCSI Adapter.

Create Client SCSI Adapter

Perform the following steps to create a client SCSI adapter:
- Click the Managed Systems link of the HMC navigation pane and select the checkbox next to the IBM i client partition you are configuring.
- Click Configuration -> Manage Profiles.
- Select a profile and select Actions -> Edit to edit the profile.
- Click the Virtual Adapters tab and then click Actions -> Create Virtual Adapter -> SCSI Adapter.
- Specify the next available (determined above) Client Adapter slot number and select the VIOS from the Server partition list.
- Specify the next available (determined above) Server Adapter slot number and click OK to create the client Virtual SCSI Adapter.

Creating multiple Virtual SCSI adapters using the SDMC

When using an SDMC to manage POWER blades, you can create virtual SCSI adapters to create the virtual SCSI connection between VIOS and the IBM i client partition. In this section, you will see how to create a server adapter and client adapter pair. You need to determine the next adapter slot that is available for both the VIOS and the IBM i partitions. The slot numbers are used to tie the adapters together. Use the current configuration for the VIOS and IBM i partitions to note the next available virtual adapter slot number.

Create Server SCSI Adapter

Perform the following steps to create a server SCSI adapter.
- Log into the SDMC with sysadmin or another user ID.
- From the Resources tab of the SDMC, select the blade from the list of hosts and select the checkbox next to the VIOS you are configuring.
- Click Actions -> System Configuration -> Manage Profiles.
- Select a profile and select Actions -> Edit to edit the profile.
- Click the Virtual Adapters tab and then click Actions -> Create Virtual Adapter -> SCSI Adapter.
- Specify the next available (determined above) Server Adapter slot number and click Only selected client partition can connect.
- Use the list to select the IBM i client partition you want to use.
- Specify the next available (determined above) Client Adapter slot number and click OK to create the server Virtual SCSI Adapter.

Create Client SCSI Adapter

Perform the following steps to create a client SCSI adapter:

- Click the Resources tab of the SDMC welcome screen and select the checkbox next to the IBM i client you are configuring.
- Click Actions -> System Configuration -> Manage Profiles.
- Select a profile and select Actions -> Edit to edit the profile.
- Click the Virtual Adapters tab and then click Actions -> Create Virtual Adapter -> SCSI Adapter.
- Specify the next available (determined above) Client Adapter slot number and select the VIOS from the Server partition list.
- Specify the next available (determined above) Server Adapter slot number and click OK to create the client Virtual SCSI Adapter.

Mapping storage to new Virtual SCSI adapters using the VIOS CLI

After the new Virtual SCSI client adapter for IBM i and the server adapter for VIOS are created, you can assign additional LUNs to IBM i by mapping them to the new server adapter in VIOS. Alternatively, you can map an optical drive to a separate Virtual SCSI connection. Note that even if you create a new Virtual SCSI adapter on the command line as described above, the IVM Web interface will not use it to map LUNs or optical devices to IBM i. The IVM interface does not show the new vscsi adapter or allow you to select it. Only map up to 16 disks to a client partition when using the IVM interface. More LUNs are allowed to be mapped from IVM because AIX supports 256, but IBM i does not. The assignment of LUNs and optical devices to IBM i using a Virtual SCSI adapter other than the first virtual adapter must be done explicitly using the VIOS CLI.

To display LUNs (or other physical volumes, such as SAS disks) that are available to be assigned to IBM i on the blade, use the following list physical volume command:

lspv -avail

To display all existing virtual resource mappings by Virtual SCSI server adapter in VIOS (vhostx) and client partition, as well as any newly created Virtual SCSI server adapters, use the following list mappings command:

lsmap -all | more

Press the Space bar to move forward one screen of data at a time and the Down Arrow key for one line at a time. Enter q to quit. Any new Virtual SCSI server adapters will have no resources mapped to them. Assuming that hdisk7 is an available LUN and vhost1 is a newly created Virtual SCSI server adapter, use the following command to make hdisk7 available to IBM i:

mkvdev -vdev hdisk7 -vadapter vhost1

The -dev parameter can be used to name the device to something more meaningful, such as <partition_name_diskx>.

The lsmap command above will also show whether the physical DVD drive in the BladeCenter (typically cd0) is already assigned to another client partition. If so, a vtoptx device will exist under a Virtual SCSI server adapter (vhostx). To map the DVD drive to a different Virtual SCSI adapter, first delete the correct existing vtoptx device (such as vtopt0) using the following command:

rmdev -dev vtopt0

Skip the previous step if the DVD drive is not already assigned to a client partition. Next, assign the physical optical device (such as cd0) to the IBM i partition using the correct separate Virtual SCSI adapter (such as vhost1) using the following command:

mkvdev -vdev cd0 -vadapter vhost1

To map a file-backed optical device to a new Virtual SCSI adapter (such as vhost1), use the following command:

mkvdev -fbo -vadapter vhost1

Mapping storage to new Virtual SCSI adapters using the HMC

Use the following steps:
- On the HMC navigation pane, click Managed Systems. Then select the POWER blade by clicking the checkbox next to it.
- In the lower menu, expand Configuration and then Virtual Resources.
- Click Virtual Storage Management.
- Select the correct VIOS from the list and click Query VIOS.
- Click the Physical Storage tab. A list of hdisks is shown.
- In the upper middle section of the pane, use the pull-down to choose the combination of partition name and virtual SCSI adapter to associate the hdisk with. You can map up to 16 hdisks to each VSCSI adapter, but the interface will not stop you from mapping more. This is because AIX partitions can also use this interface and AIX supports up to 256 hdisks per adapter (though they seldom map that many). If you map more than 16, the additional hdisks will not be seen by IBM i.
- Choose an unassigned hdisk to map to the adapter and click Assign. Repeat this process for each hdisk. As the mapping completes, the IBM i client partition should be listed as the new hdisk owner.
- Close the window when done.

Mapping storage to new Virtual SCSI adapters using the SDMC

Use the following steps:
- On the SDMC welcome page, click Hosts.
- Right-click the POWER blade and select System Configuration -> Virtual Resources -> Virtual Storage Management.
- Select the correct VIOS from the list and click Query VIOS.
- Click the Physical Storage tab. A list of hdisks is shown.
- In the upper middle section of the pane, use the pull-down to choose the combination of partition name and virtual SCSI adapter to associate the hdisk with. You can map up to 16 hdisks to each VSCSI adapter, but the interface will not stop you from mapping more. This is because AIX partitions can also use this interface and AIX supports up to 256 hdisks per adapter (though they seldom map that many). If you map more than 16, the additional hdisks will not be seen by IBM i.
- Choose an unassigned hdisk to map to the adapter and click Assign. Repeat this process for each hdisk. As the mapping completes, the IBM i client partition should be listed as the new hdisk owner.
- Close the window when done.

Removing Virtual SCSI adapters using the VIOS CLI

The VIOS command line is also used to remove Virtual SCSI client adapters from an IBM i partition. Note that removing a Virtual SCSI client adapter from IBM i makes any devices it provides access to unavailable. As mentioned above, to check which devices in VIOS are mapped to which Virtual SCSI server adapter, and therefore which client partition, use the following command on the VIOS command line:

lsmap -all | more

Press the Space bar to move forward one screen of data at a time and the Down Arrow key for one line at a time, and enter q to quit.

To remove a Virtual SCSI client adapter when the IBM i partition is not activated, refer to the following example, which removes the client adapter in slot 5 of IBM i partition test (note the minus sign before the equal sign):

chsyscfg -r prof -i "name=test,virtual_scsi_adapters-=5/////"

To remove a Virtual SCSI client adapter when the IBM i partition is running, refer to the following example, which removes the client adapter in slot 5 of IBM i partition test:

chhwres -r virtualio --rsubtype scsi -p test -o r -s 5

Removing virtual SCSI adapters using SDMC

Note that removing a Virtual SCSI client adapter from IBM i makes any devices it provides access to unavailable.
- Log into the SDMC with sysadmin or another user ID.
- On the Resources tab of the SDMC, select the blade from the list of hosts and select the checkbox next to the IBM i virtual server you are configuring.
- Click Actions -> System Configuration -> Manage Profiles.
- Select a profile and then click Actions -> Edit to edit the profile.
- Click the Virtual Adapters tab and then click Actions -> Delete.

Multiple Virtual SCSI adapters and virtual tape using IVM

VIOS can now directly virtualize the SAS-attached tape drive to IBM i client partitions. VIOS does so by mapping the physical tape device, rmtx, to a Virtual SCSI server adapter, which is then connected to a Virtual SCSI client adapter in IBM i.
A separate pair of Virtual SCSI server and client adapters is used for tape virtualization. However, there is no need to manually add Virtual SCSI adapters specifically for tape. When the tape drive is assigned to an IBM i partition in IVM, IVM will automatically create a new Virtual SCSI server adapter in VIOS and a Virtual SCSI client adapter in that IBM i partition. IVM will do so for each tape device assigned to IBM i. It is possible to manually add a new pair of Virtual SCSI server and client adapters for tape virtualization and map the tape drive to IBM i. It is also possible to map a tape drive to an already existing Virtual SCSI server adapter in VIOS used for disk or optical virtualization. However, the recommended approach to making tape drives available to IBM i is to use IVM.

Configuring Virtual tape using the HMC

There is not a graphical interface for the configuration of virtual tape. Perform the following steps to configure virtual tape using the VIOS CLI:

Use either the Save and restore with a single LTO4/5 SAS tape drive section or the Save and restore with a Fibre Channel-attached tape library section to add a virtual tape device.
Use Telnet or PuTTY to connect to the VIOS partition.
Sign in using padmin as the user ID.
Enter cfgdev to check for new devices.

Enter lsdev | grep rmt to view the tape devices and ensure that they are in Available state.
Enter lsdev | grep vhost and note the last vhost listed there. You need to associate this device with a VSCSI adapter pair. You need to use the HMC interface to create those. Refer to the Increasing the number of virtual adapters in the IBM i partition section for details. Then return to this step.
On the VIOS CLI, enter lsdev | grep vhost. There should be a new vhosty listed. This vhosty is the VSCSI adapter in VIOS that you just created.
To map the tape drive to the vhosty, enter mkvdev -vdev <rmtx> -vadapter vhosty.
Enter lsmap -all | more and press the Spacebar key to advance through the mappings. Look for the vhosty and make sure the vttapez device is associated with it.
On the IBM i virtual server with auto configuration turned on, a TAPxx device appears. Vary it on to use it.

Configuring Virtual tape using the SDMC

There is not a graphical interface for the configuration of virtual tape at this time. Perform the following steps to configure virtual tape using the VIOS CLI:

Use either the Save and restore with a single LTO4/5 SAS tape drive section or the Save and restore with a Fibre Channel-attached tape library section to add a virtual tape device.
Use Telnet or PuTTY to connect to the VIOS partition.
Sign in using padmin as the user ID.
Enter cfgdev to check for new devices.
Enter lsdev | grep rmt to view the tape devices and ensure that they are in Available state.
Enter lsdev | grep vhost and note the last vhost listed there. You need to associate this device with a VSCSI adapter pair. You need to use the SDMC interface to create those. Refer to the Increasing the number of virtual adapters in the IBM i partition section for details. Then return to this step.
On the VIOS CLI, enter lsdev | grep vhost. There should be a new vhosty listed. This vhosty is the VSCSI adapter in VIOS that you just created.
To map the tape drive to the vhosty, enter mkvdev -vdev <rmtx> -vadapter vhosty.
Enter lsmap -all | more and press the Spacebar key to advance through the mappings. Look for the vhosty and make sure the vttapez device is associated with it.
On SDMC, update the inventory to view the changes.
On the IBM i virtual server with auto configuration turned on, a TAPxx device appears. Vary it on to use it.

6.3 End to end LUN mapping using HMC

In October 2009, IBM enhanced both the HMC and VIOS to allow end-to-end device mapping for LUNs assigned to client LPARs, such as IBM i. The new function enables administrators to quickly identify which LUN reporting in VIOS (or hdisk) is which DDxxx disk device in IBM i. This in turn makes it easier to troubleshoot disk-related problems and safer to change a virtualized disk configuration. In order to correctly perform the mapping, the HMC requires an active RMC connection to VIOS. To perform end-to-end LUN device mapping, use the following steps:

Sign in to the HMC with the hscroot user ID.
Expand Systems Management.
Expand Servers.
Click the correct managed system (server).
Select the correct VIOS by using the checkbox.
Click Hardware Information -> Virtual I/O Adapters -> SCSI. You will be shown a list of drives associated with their VIOS hdisks.

Click back on the word Servers in the left-hand navigation pane of the HMC.
Select the correct managed server by selecting its checkbox.
In the menu below, expand Configuration and then Virtual Resources.
Click Virtual Storage Management.
Select the correct VIOS from the list and click Query VIOS.
Click the Physical Volumes tab. The hdisks that VIOS sees are shown along with the partition that they are assigned to. On the far right side of the Physical Location Code column, there is an identifier of the form L#00000. This is the LUN number associated with the hdisk. This is a hexadecimal number. Use the SAN interface to determine which volume has that LUN number. You may have to convert the hex number to a decimal number (I know, it's been a while, but you can do it!). If the SAN is the V7000, look for the SCSI ID as the LUN number.

6.4 End to end LUN mapping using VIOS CLI

If at all possible, use the HMC or SDMC interfaces for this process. You can use the lsdev -dev hdiskX -vpd command to map the hdisks back to the volumes, but there is not an easy way to map the DDxxx devices to the hdisks using the CLI.

6.5 End to end LUN mapping using SDMC

You may need to map the VIOS hdisks back to their associated LUNs for debugging; there is an SDMC interface to help do that:

On SDMC, right-click the hosting VIOS virtual server and go to System Configuration -> Manage Virtual Storage.
You will be able to see virtual mappings under Storage Adapters and Storage Devices. There are plans to add client disk information to the Storage Adapters view in future releases of SDMC.
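As noted in the end-to-end mapping steps above, the L#... value in the physical location code is hexadecimal and may need converting to decimal before it can be matched against the SAN interface. A quick way to do the conversion from any shell (a generic POSIX sketch, not a VIOS-specific command; the sample value is made up):

```shell
# Convert the hexadecimal LUN identifier from a VIOS physical location
# code to decimal for lookup in the SAN interface.
# The sample value below is illustrative only.
lun_hex_to_dec() {
  hex=${1#L}              # drop the leading "L" from e.g. "L1A"
  printf '%d\n' "0x$hex"  # print the decimal equivalent
}

lun_hex_to_dec L1A   # prints 26
```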
6.6 NPIV configuration steps using the HMC/SDMC

There are three general steps in configuring NPIV for IBM i:

LPAR and VIOS configuration on the Power server's management console
Storage subsystem or tape library configuration
SAN zoning

To perform the LPAR and VIOS setup, refer to Chapter 2.9 in the Redbooks publication PowerVM Virtualization Managing and Monitoring (SG ) at: While the examples given are for an AIX client of VIOS, the procedure is identical for an IBM i client.

To perform the storage or tape configuration, refer to the Redbooks publication IBM i and IBM System Storage: A Guide to Implementing External Disks on IBM i (SG ) at: or Implementing IBM Tape in i5/OS (SG ) at:

As mentioned above, from the storage subsystem's or tape library's perspective, the configuration is identical to that for IBM i directly attached through the SAN fabric. There is a web interface on the tape media library where you need to enable control paths for each device that you want IBM i to be able to work with. Selecting Enable creates the control paths. IBM i cannot dynamically detect these control paths. To detect the control paths, you need to re-IPL the virtual I/O adapter (IOA). First, determine which virtual IOA has been created for the virtual FC adapters. To do this, enter the WRKHDWRSC *STG command and check for a 6B25 (virtual FC) adapter. Note the IOP/IOA name.

Next, use the STRSST command:

Use option 1 to Start a service function.
Use option 7, Hardware service manager.
Use option 2, Logical Hardware Resources.
Use option 1, System bus resources.
Enter 255 in the System bus(es) to work with field and press Enter. This is the virtual adapter bus.
Locate the virtual IOA from above.
Enter option 6 for I/O debug.
Then use option 4 to IPL the IOA.
Use F3 to exit SST.

Return to WRKHDWRSC *STG and use option 9 to view the tape devices under the VFC IOA. With auto configuration turned on, the new tape device(s) should show up under WRKCFGSTS *DEV TAP*. You may have to check the port type defined on the tape media library for the Fibre Channel port associated with the tape device:

Log into the tape library interface.
Go to Configure Library.
Then select Drives.
Set the port type to N-Port.

Accordingly, DS8000 LUNs might be created as IBM i protected or IBM i unprotected and will correctly report as such to Storage Management in IBM i. A tape library and a drive within it will also report the correct device names and types, such as TAPMLBxx, 3584 and TAPxx, and so on. All tape operations supported with direct FC attachment are supported through NPIV, including hardware tape encryption.

7 Install System i Access for Windows (IVM installed)

As mentioned above, IBM i on POWER blade uses LAN console for a system console. LAN console necessitates having a PC (initially on the same subnet as VIOS and IBM i) with the System i Access for Windows software at version 6.1 or higher. The same PC can be used for the browser connection to IVM or SDMC and for Telnet sessions to the AMM or VIOS. You can obtain the System i Access software at version 6.1 or higher by visiting the following website:

Complete the PC preparations for LAN console and install the software as described in this section of the IBM i Information Center:

Make sure to install the Operations Console component.

Create the LAN console connection on the console PC (IVM install)

Next, you should have System i Access installed on a PC that is on the same subnet as the IP address you are going to assign to the new IBM i partition for LAN console. To configure the Operations Console (LAN) connection, follow the process described in the IBM i Information Center:

Keep in mind the following tips:

IBM i partitions on POWER blade have partition IDs of 2 or higher (VIOS always has partition ID 1).
You can look up the serial number of the blade in IVM by clicking View/Modify System Properties.
Note that the LAN console uses an IP address different from the ones for VIOS and, later, the IBM i LPAR production interface. Refer to the Plan for necessary IP addresses section for a complete list of the IP addresses required for implementing IBM i on blade.

After configuring the LAN console connection, start it in preparation for installing IBM i.

7.1.2 Install IBM i using the IVM interface

After completing the prerequisites, installing IBM i on POWER blade is essentially the same as on any other supported system. Place the IBM i installation media in the DVD-ROM drive in the BladeCenter (which at this point should be assigned to your blade and selected in the IBM i partition's properties). In IVM, click View/Modify Partitions. Select the IBM i partition and click Activate (IVM defaults a new IBM i partition's IPL source and type to D mode, manual). System reference codes are not automatically refreshed; use the refresh smart icon to watch the code changes. Use the LAN console connection and the installation topic in the IBM i Information Center to perform the installation:

When first authenticating the LAN console connection, use the ID and password.

Installing IBM i using SDMC

Place the IBM i installation media in the DVD-ROM drive in the BladeCenter. Refer to the Moving the BladeCenter DVD drive to another blade using an SDMC section and perform the necessary steps to assign the DVD-ROM to the blade for this virtual server. In SDMC, right-click the IBM i virtual server name and click Operations -> Activate -> Profile to activate the virtual server. Select the profile you want to use (such as the default profile) and select the Open a terminal window or console session check box. Then click Advanced. On the Advanced tab, select Manual from the Keylock position list and D-mode from the Boot mode list. Then click OK. The welcome page will open and show the status of the virtual server, including SRC information. Note that during installation, the load source and other disk units will be initialized before they are included in the system or other ASPs by IBM i.
This initialization time will vary depending on the type of storage used: LUNs on a SAN with the BladeCenter H enable a faster installation than individual SAS drives in the BladeCenter S.

Configure mirroring in IBM i (if necessary)

Disk protection for IBM i in BladeCenter H

When IBM i is implemented in a BladeCenter H, it is not necessary to configure disk protection within the operating system. Data integrity is provided by the SAN, where each LUN made available to IBM i through VIOS is created from multiple physical drives in a RAID array. However, unlike LUNs that are physically attached to IBM i, LUNs that are virtualized by VIOS will not appear in IBM i as parity-protected. Instead, both LUNs from a SAN and physical SAS drives in a BladeCenter S are recognized in IBM i as unprotected DHxxx devices.

IBM i mirroring in BladeCenter S using SAS connectivity modules

Presently, when IBM i is implemented in a BladeCenter S, IBM i disk-level mirroring must be used for disk protection. For an overview of mirroring in IBM i, refer to the Planning for Mirrored Protection topic in the IBM i Information Center at the following URL to create the mirroring configuration:

For maximum redundancy, one SAS disk unit from each DSM should be used in every mirrored pair in IBM i. However, if multiple SAS drives (the same number from each DSM) are assigned to IBM i at the same time and mirroring is started using all of them, there is no guarantee that every mirrored pair will contain drives from both DSMs. The only way to ensure that each disk unit in a mirrored pair resides in a separate DSM is to use the following method:

Assign the same number of SAS drives from both DSMs to the POWER blade. Refer to the Configuring storage in BladeCenter S using the SAS Connectivity Module section for more information.
As soon as the SAS drives are available in VIOS, identify which drives reside in which DSM. Refer to the Identifying SAS disk units in different DSMs section for more information.
Assign one drive from each DSM when creating the IBM i LPAR, as discussed in the Create the IBM i partition using IVM section.
Install the IBM i Licensed Internal Code (LIC), as discussed in the Install IBM i using the IVM interface section.
Start mirroring using only the two available SAS drives. Refer to the Configuring mirroring section for more information.
In VIOS, assign any additional SAS drives to IBM i, using only two drives each time. Refer to the Configuring mirroring section for more information.
In IBM i, add the newly available drives to the mirrored ASP, two at a time. Refer to the Configuring mirroring section for more information.
Install the operating system, as discussed in the Install IBM i using the IVM interface section.

Identifying SAS disk units in different DSMs

The two DSMs in a BladeCenter S can be distinguished by their serial numbers. Each SAS drive, or hdisk in VIOS, carries a portion of its DSM serial number in its physical location code. To locate the serial numbers of the DSMs:

Start a browser session to the AMM and sign in with an administrator ID.
Click Hardware VPD in the menu options on the left.
Scroll down to the Storage section and check the FRU Serial No. column for each storage module. The serial number is similar to YK12907CM0Z9.

The last seven characters of the DSM serial number (07CM0Z9, in this case) will be reflected in the physical location code of the SAS drives in VIOS. To find the location code for a SAS drive, or hdisk, in VIOS and identify in which DSM it is located:

Start an IVM browser session to VIOS and sign in with the padmin user ID.
Click View/Modify Virtual Storage.

Click Physical Volumes and view the Physical Location Code column for each hdisk. The location code will be similar to U CM0Z9-P1-D2.

The location code for the hdisk in VIOS also identifies the position of the SAS drive in the DSM. D2 is the second drive, left to right, in the top row of the DSM.

Configuring mirroring

When assigning hdisks to the IBM i partition during creation, select only two hdisks with locations signifying that they are from different DSMs (for example, U CM0Z9-P1-D2 and U CM0YT-P2-D2). Mirroring does not have to be configured between drives in the same position in each DSM. After IBM i LIC is installed on one of these hdisks, add the second one to the System ASP:

Sign into Dedicated Service Tools (DST) with QSECOFR.
Choose option 4 to work with disk units.
Choose option 1 to work with the disk unit configuration.
Choose option 3 to work with the ASP configuration.
Choose option 3 to add units to ASPs.
Choose option 3 to add units to existing ASPs.
Specify ASP 1 for the new disk unit and press Enter.
Confirm with Enter, or with F10 and then Enter.

To start mirroring:

Sign into Dedicated Service Tools (DST) with QSECOFR.
Choose option 4 to work with disk units.
Choose option 1 to work with the disk unit configuration.
Choose option 2 to start mirrored protection.
Select the System ASP and press Enter.
Confirm with Enter and F10.

To include additional SAS drives in the mirrored configuration:

In IVM, identify two additional disk units from different DSMs, as described in the Identifying SAS disk units in different DSMs section.
On the same Physical Volumes screen in IVM, select the check box next to them and click Modify partition assignment.
Select the correct IBM i LPAR from the list and click OK.
In IBM i, add the two new disk units to the mirrored System ASP, as described earlier in this section.

Install IBM i PTFs (if necessary)

Refer to the Fixes concepts and terms topic in the IBM i Information Center for the steps to install any required PTFs at the following URL:

Refer to the IBM i Recommended Fixes website to get a list of the latest recommended PTFs at:
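The DSM-matching rule from the Identifying SAS disk units in different DSMs section above can be scripted: the last seven characters of the DSM serial number appear in the hdisk's physical location code. The sketch below is a generic POSIX shell illustration with made-up serial numbers and location codes, not output from a real system:

```shell
# Check whether an hdisk location code belongs to a given DSM by looking
# for the last seven characters of the DSM serial number in the code.
# Serial numbers and location codes below are illustrative only.
drive_in_dsm() {
  serial=$1 location=$2
  suffix=$(printf '%s' "$serial" | tail -c 7)   # e.g. 07CM0Z9
  case $location in
    *"$suffix"*) echo "yes" ;;
    *)           echo "no"  ;;
  esac
}

drive_in_dsm YK12907CM0Z9 "U78A5.001.07CM0Z9-P1-D2"   # prints yes
drive_in_dsm YK12907CM0Z9 "U78A5.001.07CM0YT-P2-D2"   # prints no
```

Checking each candidate drive this way before assigning it to the LPAR helps guarantee that every mirrored pair spans both DSMs.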

7.1.6 Post-install tasks and considerations

Configure IBM i networking

Refer to the Ethernet topic in the IBM i Information Center to configure IBM i networking for production at:

Note that while any virtual Ethernet adapter available in IBM i (as a CMNxx device) can be configured, only those on VLANs bridged with a virtual Ethernet bridge can communicate with the external LAN.

Configure Electronic Customer Support (ECS) over LAN

There is no physical modem available to IBM i on POWER blade, so ECS over LAN should be configured. Refer to the Setting up a connection to IBM topic in the IBM i Information Center at:

How to perform IBM i operator panel functions (IVM)

In IVM, click View/Modify Partitions. Select the IBM i partition. From the More Tasks list, select Operator panel service functions. Select the function you wish to perform and click OK.

How to display the IBM i partition System Reference Code (SRC) history (IVM)

In IVM, click View/Modify Partitions. Select the IBM i partition. From the More Tasks list, select Reference Codes. Click an SRC to display all words.

IBM i on POWER blade considerations and limitations

Refer to the Considerations and limitations for IBM i client partitions on systems managed by the Integrated Virtualization Manager topic in the IBM i Information Center at:

Also refer to the Limitations and restrictions for IBM i client partitions on systems managed by the Integrated Virtualization Manager topic in the PowerVM Editions Operations Guide, available at:

If you encounter message CPF9E7F and/or message CPF9E2D, which deal with using more processor resources for IBM i than you are licensed for, please read this technote:

Moving the BladeCenter DVD drive to another blade using IVM

An earlier section explained how to initially assign the DVD-ROM drive in the BladeCenter to a POWER blade. When the blade is later powered on, the DVD drive device in VIOS (/dev/cd0) attains the available status and can be used (this might require the cfgdev command). The status of the DVD drive in VIOS can be verified in IVM by performing the following steps:

Start a browser session to VIOS and log into IVM with the padmin user ID.
Click View/Modify Virtual Storage.
Click Optical Devices. If cd0 appears under Physical Optical Devices, the status of the device is available.

However, when moving the BladeCenter DVD drive from one POWER blade to another, its status is not properly updated in VIOS and IVM on both blades. After performing the steps to re-assign the DVD drive to the second POWER blade, as explained earlier, the cd0 device will still appear in IVM on the first blade and its status in VIOS will remain available. Meanwhile, the status of cd0 on the second blade will be defined in VIOS and the device will not appear in IVM. To correct the status of the DVD drive on both blades, perform the following steps:

Start a Telnet session to VIOS on the first blade and log in with the padmin user ID.
Run the command rmdev -dev cd0.
Start an IVM session to VIOS on the second blade and log in with the padmin user ID.
Click Hardware Inventory, and then click Configure Devices. The display should refresh; look for the cdx device.

The DVD drive will now appear in IVM on the second blade and no longer appear on the first blade.

Moving the BladeCenter DVD drive to another blade using an HMC

Use the following steps:

From the client IBM i partition, vary off the OPTxx device.
On the HMC welcome page, expand Hosts. Then select the POWER blade by clicking the checkbox next to it.
In the lower menu, expand Configuration and then Virtual Resources.
Click Virtual Storage Management.
Select the correct VIOS from the list and click Query VIOS.
Click the Optical Devices tab for optical resources. The DVD drive is named cdx.
Select it, then click Remove to remove it from the virtual server. If prompted, click the Force checkbox for the operation to complete.
Repeat the above steps for the target virtual server using the Add option. Then vary on the OPTxx device on the target IBM i partition.

Moving the BladeCenter DVD drive to another blade using an SDMC

Use the following steps:

From the client IBM i partition, vary off the OPTxx device.
On the SDMC welcome page, expand Hosts. Then select the blade and then select the IBM i virtual server that is using the DVD.

Right-click the name and click System Configuration -> Manage Virtual Server Data.
Click the Media Devices link on the left-hand side of the pane. The DVD drive is named cdx.
Select it, then click Remove to remove it from the virtual server.
Repeat the steps for the target virtual server using the Add option, then vary on the OPTxx device there.

Moving the tape drive to another virtual server using an HMC

HMC does not support showing or moving tape drives through the GUI. Instead, refer to the Moving a tape device to another virtual server using the VIOS CLI section for more information.

Moving the tape drive to another virtual server using an SDMC

SDMC does not support showing or moving tape drives through the GUI. Instead, refer to the Moving a tape device to another virtual server using the VIOS CLI section for more information.

Moving a tape device to another virtual server using the VIOS CLI

From the client IBM i partition, vary off the TAPxx device. Use the following commands after logging into the VIOS CLI:

lsdev | grep rmt to list all tape devices known to VIOS (rmt# is the resource name assigned). If no devices are shown and you know that the tape device is connected to the switch and zoned to VIOS, try the cfgdev command, then repeat the above command.
lsmap -all | more and use the space bar to find the vhost# that has the rmt# device mapped to it. The vhost# is the server VSCSI adapter owned by VIOS. You must undo the mapping between the two.
rmdev -dev vhost# -recursive -ucfg to break the mapping.
Create a new VSCSI adapter pair between VIOS and the target IBM i partition using your management interface or the Creating multiple Virtual SCSI adapters using the VIOS CLI section.
lsmap -all | more and use the space bar to find the new vhost# that you just created. It should be shown near the bottom of the list. Note the number of the vhost. You may have to run cfgdev to see the new vhost#.
mkvdev -vdev rmt# -vadapter vhost#, where # is the number determined from the previous lsmap command.
Repeat the lsmap command to see that the rmt# device has been mapped to the vhost#.

7.2 Redundant VIOS partitions

HMC and SDMC add support for redundant VIOS virtual servers on a POWER blade, which IVM was not capable of. Each VIOS has shared access to the same storage LUNs used for the client IBM i virtual server. Integrated disks cannot be shared, so they cannot be used for this type of configuration. Refer to the relevant sections in the IBM i Virtualization and Open Storage Read-me First at:
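The tape-move procedure above involves several easily mistyped commands. As a review aid, here is a generic POSIX shell sketch (not a VIOS command) that prints the sequence for given device and adapter names; rmt0, vhost1 and vhost2 are placeholders, not output from a real system:

```shell
# Print (without executing) the VIOS command sequence that moves a tape
# device from one virtual server to another, following the procedure in
# the "Moving a tape device to another virtual server" section above.
tape_move_cmds() {
  rmt=$1 old_vhost=$2 new_vhost=$3
  echo "rmdev -dev $old_vhost -recursive -ucfg"   # break the old mapping
  echo "cfgdev"                                   # pick up the new adapter pair
  echo "mkvdev -vdev $rmt -vadapter $new_vhost"   # map the drive to the new vhost
}

tape_move_cmds rmt0 vhost1 vhost2
```

Remember to vary off the tape device in the source IBM i partition before running the sequence, and vary it on in the target partition afterward.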

8 Backup and restore

8.1 Overview of backup and restore for IBM i on a POWER blade

The save and restore process for IBM i on a POWER blade is similar to that on other Power servers. Clients have the option to use either a single LTO4/5 SAS-attached tape drive or a Fibre Channel-attached tape library with LTO drives. The SAS tape drive or a tape drive from the Fibre Channel tape library is recognized in the IBM i LPAR and follows the same naming convention as in any other IBM i environment. This allows an administrator to use native IBM i save and restore commands or Backup, Recovery and Media Services (BRMS) as part of the same backup and recovery strategy employed on other Power servers.

For both backup options, the physical adapters connecting to the tape devices are owned by VIOS. A different type of I/O virtualization is used in each case, but the result is the same: the tape device is available in IBM i as though it were physically attached to the IBM i LPAR. The single LTO4/5 SAS-attached tape drive solution leverages VSCSI, so that the tape device first becomes available in VIOS and is then assigned to IBM i. The Fibre Channel-attached tape library solution uses NPIV, so that a tape drive from the library is directly mapped to IBM i in the SAN, with VIOS managing the physical FC adapter in a pass-through manner.

IVM will automatically create a virtual SCSI adapter pair when VIOS detects the tape device and will map the device to the adapter. With HMC or SDMC, you manually create the virtual adapter pairs. Refer to the Configuring Virtual tape using the HMC or Configuring Virtual tape using the SDMC section for the steps to map the tape device to the virtual adapter.

The following sections describe the technical details, support statements and implementation steps for both backup and restore options.

Note: As of the time of publication, this paper contains the complete supported environments for these two main save and restore options for IBM i on a POWER blade. They remain in effect until the online master copy of the paper is updated at:

Any other I/O adapters, versions of software or tape models may or may not work and are not supported by IBM.

8.2 Save and restore with a single LTO4/5 SAS tape drive

Technical overview

VIOS can now virtualize a SAS-attached tape drive to IBM i, as VIOS has previously done with disk, optical and network devices. This functionality allows an administrator to perform backups to the SAS-attached LTO4/5 tape drive directly from IBM i, using native save and restore commands or BRMS. No SAS tape media libraries are supported, and no hardware encryption is supported. The virtualized tape drive is not physically attached to IBM i.
The tape device is attached to a SAS CFFv or CIOv adapter in VIOS through a SAS I/O module in the BladeCenter. Note: in a BladeCenter S chassis, a Disk Storage Module (DSM) is required to be installed, even if there are no disk drives installed in it, to virtualize a tape drive. As soon as the tape drive is recognized by VIOS, it is assigned an rmtx device name, using rmt0 for the first one. The rmtx device is virtualized to IBM i by mapping it to a separate VSCSI server adapter, which is in turn connected to a VSCSI client adapter in the IBM i partition. When available in IBM i, the tape drive is recognized as a 3580 model 004/005 tape device and is assigned a TAPxx device description. This TAPxx device can then be used for all types of backup operations using native IBM i save and restore commands or BRMS. Figure 17 illustrates the save and restore environment for IBM i on POWER blade:

Figure 17 Save and restore environment for IBM i

A note on terminology: the ability of VIOS to directly virtualize SAS-attached tape to client partitions is known as VIOS virtual tape, or just virtual tape. However, IBM i also has the concept of virtual tape, which refers to the capability of IBM i to use a virtual tape image catalog for save and restore. Virtual tape in this document refers to the VIOS functionality, unless specifically noted otherwise.

Support statements and requirements for tape drives

With virtual tape, IBM i can read LTO2, LTO3 and LTO4 tape media, and write to LTO3 and LTO4 media, on an LTO-4 device. IBM i can read LTO3, LTO4 and LTO5 tape media, and write to LTO4 and LTO5 media, on an LTO-5 device. IBM i can read LTO4, LTO5 and LTO6 tape media, and write to LTO5 and LTO6 media, on an LTO-6 device. The tape media can contain a save from IBM i on another POWER blade or an IBM Power server.

Virtual tape is supported for IBM i on BladeCenter JS12, JS22, JS23, JS43 and PS7XX in both BladeCenter S and BladeCenter H. The requirements to use the capability are the same for all blades in both chassis. Only certain tape drives are supported for this type of virtualization to IBM i. This follows the support statement for AIX client partitions. The best way to determine what is and isn't currently supported is to check the developerWorks topic IBM Removable Media on IBM i at: https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/IBM%20Removable%20Media%20on%20IBM%20i

Refer to the Support statements and requirements for FC tape libraries section for support statements for FC-attached tape libraries. Refer to the Performing a D-mode IPL from virtual tape section for recommendations about IBM i installation from tape.

Making the SAS tape drive available to VIOS

The tape drive can be attached to any external port on the SAS I/O module.
Using the correct SAS cable, up to four SAS tape drives can be attached to a single external port on the I/O module, for a total of 16 tape drives per SAS I/O module. In order for VIOS on a particular blade to

access the drive, the port to which the drive is connected must be assigned to that blade in the SAS zone configuration. All predefined configurations assign all external ports to all blades. If a custom configuration is used, each blade must be explicitly allowed to access the external port to which the tape device is attached. Refer to the Configuring storage in BladeCenter S using the SAS Connectivity Module section for working with the SAS zone configuration.

As soon as the tape drive is physically accessible to the blade, VIOS recognizes it on startup and assigns it an rmtx device name. If VIOS is already running, use the cfgdev command on the VIOS command line to force a rescan of the available hardware and detect the tape drive. Perform the following steps in IVM to verify that VIOS has detected and is ready to use the tape device:
- Log into IVM with the padmin user ID.
- Click Hardware Inventory.
- Click Configure devices to ensure that VIOS has found the tape device.
- Click View/Modify Virtual Storage.
- Click the Optical/Tape tab.
- The tape drive should be listed under Physical Tape Resources and be in Available state. If it is not, verify that:
  o The tape drive is physically connected to the SAS I/O module
  o The SAS zone configuration is correct
  o The blade has a CFFv SAS adapter
  o The tape drive is powered on
  o All requirements mentioned in the Save and restore with a single LTO4/5 SAS tape drive section are met
- If the tape drive is in Defined state, refer to the Support statements and requirements for FC tape libraries section.

Sharing a tape drive among multiple blades

A single tape drive can be used for VIOS and IBM i backups on multiple blades in the same chassis, including x86 blades with a SAS adapter, but only from one blade at a time.
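The IVM checks described above have command-line equivalents on the VIOS side. The following is a dry-run sketch: it only prints the VIOS commands an administrator would run in a Telnet/PuTTY session as padmin (cfgdev and lsdev here are VIOS commands, not Linux ones, so the script does not execute them). The device name rmt0 and the helper function name are examples, not part of the product.

```shell
# Dry-run sketch: print the VIOS commands used to detect a SAS tape
# drive and check its state. Run the printed commands on the VIOS
# command line as padmin; 'rmt0' is an example device name.
show_tape_checks() {
  local dev="${1:-rmt0}"
  echo "cfgdev                # rescan hardware for newly attached devices"
  echo "lsdev -type tape      # list tape devices and their state"
  echo "lsdev -dev ${dev}     # expect state 'Available', not 'Defined'"
}
show_tape_checks rmt0
```

If the drive shows as Defined rather than Available, work through the checklist above (cabling, SAS zoning, adapter, power) before retrying.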
Any blade that requires access to the tape drive must be allowed to use the external SAS port on the I/O module to which the tape drive is connected (refer to the Configuring storage in BladeCenter S using the SAS Connectivity Module section for information on working with the SAS zone configuration). The tape drive can also be restricted to a certain blade by assigning the correct external SAS port only to that blade.

If the tape drive is accessible by multiple blades, only one VIOS on one blade can use the tape drive at a time. The first blade on which VIOS is installed and started will recognize and configure the tape drive as an rmtx device, placing it in Available state. In this state, the tape drive can be used for backups by VIOS or IBM i. Any other instances of VIOS on different blades will detect the tape drive, but will not be able to configure it and place it in Available state; instead, the tape device will be in Defined state. To check the state of the tape device, refer to the instructions in the Making the SAS tape drive available to VIOS section.

To switch the tape drive between blades, perform the following steps from IVM:

On the blade currently owning the tape drive:

- Vary off the TAPxx device in IBM i as soon as all save operations are complete, using the WRKCFGSTS *DEV *TAP command, or use the Windows, Linux, AIX or VMware interface to unallocate the tape device.
- Log into IVM and unassign the rmtx tape device from the IBM i partition by clicking View/Modify Virtual Storage, then the Optical/Tape tab, then Modify partition assignment under Physical Tape Devices.
- Start a Telnet session to VIOS and use the following command to unconfigure the tape drive and place it in Defined state: rmdev -dev rmtx -ucfg (replace rmtx with the correct tape device name).

On the blade to which the tape drive is being switched:
- Start a Telnet session to VIOS and use the cfgdev command to configure the tape drive and place it in Available state.
- Assign the tape drive to the IBM i partition, as described in the Making the SAS tape drive available to VIOS section.

Also note that the tape drive cannot be disconnected (through the SAS cable) and reconnected dynamically. If the tape drive is disconnected from the SAS switch, use the steps mentioned in this section on the same blade to restore access to the tape drive.

Assigning the tape drive to IBM i using IVM

- Log into IVM with the padmin user ID.
- Click View/Modify Virtual Storage.
- Click the Optical/Tape tab.
- Select the rmtx device under Physical Tape Resources.
- Click Modify partition assignment.
- Select the correct IBM i partition from the list and click OK.

IVM will automatically create a new pair of VSCSI server and client adapters and map the rmtx device to the VSCSI server adapter in VIOS. This enables error logging and troubleshooting to be performed separately for tape versus disk and optical. No action is required in IBM i if the QAUTOCFG system value is set to On: the tape drive will be recognized automatically, assigned the next available TAPxx device description and varied on.
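The switching sequence above can be captured in a small dry-run helper. It only prints the command sequence for each side; IVM and IBM i steps that are performed through a browser or a 5250 session appear as comments. The function name and the rmt0 device name are examples; the rmdev and cfgdev commands are the VIOS commands from the steps above.

```shell
# Dry-run sketch: print the per-blade sequence for moving a shared SAS
# tape drive between blades. Browser/5250 steps appear as comments;
# 'rmt0' is an example device name.
switch_tape() {
  local dev="${1:-rmt0}"
  echo "# --- On the blade currently owning ${dev} ---"
  echo "# IBM i: WRKCFGSTS *DEV *TAP, then vary off the TAPxx device"
  echo "# IVM: View/Modify Virtual Storage > Optical/Tape >"
  echo "#      Modify partition assignment (unassign ${dev})"
  echo "rmdev -dev ${dev} -ucfg    # VIOS: place the drive in Defined state"
  echo "# --- On the blade taking over ${dev} ---"
  echo "cfgdev                     # VIOS: place the drive in Available state"
  echo "# IVM: assign ${dev} to the IBM i partition"
}
switch_tape rmt0
```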
Tape media is initialized in IBM i.

Error logs for tape virtualization

Errors associated with a virtualized tape drive are logged in both VIOS and IBM i. To display the error log in VIOS, use the following command: errlog | more

Using more displays the error log one screen at a time. Use the Spacebar key to advance to the next screen and q to leave the error log and return to the command prompt. To display detailed information about each error, use the following command: errlog -ls | more

Perform the following steps to display virtual tape errors in IBM i using the Product Activity Log (PAL):
- In IBM i, type STRSST and press Enter.
- Sign on to service tools with the correct user profile and password.
- Select option 1 to start a service tool.

- Select option 1 to work with the PAL.
- Select option 1 to analyze the log.
- Select option 3 to specify magnetic media as the type of log. If you want to also display statistical entries, change the corresponding option to Y.
- Look for entries for device type 63A.

Performing an IBM i save

In IBM i, the virtual tape drive is recognized as a 3580 model 004/005 tape device. It supports all standard IBM i save and restore commands, as well as BRMS commands. Virtual tape can be used for full-system or partial saves, employing the same backup strategy as with a single tape drive on a standalone Power server. Initialize the tape volume in IBM i using the INZTAP command, specifying *CFGTYPE for the Tape density parameter. IBM i can use LTO3, LTO4 and LTO5 media for save operations with virtual tape; refer to the Save and restore with a single LTO4/5 SAS tape drive section for information on tape requirements. After initializing the volume, use your standard backup procedure or refer to the IBM i Information Center Backup and recovery topic at: tocnode=toc:rzahg/i5os/17/0/

Performing an IBM i restore

Refer to the Support statements and requirements for tape drives section for information on tape drive read capabilities. Before attempting a restore, ensure that the following prerequisites are met:
- The tape drive is powered on and the tape media is loaded.
- The tape drive is recognized in VIOS, as described in the Making the SAS tape drive available to VIOS section.
- The tape drive is assigned to the correct IBM i partition, as described in the Assigning the tape drive to IBM i using IVM section.
- The tape drive is available for use in IBM i, using the WRKCFGSTS *DEV *TAP command.
This prerequisite applies to partial restores; refer to the Performing a D-mode IPL from virtual tape section for performing a D-mode IPL from virtual tape.

When the tape volume containing a save is available to IBM i, use the standard restore procedure or refer to the IBM i Information Center Backup and recovery topic at: tocnode=toc:rzahg/i5os/17/0/

Performing a D-mode IPL from virtual tape

It is possible to perform a full-system save in an IBM i partition on a different system and then use that save for a D-mode IPL and full-system restore on a blade with virtual tape. Ensure that the system on which you are performing the save has IBM i 6.1.1 (or higher) installed with the latest PTFs before running the save. Supported tape media must be used, as described in the Support statements and requirements for tape drives section. To perform a D-mode IPL using virtual tape, use the following steps:
- Verify that the tape drive is powered on and the tape is loaded.

- Verify that the tape drive is recognized in VIOS, as described in the Making the SAS tape drive available to VIOS section.
- Assign the tape drive to the correct IBM i partition, as described in the Assigning the tape drive to IBM i using IVM section.
- Perform the following steps to verify that the correct tape drive is selected as an alternate IPL resource for the IBM i partition:
  o In IVM, click View/Modify Partitions.
  o Click the IBM i partition.
  o On the General partition properties tab, verify that the correct rmtx device is selected from the Alternate restart adapter list.
  o If more than one rmtx device is available in VIOS, the full list can be displayed using the lsdev VIOS command. VIOS recognizes tape drives and assigns rmtx device names sequentially, starting from the first tape device on the first external SAS port in SAS I/O module bay 3.
  o On the same tab, verify that the IBM i partition is configured for a D-mode manual IPL.
- Close the partition properties window.
- Select the IBM i partition and click Activate.

If using SDMC instead:
- On the SDMC welcome page, expand Hosts, then select the POWER blade to see the virtual servers.
- Right-click the IBM i client virtual server and click Operations -> Activate -> Profile.
- On the Profile tab, select the open terminal window option and then click Advanced.
- On the Advanced tab, select Manual from the Keylock position list and D-mode from the Boot mode list. Then click OK.

8.3 Save and restore with a Fibre Channel-attached tape library

Technical overview

In October 2009, IBM announced Fibre Channel (FC) tape libraries as a supported save and restore option for IBM i on a POWER blade. This capability allows an administrator to perform backup and recovery operations using the same IBM i native or BRMS commands as on any other IBM i LPAR attached to a tape library. Advanced capabilities such as automatic media change and hardware encryption of tape media are supported.
Encrypted tape volumes can then be read and used for restore on a non-blade Power server connected to a tape library. Note that single tape drives, such as the TS2240, do not support hardware tape encryption. Therefore, it is not possible to read a tape volume encrypted in a library by using a TS2240 tape drive attached to IBM i on a POWER blade in a different BladeCenter. If the tape volume is not encrypted, however, backups performed through both save and restore methods for IBM i on a blade are interchangeable.

IBM i uses a different I/O virtualization technology to access FC tape libraries compared to a single LTO SAS tape drive. While VIOS virtualizes a single SAS tape drive to IBM i through VSCSI, VIOS uses N_Port ID Virtualization (NPIV) to allow IBM i direct access to a tape library in the SAN through an NPIV-capable adapter owned by VIOS. NPIV is a Fibre Channel technology that enables a single port on a FC adapter to be presented to the SAN as a number of independent ports with different World-wide Port Names (WWPNs). NPIV-capable adapters on Power servers and blades allow up to 256 virtual FC ports to be assigned to a single physical FC port. On Power servers and blades, VIOS always owns and manages the FC adapter. NPIV has been available for AIX as a client of VIOS on Power servers since November 2008. In October 2009, IBM also made NPIV available for IBM i and Linux on Power servers and for IBM i, AIX and Linux on POWER blades.

To leverage NPIV, an IBM i LPAR must have a Virtual Fibre Channel (virtual FC) client adapter created, which connects to a virtual FC server adapter in VIOS, similar to the VSCSI model. However, the virtual FC client adapter does not allow IBM i to access LUNs or tape drives already assigned to VIOS in the SAN. Instead, the virtual FC server adapter in VIOS is mapped to a FC port on the physical NPIV-capable adapter. This allows the client virtual FC adapter in IBM i direct access to the physical port on the FC adapter, with VIOS having a pass-through role, unlike with VSCSI.

The tape drive in the FC tape library for IBM i use is not mapped to the WWPNs of the physical ports on the NPIV adapter, and it does not become available in VIOS first. Instead, when the virtual FC client adapter in the IBM i LPAR is created, two virtual WWPNs are generated by the PowerVM Hypervisor. The tape drive in the library is zoned directly to the first of the two WWPNs on the virtual FC client adapter in IBM i. The second WWPN is used to facilitate Live Partition Mobility, although only AIX and Linux LPARs can leverage it at the moment. The PowerVM Hypervisor on a Power server or blade has the default capability to create 32,000 virtual WWPNs. When virtual FC client adapters are deleted, WWPNs are not reused. If all of the default 32,000 WWPNs are used, the client must obtain an enablement code from IBM, which allows the creation of a new set of 32,000 WWPNs.

Zoning a tape drive in a FC tape library directly to a WWPN on the virtual FC client adapter in IBM i has two important advantages:
- It allows simpler SAN zoning and I/O virtualization configuration on the POWER blade.
- The tape drive in the library does not have to be made available to VIOS first and then assigned to IBM i; instead, it is mapped to IBM i in the SAN.

From the perspective of both the SAN device (the tape drive in the library) and IBM i, an NPIV connection is identical to a direct FC connection, which is otherwise not possible on POWER blades. NPIV enables IBM i to recognize and use all characteristics of the FC device, instead of using it as a generic tape drive virtualized by VIOS. Figure 18 presents an overview of accessing a FC tape library through NPIV for IBM i on a POWER blade:

Figure 18 An overview of accessing a FC tape library through NPIV for IBM i on a POWER processor-based blade

Support statements and requirements for FC tape libraries

There are three main hardware components of the NPIV-based FC tape library solution: an NPIV-capable adapter on the POWER blade, an NPIV-capable FC switch module in BladeCenter

H and a FC-connected tape library. NPIV is not supported in BladeCenter S. The following is a comprehensive list of the currently supported expansion adapters, FC switch modules and tape libraries for NPIV for IBM i on a POWER blade:

FC expansion adapters:
o For JS12 and JS22:
  - #8271 QLogic 8Gb Fibre Channel Expansion Card (CFFh)
  - #8275 QLogic Converged Network Adapter (CFFh)
o For JS23, JS43, PS700, PS701 and PS702:
  - #8271 QLogic 8Gb Fibre Channel Expansion Card (CFFh)
  - #8275 QLogic Converged Network Adapter (CFFh)
  - #8242 QLogic 8Gb Fibre Channel Card (CIOv)
  - #8240 Emulex 8Gb Fibre Channel Expansion Card (CIOv)

FC switch modules:
o #3243, #3244 QLogic 4Gb 10- and 20-port switch modules
o #3284 QLogic 8Gb switch module
o #5045, #5869 Brocade 8Gb 10- and 20-port switch modules
o #3206, #3207 Brocade 4Gb 10- and 20-port switch modules

FCoE switch modules:
o #3248 BNT Virtual Fabric 10Gb Switch Module + #3268 QLogic Virtual Fabric Extension Module
o #2241/2242 Cisco Nexus 4001 Switch Module

FC tape libraries:
o TS3100 (M/T 3573) with LTO3, LTO4 and LTO5 tape drives
o TS3200 (M/T 3573) with LTO3, LTO4 and LTO5 tape drives
o TS3310 (M/T 3576) with LTO3, LTO4 and LTO5 tape drives
o TS3400 (M/T 3577) with TS1120 and TS1130 tape drives
o TS3500 (M/T 3584) with LTO3, LTO4, LTO5, TS1120, TS1130, TS1140 and 3592-J1A tape drives
o TS7610 (IBM System Storage ProtecTIER) with software version v2.1
o TS7650 (ProtecTIER) with software version v2.4

Refer to the following APAR for the IBM i PTFs required, for D-mode limitations for NPIV virtualization of Fibre Channel libraries, and for a current list of supported devices: 912.ibm.com/n_dir/nas4apar.nsf/c79815e083182fec862564c00079d117/02327e e c6d66?OpenDocument&Highlight=2,ii

Refer to the BladeCenter Interoperability Guide (BIG), specifically the FCoE Configuration Matrix for JS/PS Blades table and the Fibre Channel NPIV Support Matrix on JS and PS Blades table at: Pay attention to the footnotes associated with each of these tables
to get the correct software and firmware levels that are required to run NPIV.

FC Intelligent Pass-through Modules (IPMs) are not supported for this solution. The FC switch module must be in full-fabric mode, which is designated with the necessary feature codes (explained in this section) for different numbers of licensed ports. In most cases, the IPM shares the same hardware with the 10- and 20-port full-fabric switch; therefore, the IPM can be upgraded to one of the supported FC switch modules above by purchasing an additional license.

Note that while all supported FC adapters and most supported FC switch modules operate at 8Gbps, the rest of the SAN does not have to operate at that speed. Additional FC switches outside of the chassis can operate at 4Gbps or 2Gbps. While this difference in throughput will have an effect on performance, it will not prevent NPIV from working. Similarly, only the first FC switch connected to the NPIV adapter must be NPIV-enabled. For this solution, that first FC switch is one of the supported switch modules (explained in this section).

The following minimum hardware, firmware and software are required for FC tape library support for IBM i on a POWER blade:
- One supported FC tape library (can be shared among all blades in a chassis and other servers)
- One supported FC I/O module
  o If using the #8271 QLogic 8Gb Fibre Channel Expansion Card (CFFh) on the POWER blade, one #3239 Multi-switch Interconnect Module is also required
  o If using the #3243 or #3244 QLogic 4Gb 10- or 20-port switch module, firmware level or higher on it is required
  o If using the #3284 QLogic 8Gb switch module, firmware level or higher on it is required
- One supported FC expansion adapter per blade
  o If using the #8271 QLogic 8Gb Fibre Channel Expansion Card (CFFh) or #8242 QLogic 8Gb Fibre Channel Card (CIOv), firmware level on it is required
- FC cables to connect the FC I/O module in the chassis to the tape library
- POWER blade service processor firmware version 350_xxx from October 2009 or higher (verified with the lsfware command on the VIOS command line)
- VIOS or higher (verified with the ioslevel command on the VIOS command line)
- IBM i or higher with the latest PTF package from October 2009 or higher

IBM i hosted by VIOS supports NPIV attachment of DS8000 logical drives. These logical drives must be formatted as IBM i 520-byte-sector drives. While IBM announced the ability to connect the IBM DS5100/5300 storage subsystem through NPIV to IBM i on Power servers in April 2011, that solution is not yet supported for IBM i on POWER blades. As with other support statements, this paper reflects the current support, and its online master version is updated as IBM's support position changes.

Creating the virtual FC configuration using IVM

Before the tape library can be mapped to IBM i in the SAN, the virtual FC server and client adapters must be created in VIOS and IBM i, and the IBM i LPAR must be associated with the physical FC adapter.
IVM makes these tasks easier by automatically creating the virtual FC server adapter in VIOS as soon as the virtual FC client adapter is created in IBM i. IVM also allows an administrator to directly map an IBM i virtual FC client adapter to a physical port on the NPIV adapter:
- Open a browser connection to VIOS and sign on with the padmin user ID.
- Click the correct IBM i LPAR to open its properties window.
- Click Storage.
- Click the Virtual Fibre Channel twist arrow.
- Click Add. IVM will temporarily show Automatically generate for the new virtual FC client adapter's WWPNs. The reason is that the new WWPNs are not generated by the PowerVM Hypervisor until the create action is performed by clicking OK on the main window.
- Select the correct physical port to map the new virtual FC client adapter to in the dropdown menu.
- Click OK.
- Click Virtual Fibre Channel.
- Select the correct physical port to map the new virtual FC client adapter to from the list.
- Click View Partition Connections. Your IBM i LPAR and its WWPNs should appear in the list. Make a note of the WWPNs that are also displayed in the partition's properties.
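The mappings created through the IVM panels above can also be checked from the VIOS command line. This is a dry-run sketch that only prints the VIOS commands to run (lsnports and lsmap are standard VIOS commands; the helper function name is an example, and the commands are not part of the procedure in this document, just a way to verify its result):

```shell
# Dry-run sketch: print the VIOS commands that show NPIV-capable
# physical ports and the virtual FC mappings created by IVM.
show_npiv_checks() {
  echo "lsnports           # NPIV-capable ports and available WWPN slots"
  echo "lsmap -all -npiv   # virtual FC server adapters and their clients"
}
show_npiv_checks
```

The WWPNs reported for the IBM i client should match the ones noted in the partition's properties; those are the values to use for SAN zoning.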

If you want to set your own WWPN values, there is an option to do this with the chsyscfg or the chhwres IVM commands. Refer to the VIOS command reference manual for more details.

Creating the virtual FC configuration using SDMC

Before the tape library can be mapped to IBM i in the SAN, the virtual FC server and client adapters must be created in VIOS and IBM i, and the IBM i LPAR must be associated with the physical FC adapter. Perform the following steps to define the virtual FC adapter:
- On the SDMC welcome tab, click Resources.
- Expand Virtual Servers and right-click the IBM i client virtual server you are configuring.
- Click System Configuration -> Manage Virtual Server.
- Click the Storage Devices link on the left-hand side.
- In the Fibre Channel section, click Add. An Add Fibre Channel Adapter menu appears and lists the physical FC adapters on the Virtual I/O Server(s) that are available for mapping to the newly created virtual FC adapter.
- Select an adapter port and click OK. This creates the virtual FC adapter and the virtual WWPNs that are now assigned to that client virtual server.
- Perform the steps in the Making the tape library available to IBM i section.

Note: Use the change port login (chnportlogin) command from the SDMC CLI to bring the virtual FC ports online for the SAN fabric to detect. This makes the SAN zoning easier to do. Refer to the HMC commands at: hehmcremotecommandline.htm

Making the tape library available to IBM i

After the virtual FC configuration is complete, making a tape drive within the FC tape library available to IBM i requires the correct zoning in the FC switch module in the BladeCenter H and any other switches to which the tape library might be connected. This process is identical to connecting a FC tape library directly over the SAN fabric to IBM i on a Power server without NPIV. Use the first WWPN from the virtual FC WWPN pair and refer to the Redbooks publication Implementing Tape in i5/OS at: to perform the correct zoning.
Note: When using an SDMC, you can use the CLI to enter an HMC command to log in to the virtual FC port so that the port can be seen on the SAN for zone configuration. The command is chnportlogin. Refer to the following link for more details:

There is a web interface on the tape media library where you need to enable control paths from each device that you want IBM i to be able to work with. Selecting Enable creates the control paths. IBM i cannot dynamically detect these control paths. To detect the control paths, you need to re-IPL (reboot) the virtual I/O adapter (IOA). First determine which virtual IOA has been created for the virtual Fibre Channel adapters. To do this, enter a WRKHDWRSC *STG command and check for a 6B25 (virtual FC) adapter. Note the IOP/IOA name.

Next, use STRSST and select Start a service tool (option 1) -> Hardware service manager (7) -> Logical hardware resources (2) -> System bus resources (1). Enter 255 in the System bus(es) to work with field and press Enter. This is the virtual adapter bus. Locate the virtual IOA identified above and enter a 6 for I/O debug, then option 4 to IPL the IOA. Use F3 to exit SST. Return to WRKHDWRSC *STG and use option 9 to see the tape devices under the virtual FC IOA. With auto-configuration turned on, a new tape device(s) should show up under WRKCFGSTS *DEV TAP*.

Performing an IBM i save

In IBM i, the FC tape library is recognized with the correct type and model, such as for TS3500, and is assigned a standard TAPMLBxx device name. Each tape drive within the library mapped to the IBM i LPAR, for instance an LTO4 drive, is recognized as a device with a TAPxx name. The tape library connected through NPIV supports all standard and advanced library functions, as well as both native save and restore commands and BRMS commands. Use the standard backup procedure or refer to the IBM i Information Center's Backup and recovery topic at:

Performing an IBM i restore

IBM i can read from any supported tape media, depending upon the model of tape device used in the FC tape library, and can perform both full-system and partial restores. You can use the standard restore procedure or refer to the IBM i Information Center's Backup and recovery topic at:

Performing a D-mode IPL from a FC tape library using IVM

It is possible to perform a full-system save in an IBM i partition on a different system and then use that save for a D-mode IPL and full-system restore on a blade with access to a FC tape library. To perform a D-mode IPL from a tape library, use the following steps:

Note: Make sure that you have the requisite PTFs installed on the system prior to performing the save.
- Verify that the tape drive is powered on and the tape is loaded.
- Verify that the correct tape drive in the library is zoned to one of the WWPNs on the virtual FC client adapter in the IBM i LPAR.
- Verify that the correct virtual FC client adapter is selected as an alternate IPL resource for the IBM i partition:
  o In IVM, click View/Modify Partitions.
  o Click the IBM i partition.
  o On the General partition properties tab, verify that the virtual FC client adapter with the correct WWPNs is selected from the Alternate restart adapter list.
  o On the same tab, verify that the IBM i partition is configured for a D-mode manual IPL.
- Close the partition properties window.
- Select the IBM i partition and click Activate.

8.4 Backup and restore of VIOS and IVM

For a bare-metal type of restore of the entire POWER blade, backups of VIOS (which contains IVM) and the LPAR configuration are also required. Saving VIOS, the LPAR configuration and IBM i on the blade can be performed in any order. During a restore, VIOS and the LPAR configuration are restored first, followed by IBM i as described earlier in this section.

Backing up VIOS to tape

The VIOS (which includes IVM) can be backed up to tape using the following steps. The client (hosted) partitions can be active.
- Ensure a blank tape is loaded in the tape device you are using for the backup.
- Telnet or PuTTY into VIOS. Log in using the padmin user ID.
- Back up the partition profile data by typing the following command: bkprofdata -o backup -f profile.bak. This results in a profile backup file location of /home/padmin/profile.bak.

Alternatively, the partition profile can be backed up through IVM using the following steps:
o Log into the Integrated Virtualization Manager using the padmin user ID.
o On the Service Management menu, click Backup/Restore.
o On the Partition Configuration Backup/Restore tab, click the Generate Backup button.
o As soon as the operation is complete, the backup file location is displayed (the default location is /home/padmin/profile.bak). You can restore the partition profile by clicking Restore Partition Configuration.
o Note that the instructions on the Management Partition Backup/Restore tab will not work for a POWER blade. The steps that follow are the proper method.
- Find the tape device name by typing the following command: lsdev -type tape
- If the device has a status of Defined, type the following command, with name as the tape device name: cfgdev -dev name
- Check the current physical block size for the tape device using the following command, with name as the tape device name: lsdev -dev name -attr
- If the block size is smaller than required, change it using the command: chdev -dev name -attr block_size=
- Perform the VIOS backup, saving 512 blocks at a time, by using the following command: backupios -tape /dev/rmt0 -blocks 512

The resulting tape is a bootable VIOS backup.

Alternatively, the partition profile can be backed up through the SDMC interface using the following steps:
- On the SDMC welcome page, expand Hosts.
- Select the blade server and then click System Configuration -> Manage Virtual Server Data -> Backup.
- Enter a file name for the backup when prompted. Click OK.

Restoring VIOS from a SAS tape backup

The steps to restore the VIOS (including IVM) from a tape backup are described in this section. Ensure that the backup tape media is loaded in the tape device before proceeding. Perform the following steps to restore VIOS.
- Open a console for VIOS:
  o Telnet/PuTTY to the BladeCenter's AMM and log in with a valid AMM user ID and password.

  o Type the following command and press Enter, where x is the slot number of the POWER blade: env -T blade[x]
  o Type console and press Enter to assign the VIOS console.
- Power on or restart the POWER blade. This can be done in a number of ways, including using the white power button on the front of the blade or using the power on or restart option from the AMM web browser interface.
- When the Partition Firmware displays its initial screen (a series of IBMs scrolls across the console), press 1 to enter the SMS menu. If you miss the prompt to press 1, the partition will attempt to boot from the default device, which might boot back into VIOS if there is still a bootable image installed on the default device (probably the internal hard drive). Otherwise, the boot will fail and the SMS menu will eventually be displayed.
- On the SMS Main Menu, type 5 to select Select Boot Options and press Enter.
- On the Multiboot menu, type 1 to select Select Install/Boot Device and press Enter.

- On the Select Device Type menu, type 2 to select Tape and press Enter.
- On the Select Media Type menu, type 4 to select SAS Tape and press Enter.

- On the Select Device menu, type the device number for the tape device with the backup media loaded and press Enter.
- On the Select Task menu, type 2 for Normal Boot Mode and press Enter.
- Type 1 and press Enter to confirm that you want to exit the SMS menus. It will take several minutes to boot from the tape.

- As soon as you are past the Welcome to the Virtual I/O Server screen, a screen with the Please define the System Console message is displayed. Type 2 and press Enter as the screen directs. Note that these keystrokes will not appear on the screen.
- You may see a warning screen indicating that the disks on the system do not match the disks on the source (of the backup) system. Type 1 and press Enter to continue with the installation.

- On the Welcome to Base Operating System Installation and Maintenance menu, type 3 to select the Start Maintenance Mode for System Recovery option and press Enter.
- On the Maintenance menu, type 6 to select Install from a System Backup and press Enter.

- On the Choose mksysb Device menu, type the number of the device with the backup tape mounted and press Enter. The restore will now begin. A progress screen is displayed until the restore is complete. As soon as the restore is complete, the VIOS will reboot from the just-completed restore image.

After restoring VIOS, restore the partition data. The partition data can be restored by using the following command on the VIOS console (assuming you used the bkprofdata command as described earlier): rstprofdata -l 1 -f profile.bak --ignoremtms

Alternatively, perform the following steps to restore the partition data through IVM:
- Log into the Integrated Virtualization Manager using the padmin user ID.
- On the Service Management menu, click Backup/Restore.
- On the Partition Configuration Backup/Restore tab, the partition configuration backup file is listed. Click the Restore Partition Configuration button to restore the listed configuration.

Alternatively, the partition (virtual server) data can be restored through SDMC, if it was saved by the SDMC, using the following steps:
- Log into the SDMC using the sysadmin user ID.
- On the welcome page, right-click the host you want to restore.
- Click System Configuration -> Manage Virtual Server Data -> Restore.
- Select the backup file you want to restore.
- Select the type of restore you want to perform from the following:
  o Full Restore: restore from the backup file and do not merge with the existing SDMC virtual server data.
  o Backup Priority: merge the backup data with the current SDMC virtual server data, but prioritize the backup file contents if there are conflicts between it and current SDMC data.
  o Host Priority: merge the backup data with the current SDMC virtual server data, but prioritize the SDMC contents if there are conflicts between the current SDMC data and the backup data.
- Click OK.
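The command-line portions of the VIOS backup and restore procedures above can be summarized in one place. This dry-run sketch only prints the VIOS commands in order; rmt0 and profile.bak are the example names used in this section, and the helper function name is an assumption for illustration.

```shell
# Dry-run sketch: print, in order, the VIOS commands used in this
# section to back up the partition profile and VIOS itself to SAS
# tape, and to restore the profile after VIOS is reinstalled from
# the tape. 'rmt0' and 'profile.bak' are the example names from the text.
print_vios_backup() {
  local dev="${1:-rmt0}"
  echo "# --- Backup ---"
  echo "bkprofdata -o backup -f profile.bak   # partition profile data"
  echo "lsdev -dev ${dev} -attr               # check tape block size first"
  echo "backupios -tape /dev/${dev} -blocks 512   # bootable VIOS backup"
  echo "# --- After restoring VIOS from the tape (SMS boot) ---"
  echo "rstprofdata -l 1 -f profile.bak --ignoremtms"
}
print_vios_backup rmt0
```

The backupios tape is bootable, so the restore starts from the SMS menus as described above; rstprofdata is only run once the restored VIOS is back up.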

Appendix 1: i Edition Express for BladeCenter S

1.1 Overview

The i Edition Express for BladeCenter S is a packaged solution that includes a BladeCenter S chassis with SAS disk drives, a JS12/PS700 POWER blade server, IBM PowerVM and IBM i for 10 users. It is an integrated offering, priced similarly to a Power 520 Express i Edition, that makes it easier for clients to consolidate their existing IBM i and Windows workloads in a BladeCenter environment. For the minimum i Edition Express for BladeCenter S hardware configuration and further details, refer to the URL:

Note that service vouchers are not available for the JS12 when it is part of an i Edition Express order. Service vouchers are otherwise available for IBM i on all POWER blades. Refer to the following website for details and voucher registration:

The i Edition Express for BladeCenter S can optionally have VIOS and IBM i preloaded on the JS12/PS700. Refer to the next section in this paper for an overview of the IBM i preinstall on BladeCenter S and the post-ship configuration actions that must be performed by the implementing party.

1.2 IBM i preinstall on BladeCenter S

IBM i preinstall overview

In October 2008, IBM announced the capability to preinstall IBM i in a BladeCenter S configuration with the BladeCenter JS12 and JS22. This greatly simplifies the implementation of IBM i in a BladeCenter S environment and provides the client with a white-button-ready server, similar to an IBM Power 520 server. In April 2009, IBM added preinstall support for the JS23 and JS43. In October 2009, IBM expanded the preinstall options to include BladeCenter S with SAS RAID Controller Module configurations. In April 2010, IBM expanded the preinstall options to include the PS7xx blades. The IBM i preinstall option is available only when the BladeCenter S and the POWER blade(s) are ordered together.
The preinstall is not available when a POWER blade is ordered separately or as part of a BladeCenter H configuration, because IBM manufacturing does not have access to the storage that is used for IBM i in that case. i Express Edition for BladeCenter S orders can automatically have VIOS and IBM i preloaded on the JS12 or PS700. This is a change for the offering, which previously included a preinstall only of VIOS.

IBM i preinstall with the SAS Connectivity Module

The SAS Connectivity Module is also known as the non-RAID SAS Switch Module, or NSSM. When a client orders one or two NSSMs and the IBM i preinstall with BladeCenter S, IBM manufacturing will perform the following tasks:
- Check that all requirements in the order are met.
- Activate the correct SAS zone configuration on the BladeCenter S to assign drives to the POWER blade.

- Create the VIOS partition on the POWER blade.
- Install VIOS.
- Mirror the VIOS installation.
- Create the IBM i partition and assign two drives in the BladeCenter S to it.
- Install the Licensed Internal Code (LIC).
- Start mirroring in IBM i.
- Install the IBM i operating environment and the licensed program products (LPPs) specified on the order.

The VIOS partition is always installed on the drive(s) on the POWER blade. In the case of the JS12, JS43, PS700, PS702 or PS704, VIOS is mirrored to the second drive on the blade. In the case of the JS22, JS23, PS701 or PS703, VIOS is mirrored to a separate drive in the BladeCenter S. IBM i is always installed and mirrored on two drives in the BladeCenter S. Additional drives can be added to the IBM i installation later by the implementing party, depending on the SAS zone configuration selected.

IBM i preinstall with the SAS RAID Controller Module

The SAS RAID Controller Module is also known as the RAID SAS Switch Module (RSSM), and two must always be ordered. When a client selects the IBM i preinstall on BladeCenter S with RSSMs, IBM manufacturing will perform the following tasks:
- Check that all requirements on the order are met.
- Create one RAID5 array (or storage pool) with three drives for each POWER blade with the IBM i preinstall that is part of the order.
- Create one 36-GB volume for VIOS on each array and assign the volume to the correct blade.
- Create one 100-GB volume for IBM i on each array and assign the volume to the correct blade.
- Install VIOS on the 36-GB volume assigned to the blade.
- Install the IBM i LIC, operating environment and LPPs specified on the order on the 100-GB volume assigned to the blade.

In this case, both VIOS and IBM i are installed on separate volumes in the same storage pool of three drives in the chassis.
Therefore, it is recommended that clients not order drives on the POWER blades when selecting the IBM i preinstall on BladeCenter S with RSSMs. If drives are ordered on the POWER blades, those drives can be used after implementation. However, note that drives on the POWER blades cannot be included in an RSSM storage pool and would be a very low-performance option for IBM i workloads. Using the drives on the POWER blades for IBM i continues to be a non-recommended option.

Requirements

Before configuring a BladeCenter S for the IBM i preinstall, IBM manufacturing checks to ensure that all requirements for a successful installation are met. If any requirements are not met, the preinstall will fail.

Requirements for NSSM

The following features are required in an IBM i preinstall configuration on BladeCenter S with NSSMs:

- FC 8250 (SAS Expansion Card (CFFv) for IBM BladeCenter) on the JS12 or JS22.
- FC 8246 (SAS Pass-through Expansion Card (CIOv) for IBM BladeCenter) on the JS23, JS43, or PS7xx.
- When configuring a JS12, JS43, PS700, PS702 or PS704, both drives on the blade are required.
- When configuring a JS22, JS23, PS701 or PS703, one drive on the blade is required.
- All drives in the BladeCenter S must be the same size as part of an IBM i preinstall order.
- When configuring a JS12, JS43, PS700 or PS702, at least two drives are required in the BladeCenter S for each POWER blade with the IBM i preinstall, using one of the predefined SAS zone configuration feature codes.
- When configuring a JS22, JS23 or PS701, at least three drives are required in the BladeCenter S for each POWER blade with the IBM i preinstall, using one of the predefined SAS zone configuration feature codes.
  o The drives in the BladeCenter S must be the same size or bigger than the drives on the JS22, JS23, PS701 or PS703.
  o One of the drives in the BladeCenter S will be used to mirror the VIOS install from the drive on the JS22, JS23, PS701 or PS703.
- FC 5005 (Software Preinstall), FC 8146 (VIOS Preinstall) and FC 8141 (IBM i Preinstall) must be ordered.
  o In the case of the i Express Edition for BladeCenter S, FC 8141 is not required.
All i Express Edition orders with FC 0775 will now get IBM i preinstalled.

Table 4 shows the available feature codes for predefined SAS zone configurations for BladeCenter S. For each predefined storage configuration feature code, the table lists: up to # of blades, up to # of drives per blade, up to # of DSMs, # of SAS I/O modules, whether the feature code is valid for the JS12, JS43, PS700 and PS702, the minimum BladeCenter S drives per blade, and whether the feature code is valid for the JS22, JS23, PS701 and PS704.

5059* Yes 3 No
5067* Yes 3 No
Yes 3 Yes
Yes 3 Yes
5084** Yes 3 Yes
5085** Yes 3 Yes
Yes 3 Yes
Yes 3 Yes

Table 1: IBM i preload requirements
* Not valid with 1 DSM
** Not valid with JS22, JS23, PS701 or PS703 and 1 DSM

Requirements for RSSM

The following features are required in an IBM i preinstall configuration on BladeCenter S with RSSMs:
- FC 8250 (SAS Expansion Card (CFFv) for IBM BladeCenter) on the JS12 or JS22.
- FC 8246 (SAS Pass-through Expansion Card (CIOv) for IBM BladeCenter) on the JS23, JS43, PS700, PS701 or PS702.

- All drives in the BladeCenter S must be the same size if they are part of an IBM i preinstall order.
- Three BladeCenter S drives are required per POWER blade with the preinstall selected.
- FC 5005 (Software Preinstall), FC 8146 (VIOS Preinstall) and FC 8141 (IBM i Preinstall) must be ordered. In the case of the i Express Edition for BladeCenter S with RSSM, FC 8141 is not required. All i Express Edition orders with FC 0776 will get IBM i preinstalled.

Installation steps that must be performed by the implementing party

While the IBM i preinstall capability greatly simplifies the installation of IBM i in a BladeCenter S environment, some manual steps must be performed by the party implementing this solution:
- Install the BladeCenter S hardware, as discussed in the Install the BladeCenter and blade server hardware section of this paper.
- Configure and update the AMM, as discussed in the Configure the Advanced Management Module (AMM), Download BladeCenter management module firmware and Update the BladeCenter firmware sections.
- Configure and update any I/O modules and DSMs, as discussed in the Update the BladeCenter firmware section.
- Create a physical network connection to I/O module bay 1 of the BladeCenter S, to allow connection of the LAN console for the IBM i partition created and installed by IBM manufacturing.
- Install the Operations Console (LAN) component of the System i Access for Windows software on a PC connected to the network on which the BladeCenter S resides.

Creating multiple Virtual SCSI adapters per IBM i partition

With the availability of VIOS 2.1 in November 2008, it is possible to create multiple Virtual SCSI client adapters per IBM i partition on a POWER blade. POWER blades can be IVM-managed or SDMC-managed.
This allows for increased flexibility in configuring storage and optical devices for IBM i in the blade environment:
- More than 16 disk and 16 optical devices can be virtualized by VIOS per IBM i partition.
- Disk and optical devices can be configured on separate Virtual SCSI adapters.

The IVM web browser interface creates a single Virtual SCSI client adapter per client partition and a corresponding Virtual SCSI server adapter in VIOS. With the SDMC, you can manually create more adapter pairs. The exception for IVM is tape virtualization, as described in the Multiple Virtual SCSI adapters and virtual tape using IVM section.

Creating multiple Virtual SCSI adapters using the VIOS CLI

To create additional Virtual SCSI client adapters under IVM, you must use the VIOS command line:
- Log into VIOS with padmin or another administrator user ID. VIOS always has partition ID 1 when IVM is used, and by default carries the serial number of the blade as a name.
- To display the current names and IDs of all existing partitions, use the following command: lssyscfg -r lpar -F "name,lpar_id"
- If the IBM i partition is not activated, refer to the following example that adds a new Virtual SCSI client adapter in slot 5 of IBM i partition test, connecting to a server

adapter in the next available slot (chosen automatically by IVM) in the partition named VIOS:
  chsyscfg -r prof -i "name=test,virtual_scsi_adapters+=5/client/1/vios//1"
  The corresponding server adapter in VIOS is created automatically by IVM.
- If the IBM i partition is running, refer to the following example that creates a new Virtual SCSI client adapter in slot 5 of IBM i partition test, connecting to a server adapter in the next available slot (chosen automatically by IVM) in partition VIOS:
  chhwres -r virtualio --rsubtype scsi -p test -o a -s 5 -a "adapter_type=client"
  The corresponding server adapter in VIOS is created automatically by IVM.

Notice that there are three variables in the previous commands: the name of the IBM i partition, the new slot for the Virtual SCSI client adapter, and the name of the VIOS partition. To display which virtual slots have already been used, by partition and adapter, use: lshwres -r virtualio --rsubtype slot --level slot (notice the double dashes). The type of adapter is shown in the middle of each line, and the slot numbers are shown at the beginning of each line.

Creating multiple Virtual SCSI adapters using the HMC

When using an HMC to manage POWER blades, you can create the virtual SCSI adapter pair that forms the virtual SCSI connection between VIOS and the IBM i client partition. This section shows how to create a server adapter and client adapter pair. You need to determine the next adapter slot that is available for both the VIOS and the IBM i partitions; the slot numbers are used to tie the adapters together. Use the current configuration of the VIOS and IBM i partitions to note the next available virtual adapter slot number.

Create Server SCSI Adapter

Perform the following steps to create a server SCSI adapter:
- Log into the HMC with the hscroot user ID or another user ID.
- Click the Managed System link in the navigation pane.
- Select the blade from the list of hosts and select the checkbox next to the VIOS you are configuring.
- Click Configuration -> Manage Profiles.
- Select a profile and select Actions -> Edit to edit the profile.
- Click the Virtual Adapters tab and then click Actions -> Create Virtual Adapter -> SCSI Adapter.
- Specify the next available Server Adapter slot number (determined above) and click Only selected client partition can connect. Do not use Any partition can connect; that option will not work.
- Use the list to select the IBM i client partition you want to use.
- Specify the next available Client Adapter slot number (determined above) and click OK to create the server Virtual SCSI Adapter.

Create Client SCSI Adapter

Perform the following steps to create a client SCSI adapter:

- Click the Resources tab of the SDMC welcome screen and select the checkbox next to the IBM i client you are configuring.
- Click Actions -> System Configuration -> Manage Profiles.
- Select a profile and select Actions -> Edit to edit the profile.
- Click the Virtual Adapters tab and then click Actions -> Create Virtual Adapter -> SCSI Adapter.
- Specify the next available Client Adapter slot number (determined above) and select the VIOS from the Server partition list.
- Specify the next available Server Adapter slot number (determined above) and click OK to create the client Virtual SCSI Adapter.

Creating multiple Virtual SCSI adapters using the SDMC

When using an SDMC to manage POWER blades, you can create the virtual SCSI adapter pair that forms the virtual SCSI connection between VIOS and the IBM i client partition. This section shows how to create a server adapter and client adapter pair. You need to determine the next adapter slot that is available for both the VIOS and the IBM i partitions; the slot numbers are used to tie the adapters together. Use the current configuration of the VIOS and IBM i partitions to note the next available virtual adapter slot number.

Create Server SCSI Adapter

Perform the following steps to create a server SCSI adapter:
- Log into the SDMC with the sysadmin user ID or another user ID.
- From the Resources tab of the SDMC, select the blade from the list of hosts and select the checkbox next to the VIOS you are configuring.
- Click Actions -> System Configuration -> Manage Profiles.
- Select a profile and select Actions -> Edit to edit the profile.
- Click the Virtual Adapters tab and then click Actions -> Create Virtual Adapter -> SCSI Adapter.
- Specify the next available Server Adapter slot number (determined above) and click Only selected client partition can connect.
- Use the list to select the IBM i client partition you want to use.
- Specify the next available Client Adapter slot number (determined above) and click OK to create the server Virtual SCSI Adapter.

Create Client SCSI Adapter

Perform the following steps to create a client SCSI adapter:
- Click the Resources tab of the SDMC welcome screen and select the checkbox next to the IBM i client you are configuring.
- Click Actions -> System Configuration -> Manage Profiles.
- Select a profile and select Actions -> Edit to edit the profile.
- Click the Virtual Adapters tab and then click Actions -> Create Virtual Adapter -> SCSI Adapter.
- Specify the next available Client Adapter slot number (determined above) and select the VIOS from the Server partition list.
- Specify the next available Server Adapter slot number (determined above) and click OK to create the client Virtual SCSI Adapter.
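For IVM-managed blades without an HMC or SDMC, the equivalent commands from the Creating multiple Virtual SCSI adapters using the VIOS CLI section above can be assembled as in this dry-run sketch. The partition name test, slot 5 and VIOS partition ID 1 are the example values used in that section; the commands are printed rather than executed, since chsyscfg and chhwres exist only on the VIOS CLI.

```shell
# Dry-run sketch: assemble the two IVM/VIOS CLI commands for adding a
# Virtual SCSI client adapter. The commands are echoed, not run.
LPAR="test"
SLOT=5
VIOS_ID=1   # VIOS is always partition ID 1 under IVM

# Partition not activated: change the profile.
CMD_PROF="chsyscfg -r prof -i \"name=${LPAR},virtual_scsi_adapters+=${SLOT}/client/${VIOS_ID}/vios//1\""

# Partition running: add the adapter dynamically.
CMD_DLPAR="chhwres -r virtualio --rsubtype scsi -p ${LPAR} -o a -s ${SLOT} -a \"adapter_type=client\""

echo "$CMD_PROF"
echo "$CMD_DLPAR"
```

Only the partition name, slot number and VIOS identifier change between environments; everything else in the two command forms is fixed.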

1.2.7 Mapping storage to new Virtual SCSI adapters using the VIOS CLI

After the new Virtual SCSI client adapter for IBM i and the server adapter for VIOS are created, you can assign additional LUNs to IBM i by mapping them to the new server adapter in VIOS. Alternatively, you can map an optical drive to a separate Virtual SCSI connection. Note that even if you create a new Virtual SCSI adapter on the command line as described above, the IVM web interface will not use it to map LUNs or optical devices to IBM i; the IVM interface does not show the new vscsi adapter or allow you to select it. Only map up to 16 disks to a client partition when using the IVM interface. More LUNs are allowed to be mapped from IVM because AIX supports 256, but IBM i does not. The assignment of LUNs and optical devices to IBM i using a Virtual SCSI adapter other than the first virtual adapter must be done explicitly using the VIOS CLI.

To display LUNs (or other physical volumes, such as SAS disks) that are available to be assigned to IBM i on the blade, use the following list physical volume command: lspv -avail

To display all existing virtual resource mappings by Virtual SCSI server adapter in VIOS (vhostx) and client partition, as well as any newly created Virtual SCSI server adapters, use the following list mappings command: lsmap -all | more

Press the Space bar to move forward one screen of data at a time and the Down Arrow key for one line at a time; enter q to quit. Any new Virtual SCSI server adapters will have no resources mapped to them.
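The inspection commands above, together with the mkvdev mapping command this section goes on to describe, can be sketched as a dry run. hdisk7 through hdisk9 and vhost1 are example device names; every command is echoed rather than executed, since these commands exist only on the VIOS CLI.

```shell
# Dry-run sketch: inspect, then generate mkvdev mappings for a list of
# LUNs, stopping at the 16-disk-per-adapter limit that IBM i enforces.
echo "lspv -avail        # list physical volumes free to be mapped"
echo "lsmap -all | more  # review existing vhost-to-device mappings"

VHOST="vhost1"
DISKS="hdisk7 hdisk8 hdisk9"
MAPPED=0
for D in $DISKS; do
  if [ "$MAPPED" -ge 16 ]; then   # IBM i sees at most 16 disks per adapter
    echo "# $D skipped: 16-disk limit reached"
    continue
  fi
  LAST_CMD="mkvdev -vdev $D -vadapter $VHOST"
  echo "$LAST_CMD"
  MAPPED=$((MAPPED + 1))
done
```

The 16-disk cap is worth scripting because, as noted above, neither IVM nor the HMC/SDMC interfaces stop you from exceeding it.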
Assuming that hdisk7 is an available LUN and vhost1 is a newly created Virtual SCSI server adapter, use the following command to make hdisk7 available to IBM i: mkvdev -vdev hdisk7 -vadapter vhost1

The -dev parameter can be used to name the device something more meaningful, such as <partition_name_diskx>.

The lsmap command above will also show whether the physical DVD drive in the BladeCenter (typically cd0) is already assigned to another client partition. If so, a vtoptx device will exist under a Virtual SCSI server adapter (vhostx). To map the DVD drive to a different Virtual SCSI adapter, first delete the correct existing vtoptx device (such as vtopt0) using the following command: rmdev -dev vtopt0

Skip the previous step if the DVD drive is not already assigned to a client partition. Next, assign the physical optical device (such as cd0) to the IBM i partition using the correct separate Virtual SCSI adapter (such as vhost1) using the following command: mkvdev -vdev cd0 -vadapter vhost1

To map a file-backed optical device to a new Virtual SCSI adapter (such as vhost1), use the following command: mkvdev -fbo -vadapter vhost1

Mapping storage to new Virtual SCSI adapters using the HMC

Use the following steps:

- In the HMC navigation pane, click Managed Systems. Then select the POWER blade by clicking the checkbox next to it.
- In the lower menu, expand Configuration and then Virtual Resources. Click Virtual Storage Management.
- Select the correct VIOS from the list and click Query VIOS.
- Click the Physical Storage tab. A list of hdisks is shown.
- In the upper middle section of the pane, use the pull-down to choose the combination of partition name and virtual SCSI adapter to associate the hdisk with. You can map up to 16 hdisks to each VSCSI adapter, but the interface will not stop you from mapping more. This is because AIX partitions can also use this interface and AIX supports up to 256 hdisks per adapter (though they seldom map that many). If you map more than 16, the additional hdisks will not be seen by IBM i.
- Choose an unassigned hdisk to map to the adapter and click Assign. Repeat this process for each hdisk.
- As the mapping completes, the IBM i client partition should be listed as the new hdisk owner. Close the window when done.

Mapping storage to new Virtual SCSI adapters using the SDMC

Use the following steps:
- On the SDMC welcome page, click Hosts.
- Right-click the POWER blade and select System Configuration -> Virtual Resources -> Virtual Storage Management.
- Select the correct VIOS from the list and click Query VIOS.
- Click the Physical Storage tab. A list of hdisks is shown.
- In the upper middle section of the pane, use the pull-down to choose the combination of partition name and virtual SCSI adapter to associate the hdisk with. You can map up to 16 hdisks to each VSCSI adapter, but the interface will not stop you from mapping more. This is because AIX partitions can also use this interface and AIX supports up to 256 hdisks per adapter (though they seldom map that many). If you map more than 16, the additional hdisks will not be seen by IBM i.
- Choose an unassigned hdisk to map to the adapter and click Assign. Repeat this process for each hdisk.
- As the mapping completes, the IBM i client partition should be listed as the new hdisk owner. Close the window when done.

Removing Virtual SCSI adapters using the VIOS CLI

The VIOS command line is also used to remove Virtual SCSI client adapters from an IBM i partition. Note that removing a Virtual SCSI client adapter from IBM i will make any devices it provides access to unavailable. As mentioned above, to check which devices in VIOS are mapped to which Virtual SCSI server adapter, and therefore which client partition, use the following command on the VIOS command line: lsmap -all | more

Press the Spacebar to move forward one screen of data at a time and the Down Arrow key for one line at a time; enter q to quit. To remove a Virtual SCSI client adapter when the IBM i partition is not activated, refer to the following example, which removes the client adapter in slot 5 of IBM i partition test:

chsyscfg -r prof -i "name=test,virtual_scsi_adapters-=5/////" (note the minus sign before the equal sign)

To remove a Virtual SCSI client adapter when the IBM i partition is running, refer to the following example, which removes the client adapter in slot 5 of IBM i partition test: chhwres -r virtualio --rsubtype scsi -p test -o r -s 5

Removing virtual SCSI adapters using SDMC

Note that removing a Virtual SCSI client adapter from IBM i will make any devices it provides access to unavailable.
- Log into the SDMC with the sysadmin user ID or another user ID.
- On the Resources tab of the SDMC, select the blade from the list of hosts and select the checkbox next to the IBM i virtual server you are configuring.
- Click Actions -> System Configuration -> Manage Profiles.
- Select a profile and then click Actions -> Edit to edit the profile.
- Click the Virtual Adapters tab and then click Actions -> Delete.

Multiple Virtual SCSI adapters and virtual tape using IVM

VIOS can now directly virtualize a SAS-attached tape drive to IBM i client partitions. VIOS does so by mapping the physical tape device, rmtx, to a Virtual SCSI server adapter, which is then connected to a Virtual SCSI client adapter in IBM i. A separate pair of Virtual SCSI server and client adapters is used for tape virtualization. However, there is no need to manually add Virtual SCSI adapters specifically for tape. When the tape drive is assigned to an IBM i partition in IVM, IVM automatically creates a new Virtual SCSI server adapter in VIOS and a Virtual SCSI client adapter in that IBM i partition. IVM does so for each tape device assigned to IBM i. It is possible to manually add a new pair of Virtual SCSI server and client adapters for tape virtualization and map the tape drive to IBM i. It is also possible to map a tape drive to an already existing Virtual SCSI server adapter in VIOS used for disk or optical virtualization.
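The manual tape-mapping approach just described can be sketched as a dry run. rmt0 and vhost2 are example device names (not taken from a live system); the commands are printed rather than executed, since they exist only on the VIOS CLI.

```shell
# Dry-run sketch of manually mapping a SAS tape drive to a Virtual SCSI
# server adapter. Device names are examples; commands are echoed, not run.
TAPE="rmt0"
VHOST="vhost2"

echo "cfgdev            # pick up newly attached devices"
echo "lsdev | grep rmt  # confirm ${TAPE} shows as Available"
CMD_MAP="mkvdev -vdev ${TAPE} -vadapter ${VHOST}"
echo "$CMD_MAP"
echo "lsmap -all | more # verify a vttapeX device under ${VHOST}"
```

As the text notes, letting IVM create the tape adapter pair automatically remains the recommended approach; this manual flow is only for the cases where you choose the adapter yourself.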
However, the recommended approach to making tape drives available to IBM i is to use IVM.

Configuring Virtual tape using the HMC

There is not a graphical interface for the configuration of virtual tape. Perform the following steps to configure virtual tape using the VIOS CLI:
- Use either the Save and restore with a single LTO4/5 SAS tape drive section or the Save and restore with a Fibre Channel-attached tape library section to add a virtual tape device.
- Use telnet or PuTTY to connect to the VIOS partition. Sign in using the padmin user ID.
- Enter cfgdev to check for new devices.
- Enter lsdev | grep rmt to view the tape devices and ensure that they are in Available state.
- Enter lsdev | grep vhost and note the last vhost listed there.
- You need to associate this device with a VSCSI adapter pair, and you need to use the HMC interface to create those. Refer to the Increasing the number of virtual adapters in the IBM i partition section for details, then return to this step.
- On the VIOS CLI, enter lsdev | grep vhost. There should be a new vhosty listed. This vhosty is the VSCSI adapter in VIOS that you just created.
- To map the tape drive to the vhosty, enter mkvdev -vdev <rmtx> -vadapter vhosty.
- Enter lsmap -all | more and press the Spacebar to advance through the mappings. Look for the vhosty and make sure the vttapez device is associated with it.

- On the IBM i virtual server with auto configuration turned on, a TAPxx device appears. Vary it on to use it.

Configuring Virtual tape using the SDMC

There is not a graphical interface for the configuration of virtual tape at this time. Perform the following steps to configure virtual tape using the VIOS CLI:
- Use either the Save and restore with a single LTO4/5 SAS tape drive section or the Save and restore with a Fibre Channel-attached tape library section to add a virtual tape device.
- Use telnet or PuTTY to connect to the VIOS partition. Sign in using the padmin user ID.
- Enter cfgdev to check for new devices.
- Enter lsdev | grep rmt to view the tape devices and ensure that they are in Available state.
- Enter lsdev | grep vhost and note the last vhost listed there.
- You need to associate this device with a VSCSI adapter pair, and you need to use the SDMC interface to create those. Refer to the Increasing the number of virtual adapters in the IBM i partition section for details, then return to this step.
- On the VIOS CLI, enter lsdev | grep vhost. There should be a new vhosty listed. This vhosty is the VSCSI adapter in VIOS that you just created.
- To map the tape drive to the vhosty, enter mkvdev -vdev <rmtx> -vadapter vhosty.
- Enter lsmap -all | more and press the Spacebar to advance through the mappings. Look for the vhosty and make sure the vttapez device is associated with it.
- On the SDMC, update the inventory to view the changes.
- On the IBM i virtual server with auto configuration turned on, a TAPxx device appears. Vary it on to use it.

1.3 End to end LUN mapping using HMC

In October 2009, IBM enhanced both the HMC and VIOS to allow end-to-end device mapping for LUNs assigned to client LPARs, such as IBM i. The new function enables administrators to quickly identify which LUN reporting in VIOS (or hdisk) is which DDxxx disk device in IBM i. This in turn makes it easier to troubleshoot disk-related problems and safer to change a virtualized disk configuration.
In order to correctly perform the mapping, the HMC requires an active RMC connection to VIOS. To perform end-to-end LUN device mapping, use the following steps:
- Sign in to the HMC with the hscroot user ID.
- Expand Systems Management, then expand Servers.
- Click the correct managed system (server).
- Select the correct VIOS by using the checkbox.
- Click Hardware Information -> Virtual I/O Adapters -> SCSI. You will be shown a list of drives associated with their VIOS hdisks.
- Click back on the word Servers in the left-hand navigation pane of the HMC.
- Select the correct managed server by selecting its checkbox.
- In the menu below, expand Configuration and then Virtual Resources. Click Virtual Storage Management.
- Select the correct VIOS from the list and click Query VIOS.
- Click the Physical Volumes tab. The hdisks that VIOS sees are shown along with the partition that they are assigned to.
- On the far right side of the Physical Location Code column, there is an L#00000 value. This is the LUN number associated with the hdisk. This is a hexadecimal number.
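Because the LUN value at the end of the physical location code is hexadecimal, a quick shell conversion helps when matching it against the SAN interface. The location-code suffix below is a made-up example, not taken from a real system.

```shell
# Sketch: convert the hexadecimal LUN value from the end of a physical
# location code to decimal. The suffix is a hypothetical example; on a
# real system it comes from the HMC Physical Volumes view.
SUFFIX="L#000000000000000A"
HEX=${SUFFIX#L#}              # strip the leading "L#"
DEC=$(printf '%d' "0x$HEX")   # interpret the remainder as hex
echo "LUN $SUFFIX = decimal $DEC"
```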

Use the SAN interface to determine which volume has that LUN number. You may have to convert the hex number to a decimal number (I know, it's been a while, but you can do it!). If the SAN is the V7000, look for the SCSI ID as the LUN number.

1.4 End to end LUN mapping using VIOS CLI

If at all possible, use the HMC or SDMC interfaces for this process. You can use the lsdev -dev <hdiskx> -vpd command to map the hdisks back to the volumes, but there is not an easy way to map the DDxxx devices to the hdisks using the CLI.

1.5 End to end LUN mapping using SDMC

If you need to map the VIOS hdisks back to their associated LUNs for debugging, there is an SDMC interface to help do that:
- On the SDMC, right-click the hosting VIOS virtual server and go to System Configuration -> Manage Virtual Storage.
- You will be able to see virtual mappings under Storage Adapters and Storage Devices.
- There are plans to add client disk information to the Storage Adapters view in future releases of the SDMC.

1.6 NPIV configuration steps using the HMC/SDMC

There are three general steps in configuring NPIV for IBM i:
- LPAR and VIOS configuration on the Power server's management console
- Storage subsystem or tape library configuration
- SAN zoning

To perform the LPAR and VIOS setup, refer to Chapter 2.9 in the Redbooks publication PowerVM Virtualizing Managing and Monitoring (SG ) at: While the examples given are for an AIX client of VIOS, the procedure is identical for an IBM i client. To perform the storage or tape configuration, refer to the Redbooks publications IBM i and IBM System Storage: A Guide to Implementing External Disks on IBM i (SG ) at: or Implementing IBM Tape in i5/os (SG ) at

As mentioned above, from the storage subsystem's or tape library's perspective, the configuration is identical to that for IBM i directly attached through the SAN fabric. There is a web interface on the tape media library where you need to enable control paths from each device that you want IBM i to be able to work with. Selecting Enable creates the control paths.
IBM i cannot dynamically detect these control paths. To detect the control paths, you need to re-IPL the virtual I/O adapter (IOA). First, determine which virtual IOA has been created for the virtual FC adapters. To do this, enter a WRKHDWRSC *STG command and check for a 6B25 (virtual FC) adapter. Note the IOP/IOA name. Next, use the STRSST command:
- Use option 1 to start a service function.
- Option 7, Hardware service manager.
- Option 2, Logical Hardware Resources.
- Option 1, System bus resources.
- Enter 255 in the System bus(es) to work with field and press Enter. This is the virtual adapter bus.
- Locate the virtual IOA from above.

- Enter option 6 for I/O debug.
- Then option 4 to IPL the IOA.
- Use F3 to exit SST.

Return to the WRKHDWRSC *STG display and use option 9 to work with the tape devices under the VFC IOA. With auto configuration turned on, a new tape device (or devices) should show up under WRKCFGSTS *DEV TAP*. You may have to check the port type defined on the tape media library for the Fibre Channel port associated with the tape device:
- Log into the tape library interface.
- Go to Configure Library.
- Then select Drives.
- Set the port type to N-Port.

Accordingly, DS8000 LUNs might be created as IBM i protected or IBM i unprotected and will correctly report as such to Storage Management in IBM i. A tape library and a drive within it will also report the correct device names and types, such as TAPMLBxx, 3584, TAPxx, and so on. All tape operations supported with direct FC attachment are supported through NPIV, including hardware tape encryption.

Install System i Access for Windows, as described in the Install System i Access for Windows (IVM installed) section.
Configure the LAN console connection on the PC, as discussed in the Create the LAN console connection on the console PC (IVM install) section.
Power on the blade by pressing the white button on its front (it is hidden behind a protective screen).
Start the LAN console connection on the PC.
The VIOS and IBM i partitions on the blade are configured to start automatically. However, because this is the first time the blade starts outside of IBM manufacturing, the system firmware and VIOS licenses must be accepted before the boot can continue. Open a console to VIOS on the blade, as described in the Opening a console for VIOS using Serial-over-LAN (SOL) section. After several minutes, you will be presented with the system firmware language screen. Type 2 and press Enter to continue booting, or type 1 and press Enter to change the firmware screen language. The firmware license screen is then displayed.

Type 1 and press Enter to accept the license agreement. As soon as VIOS boots, you will be prompted to confirm the console screen. Type 2 and press Enter. Sign in with the padmin user ID and set a password.
Accept the VIOS license as described in the Completing the install section.
Configure networking in VIOS as described in the Configure networking in VIOS (if necessary) section.
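The license and networking steps above come down to a few commands in the VIOS restricted shell. This is a sketch only: the hostname, IP addresses, and the en0 interface are placeholders for your environment, not values from the guide.

```shell
# On the VIOS restricted shell, signed in as padmin:
license -accept                      # accept the VIOS license

# Configure VIOS networking on one interface (all values are placeholders):
mktcpip -hostname vios1 -inetaddr 10.10.10.10 -interface en0 \
        -netmask 255.255.255.0 -gateway 10.10.10.1

lstcpip                              # verify the TCP/IP configuration
```

Run mktcpip only once per interface; rerunning it with different values reconfigures the interface.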

Configure the virtual Ethernet bridge for IBM i LAN console as described in the Configure the Virtual Ethernet bridge for IBM i LAN console using IVM section.
Connect LAN console to the IBM i partition as described in the Install IBM i using the IVM interface section.
The IBM i partition is now started and accessible through the LAN console. Further setup tasks may involve adding disk units to the system ASP, configuring networking in IBM i, installing additional LPPs, PTFs and applications, migrating data from an existing system, or creating users, similar to the tasks on an IBM Power 520 server.

2 DS4000 or DS5000 Copy Services and IBM i
IBM has conducted some basic functional testing of DS4000 or DS5000 Copy Services with IBM i as a client of VIOS. In this section, you can find information on the scenarios tested and the resulting statements of support for using DS4000 or DS5000 Copy Services with IBM i.

2.1 FlashCopy and VolumeCopy
2.1.1 Test scenario
The following diagram shows the test environment used for FlashCopy and VolumeCopy:

2.1.2 FlashCopy and VolumeCopy support statements
The use of DS4000 or DS5000 FlashCopy and VolumeCopy with IBM i as a client of VIOS is supported as outlined in this section. Note that to implement and use this solution, multiple manual steps on the DS4000 or DS5000 storage subsystem, in VIOS and in IBM i are required. Currently, no toolkit that automates this solution exists, and it is not part of IBM PowerHA for IBM i. The components of the solution (DS4000 or DS5000 FlashCopy or VolumeCopy, VIOS, and IBM i) must be managed separately and require the corresponding skill sets. Note also that support for this solution is provided by multiple IBM support organizations and not solely by the IBM i Support Center.
Support statements:
DS4000 or DS5000 FlashCopy and VolumeCopy are supported with IBM i as a client of VIOS on both IBM Power servers and IBM POWER blades.
Full-system FlashCopy and VolumeCopy when the production IBM i LPAR is powered off are supported.
Full-system FlashCopy and VolumeCopy when the production IBM i LPAR is in restricted state are supported.
The DS4000 or DS5000 'disable' and 're-create' functions with full-system FlashCopy and VolumeCopy when the production IBM i LPAR is powered off or is in restricted state are supported.

Full-system FlashCopy and VolumeCopy of the production IBM i logical partition (LPAR) after only using the IBM i 6.1 memory flush to disk (quiesce) function are not supported.
Full-system FlashCopy and VolumeCopy when the production IBM i LPAR is running are not supported.
FlashCopy and VolumeCopy of Independent Auxiliary Storage Pools (IASPs) are not supported.
Having the production and backup IBM i LPARs under the same VIOS is not supported.
For assistance with using DS4000 or DS5000 FlashCopy and VolumeCopy with IBM i, contact IBM Lab Services using this Web site:

2.2 Enhanced Remote Mirroring (ERM)
2.2.1 Test scenario
The following diagram shows the test environment used for ERM:
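The full-system FlashCopy/VolumeCopy support statements above can be captured in a small lookup table keyed by the production LPAR state. A hypothetical sketch, not an IBM-provided tool:

```python
# Summary of the DS4000/DS5000 full-system FlashCopy/VolumeCopy support
# statements, keyed by the state of the production IBM i LPAR.
SUPPORTED_BY_LPAR_STATE = {
    "powered off":      True,
    "restricted state": True,
    "quiesced":         False,  # IBM i 6.1 memory flush to disk alone: not supported
    "running":          False,
}

def full_system_copy_supported(lpar_state: str) -> bool:
    """Return True only for states the support statements list as supported."""
    return SUPPORTED_BY_LPAR_STATE.get(lpar_state, False)

print(full_system_copy_supported("restricted state"))
print(full_system_copy_supported("running"))
```

Note that this covers only the full-system cases; IASP FlashCopy/VolumeCopy and same-VIOS production/backup configurations are not supported regardless of LPAR state.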


More information

Converged Networking Solution for Dell M-Series Blades. Spencer Wheelwright

Converged Networking Solution for Dell M-Series Blades. Spencer Wheelwright Converged Networking Solution for Dell M-Series Blades Authors: Reza Koohrangpour Spencer Wheelwright. THIS SOLUTION BRIEF IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL

More information

HP VMware ESXi 5.0 and Updates Getting Started Guide

HP VMware ESXi 5.0 and Updates Getting Started Guide HP VMware ESXi 5.0 and Updates Getting Started Guide Abstract This guide is intended to provide setup information for HP VMware ESXi. HP Part Number: 616896-002 Published: August 2011 Edition: 1 Copyright

More information

Cisco Expressway CE500 Appliance

Cisco Expressway CE500 Appliance Cisco Expressway CE500 Appliance Installation Guide First Published: April 2014 Last Updated: November 2015 X8.2 or later Cisco Systems, Inc. www.cisco.com Introduction About This Document This document

More information

System Area Manager. Remote Management

System Area Manager. Remote Management System Area Manager Remote Management Remote Management System Area Manager provides remote management functions for its managed systems, including Wake on LAN, Shutdown, Restart, Remote Console and for

More information

Disaster Recovery Cookbook Guide Using VMWARE VI3, StoreVault and Sun. (Or how to do Disaster Recovery / Site Replication for under $50,000)

Disaster Recovery Cookbook Guide Using VMWARE VI3, StoreVault and Sun. (Or how to do Disaster Recovery / Site Replication for under $50,000) Disaster Recovery Cookbook Guide Using VMWARE VI3, StoreVault and Sun. (Or how to do Disaster Recovery / Site Replication for under $50,000) By Scott Sherman, VCP, NACE, RHCT Systems Engineer Integrated

More information

Introduction to MPIO, MCS, Trunking, and LACP

Introduction to MPIO, MCS, Trunking, and LACP Introduction to MPIO, MCS, Trunking, and LACP Sam Lee Version 1.0 (JAN, 2010) - 1 - QSAN Technology, Inc. http://www.qsantechnology.com White Paper# QWP201002-P210C lntroduction Many users confuse the

More information

SonicOS Enhanced 5.7.0.2 Release Notes

SonicOS Enhanced 5.7.0.2 Release Notes SonicOS Contents Platform Compatibility... 1 Key Features... 2 Known Issues... 3 Resolved Issues... 4 Upgrading SonicOS Enhanced Image Procedures... 6 Related Technical Documentation... 11 Platform Compatibility

More information

istorage Server: High-Availability iscsi SAN for Windows Server 2008 & Hyper-V Clustering

istorage Server: High-Availability iscsi SAN for Windows Server 2008 & Hyper-V Clustering istorage Server: High-Availability iscsi SAN for Windows Server 2008 & Hyper-V Clustering Tuesday, Feb 21 st, 2012 KernSafe Technologies, Inc. www.kernsafe.com Copyright KernSafe Technologies 2006-2012.

More information

Manual OS Installation

Manual OS Installation CA92276-8158 EN-09 ECONEL 100 S2 / TX120 / TX150 S6 / TX300 S4 / RX100 S5 / RX200 S4 / RX300 S4 / RX600 S4 Manual OS Installation Before Reading This Manual Before Reading This Manual Remarks Symbols Symbols

More information

v1 System Requirements 7/11/07

v1 System Requirements 7/11/07 v1 System Requirements 7/11/07 Core System Core-001: Windows Home Server must not exceed specified sound pressure level Overall Sound Pressure level (noise emissions) must not exceed 33 db (A) SPL at ambient

More information

1-bay NAS User Guide

1-bay NAS User Guide 1-bay NAS User Guide INDEX Index... 1 Log in... 2 Basic - Quick Setup... 3 Wizard... 3 Add User... 6 Add Group... 7 Add Share... 9 Control Panel... 11 Control Panel - User and groups... 12 Group Management...

More information

1 Modular System Dual SCM MPIO Software Installation

1 Modular System Dual SCM MPIO Software Installation 1 Modular System Dual SCM MPIO Software Installation This document will help you to connect a MAXDATA dual controller SAS disk array with redundant dual connection to both storage controller modules (SCM)

More information

EMC ViPR Controller. User Interface Virtual Data Center Configuration Guide. Version 2.4 302-002-416 REV 01

EMC ViPR Controller. User Interface Virtual Data Center Configuration Guide. Version 2.4 302-002-416 REV 01 EMC ViPR Controller Version 2.4 User Interface Virtual Data Center Configuration Guide 302-002-416 REV 01 Copyright 2014-2015 EMC Corporation. All rights reserved. Published in USA. Published November,

More information

Quick Start Guide. RV 120W Wireless-N VPN Firewall. Cisco Small Business

Quick Start Guide. RV 120W Wireless-N VPN Firewall. Cisco Small Business Quick Start Guide Cisco Small Business RV 120W Wireless-N VPN Firewall Package Contents Wireless-N VPN Firewall Ethernet Cable Power Adapter Quick Start Guide Documentation and Software on CD-ROM Welcome

More information

Honeywell Internet Connection Module

Honeywell Internet Connection Module Honeywell Internet Connection Module Setup Guide Version 1.0 - Page 1 of 18 - ICM Setup Guide Technical Support Setup - Guide Table of Contents Introduction... 3 Network Setup and Configuration... 4 Setting

More information

PCI Express Impact on Storage Architectures and Future Data Centers. Ron Emerick, Oracle Corporation

PCI Express Impact on Storage Architectures and Future Data Centers. Ron Emerick, Oracle Corporation PCI Express Impact on Storage Architectures and Future Data Centers Ron Emerick, Oracle Corporation SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies

More information

AP6511 First Time Configuration Procedure

AP6511 First Time Configuration Procedure AP6511 First Time Configuration Procedure Recommended Minimum Configuration Steps From the factory, all of the 6511 AP s should be configured with a shadow IP that starts with 169.254.xxx.xxx with the

More information

2-Bay Raid Sub-System Smart Removable 3.5" SATA Multiple Bay Data Storage Device User's Manual

2-Bay Raid Sub-System Smart Removable 3.5 SATA Multiple Bay Data Storage Device User's Manual 2-Bay Raid Sub-System Smart Removable 3.5" SATA Multiple Bay Data Storage Device User's Manual www.vipower.com Table of Contents 1. How the SteelVine (VPMP-75211R/VPMA-75211R) Operates... 1 1-1 SteelVine

More information