HUAWEI SAN Storage Host Connectivity Guide for VMware
Technical White Paper
HUAWEI SAN Storage Host Connectivity Guide for VMware
OceanStor Storage / VMware
Huawei Technologies Co., Ltd.
2 Copyright Huawei Technologies Co., Ltd All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd. Trademarks and Permissions and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd. All other trademarks and trade names mentioned in this document are the property of their respective holders. Notice The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information, and recommendations in this document are provided "AS IS" without warranties, guarantees or representations of any kind, either express or implied. The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied. Huawei Technologies Co., Ltd. Address: Website: Huawei Industrial Base Bantian, Longgang Shenzhen People's Republic of China i
About This Document
Overview
This document details the configuration methods and precautions for connecting Huawei SAN storage devices to VMware hosts.
Intended Audience
This document is intended for:
Huawei technical support engineers
Technical engineers of Huawei's partners
Conventions
Symbol Conventions
The symbols that may be found in this document are defined as follows:
Indicates a hazard with a high level of risk, which if not avoided, will result in death or serious injury.
Indicates a hazard with a medium or low level of risk, which if not avoided, could result in minor or moderate injury.
Indicates a potentially hazardous situation, which if not avoided, could result in equipment damage, data loss, performance degradation, or unexpected results.
Indicates a tip that may help you solve a problem or save time.
Provides additional information to emphasize or supplement important points of the main text.
General Conventions
Times New Roman: Normal paragraphs are in Times New Roman.
Boldface: Names of files, directories, folders, and users are in boldface. For example, log in as user root.
Italic: Book titles are in italics.
Courier New: Examples of information displayed on the screen are in Courier New.
Command Conventions
Boldface: The keywords of a command line are in boldface.
Italic: Command arguments are in italics.
Contents
About This Document
1 VMware
  1.1 VMware Infrastructure
  1.2 File Systems in VMware
  1.3 VMware RDM
  1.4 VMware Cluster
  1.5 VMware vMotion
  1.6 VMware DRS
  1.7 VMware FT and VMware HA
  1.8 Specifications
2 Network Planning
  2.1 Fibre Channel Network Diagram
    Multi-Path Direct-Connection Network
    Multi-Path Switch-based Network
  2.2 iSCSI Network Diagram
    Multi-Path Direct-Connection Network
    Multi-Path Switch-based Network
3 Preparations Before Configuration (on a Host)
  3.1 HBA Identification
  3.2 HBA Information
    Versions Earlier than ESXi 5.5
    ESXi 5.5
4 Preparations Before Configuration (on a Storage System)
5 Configuring Switches
  5.1 Fibre Channel Switch
    Querying the Switch Model and Version
    Configuring Zones
    Precautions
  5.2 Ethernet Switch
    Configuring VLANs
    Binding Ports
6 Establishing Fibre Channel Connections
  6.1 Checking Topology Modes
    OceanStor T Series Storage System
    OceanStor Series Enterprise Storage System
  6.2 Adding Initiators
  6.3 Establishing Connections
7 Establishing iSCSI Connections
  7.1 Host Configurations
    Configuring Service IP Addresses
    Configuring Host Initiators
    Configuring CHAP Authentication
  7.2 Storage System
    OceanStor T Series Storage System
    OceanStor Series Enterprise Storage System
8 Mapping and Using LUNs
  Mapping LUNs to a Host
    OceanStor T Series Storage System
    OceanStor Series Enterprise Storage System
  Scanning for LUNs on a Host
  Using the Mapped LUNs
    Mapping Raw Devices
    Creating Datastores (File Systems)
    Mapping Virtual Disks
    Differences Between Raw Disks and Virtual Disks
9 Multipathing Management
  Overview
    VMware PSA Overview
    VMware NMP
    VMware PSP
  Software Functions and Features
    Multipathing Selection Policy
    Policies and Differences
    PSPs in Different ESX Versions
    VMware SATPs
  Policy Configuration
    OceanStor T Series Storage System
    OceanStor Series Enterprise Storage System
    LUN Failure Policy
    Path Policy Query and Modification
      ESX/ESXi 4.0
      ESX/ESXi 4.1
      ESXi 5.0
      ESXi 5.1
      ESXi 5.5
  Differences Between iSCSI Multi-Path Networks with Single and Multiple HBAs
    iSCSI Multi-Path Network with a Single HBA
    iSCSI Multi-Path Network with Multiple HBAs
  Common Commands
10 Host High-Availability
  Overview
  Working Principle and Functions
  Relationship Among VMware HA, DRS, and vMotion
  Installation and Configuration
11 Log Collection
Acronyms and Abbreviations
Figures
Figure 1-1 VMware Infrastructure virtual data center
Figure 1-2 Storage architecture in VMware Infrastructure
Figure 1-3 VMFS architecture
Figure 1-4 Structure of a VMFS volume
Figure 1-5 RDM mechanism
Figure 2-2 Fibre Channel multi-path direct-connection network diagram (dual-controller)
Figure 2-3 Fibre Channel multi-path direct-connection network diagram (four-controller)
Figure 2-4 Fibre Channel multi-path switch-based network diagram (dual-controller)
Figure 2-5 Fibre Channel multi-path switch-based network diagram (four-controller)
Figure 2-6 iSCSI multi-path direct-connection network diagram (dual-controller)
Figure 2-7 iSCSI multi-path direct-connection network diagram (four-controller)
Figure 2-8 iSCSI multi-path switch-based network diagram (dual-controller)
Figure 2-9 iSCSI multi-path switch-based network diagram (four-controller)
Figure 3-1 Viewing the HBA information
Figure 5-1 Switch information
Figure 5-2 Switch port indicator status
Figure 5-3 Zone tab page
Figure 5-4 Zone configuration
Figure 5-5 Zone Config tab page
Figure 5-6 Name Server page
Figure 6-1 Fibre Channel port details
Figure 6-2 Fibre Channel port details
Figure 7-1 Adding VMkernel
Figure 7-2 Creating a vSphere standard switch
Figure 7-3 Specifying the network label
Figure 7-4 Entering the iSCSI service IP address
Figure 7-5 Information summary
Figure 7-6 iSCSI multi-path network with dual adapters
Figure 7-7 Adding storage adapters
Figure 7-8 Adding iSCSI initiators
Figure 7-9 iSCSI Software Adapter
Figure 7-10 Initiator properties
Figure 7-11 iSCSI initiator properties
Figure 7-12 Binding with a new VMkernel network adapter
Figure 7-13 Initiator properties after virtual network binding
Figure 7-14 Adding send target server
Figure 7-15 General tab page
Figure 7-16 CHAP credentials dialog box
Figure 7-17 Modifying IPv4 addresses
Figure 7-18 Initiator CHAP configuration
Figure 7-19 CHAP Configuration dialog box
Figure 7-20 Create CHAP dialog box
Figure 7-21 Assigning the CHAP account to the initiator
Figure 7-22 Setting CHAP status
Figure 7-23 Enabling CHAP
Figure 7-24 Initiator status after CHAP is enabled
Figure 8-1 Scanning for the mapped LUNs
Figure 8-2 Editing host settings
Figure 8-3 Adding disks
Figure 8-4 Selecting disks
Figure 8-5 Selecting a target LUN
Figure 8-6 Selecting a datastore
Figure 8-7 Selecting a compatibility mode
Figure 8-8 Selecting a virtual device node
Figure 8-9 Confirming the information about the disk to be added
Figure 8-10 Adding raw disk mappings
Figure 8-11 Adding storage
Figure 8-12 Selecting a storage type
Figure 8-13 Select a disk/LUN
Figure 8-14 Selecting a file system version
Figure 8-15 Viewing the current disk layout
Figure 8-16 Entering a datastore name
Figure 8-17 Specifying a capacity
Figure 8-18 Confirming the disk layout
Figure 8-19 Editing VM settings
Figure 8-20 Adding disks
Figure 8-21 Creating a new virtual disk
Figure 8-22 Specifying the disk capacity
Figure 8-23 Selecting a datastore
Figure 8-24 Selecting a virtual device node
Figure 8-25 Viewing virtual disk information
Figure 8-26 Modifying the capacity of a virtual disk
Figure 8-27 Modifying the capacity of a disk added using raw disk mappings
Figure 9-1 VMware PSA
Figure 9-2 VMkernel architecture
Figure 9-3 Menu of the storage adapter
Figure 9-4 Shortcut menu of a LUN
Figure 9-5 Configuring the management path
Figure 9-6 iSCSI network with a single HBA
Figure 9-7 iSCSI network A with multiple HBAs
Figure 9-8 Port mapping of iSCSI network A with multiple HBAs
Figure 9-9 iSCSI network B with multiple HBAs
Figure 9-10 Port mapping of iSCSI network B with multiple HBAs
Figure 10-1 Selecting a VMware version
Figure 11-1 Host logs
Tables
Table 1-1 Major specifications of VMware
Table 2-1 Networking modes
Table 5-1 Mapping between switch types and names
Table 5-2 Comparison of link aggregation modes
Table 9-1 Path selection policies
Table 9-2 Default policies for the ESX/ESXi 4.0 operating system
Table 9-3 Recommended policies for the ESX/ESXi 4.0 operating system
Table 9-4 Default policies for the ESX/ESXi 4.1 operating system
Table 9-5 Recommended policies for the ESX/ESXi 4.1 operating system
Table 9-6 Default policies for the ESXi 5.0 operating system
Table 9-7 Recommended policies for the ESXi 5.0 operating system
Table 9-8 Default policies for the ESXi 5.1 operating system
Table 9-9 Recommended policies for the ESXi 5.1 operating system
Table 9-10 Default policies for the ESXi 5.5 operating system
Table 9-11 Recommended policies for the ESXi 5.5 operating system
Table 9-12 Recommended policies
1 VMware
1.1 VMware Infrastructure
Today's x86 computers are designed to run a single operating system or application program, so most of them are underutilized. With virtualization technologies, a physical machine can host multiple virtual machines (VMs), and resources on this physical machine can be shared among multiple environments. A physical machine can host multiple VMs running different operating systems and applications, improving x86 hardware utilization. VMware virtualization adds a thin software layer on the computer hardware or in the host operating system. This software layer includes a VM monitor that allocates hardware resources dynamically and transparently, so each operating system or application can access the resources it needs whenever required. As an outstanding software solution for x86 virtualization, VMware enables users to manage their virtual environments effectively and easily. Figure 1-1 shows the VMware Infrastructure virtual data center, which consists of x86 computing servers, storage networks, storage arrays, IP networks, management servers, and desktop clients.
13 1 VMware Figure 1-1 VMware Infrastructure virtual data center Figure 1-2 provides an example of storage architecture in VMware Infrastructure. A Virtual Machine File System (VMFS) volume contains one or more LUNs belonging to different storage arrays. Multiple ESX servers share one VMFS volume and create virtual disks on the VMFS volume for VMs. 2
Figure 1-2 Storage architecture in VMware Infrastructure
VMware uses VMFS to centrally manage storage systems. VMFS is a shared cluster file system designed for VMs. It employs distributed locking to coordinate access to disks, ensuring that a VM is accessed by only one physical host at a time. Raw Device Mapping (RDM) acts as the agent for raw devices on a VMFS volume.
1.2 File Systems in VMware
Features of VMFS
VMware VMFS is a high-performance cluster file system that allows multiple systems to concurrently access shared storage, laying a solid foundation for the management of VMware clusters and dynamic resources. Its main features are:
Automated maintenance of the directory structure
File lock mechanism
Distributed logical volume management
Dynamic capacity expansion
Cluster file system
Journal logging
Optimized VM data storage
Advantages of VMFS
Improved storage utilization
Simplified storage management
ESX server clusters with enhanced performance and reliability
Architecture of VMFS
In the VMFS architecture shown in Figure 1-3, a LUN is formatted into a VMFS file system whose storage space is shared by three ESX servers, each carrying two VMs. Each VM has a Virtual Machine Disk (VMDK) file that is stored in a directory (named after the VM) automatically generated by VMFS. VMFS adds a lock for each VMDK to prevent a VMDK from being accessed by two VMs at the same time.
Figure 1-3 VMFS architecture
Structure of a VMFS Volume
Figure 1-4 shows the structure of a VMFS volume. A VMFS volume consists of one or more partitions that are arranged linearly: only after the first partition is used up can the following partitions be used. The identity information about the VMFS volume is recorded in the first partition.
16 1 VMware Figure 1-4 Structure of a VMFS volume VMFS divides each extent into multiple blocks, each of which is then divided into smaller blocks. This block-based management is typically suitable for VMs. Files stored on VMs can be categorized as large files (such as VMDK files, snapshots, and memory swap files) and small files (such as log files, configuration files, and VM BIOS files). Large and small blocks are allocated to large and small files respectively. In this way, storage space is effectively utilized and the number of fragments in the file system is minimized, improving the storage performance of VMs. The VMFS-3 file system supports four data block sizes: 1 MB, 2 MB, 4 MB, and 8 MB. Sizes of files and volumes supported by VMFS-3 file systems vary with a file system's block size. 1.3 VMware RDM VMware RDM enables VMs to directly access storage. As shown in Figure 1-5, an RDM disk exists as an address mapping file on the VMFS volume. This mapping file can be considered as a symbolic link that maps a VM's access to an RDM disk to LUNs. 5
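A VMFS datastore and an RDM mapping file can both be created from the ESXi shell with the vmkfstools utility, which may be a useful reference alongside the vSphere Client procedures in this guide. The following is a minimal, hedged sketch; the device identifier (naa.xxxx...), partition number, datastore name, and VM directory path are example placeholders, not values from this document:
~ # vmkfstools -C vmfs3 -b 8m -S datastore1 /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1
~ # vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/VM1/VM1_rdm.vmdk
~ # vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/VM1/VM1_rdm.vmdk
The -C command formats the LUN as a VMFS-3 datastore with an 8 MB block size, -z creates an RDM mapping file in physical compatibility mode (pass-through), and -r creates one in virtual compatibility mode, which supports snapshots.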
Figure 1-5 RDM mechanism
RDM provides two compatibility modes, both of which support vMotion, Distributed Resource Scheduler (DRS), and High Availability (HA).
Virtual compatibility: fully simulates VMDK files and supports snapshots.
Physical compatibility: directly accesses SCSI devices and does not support snapshots.
RDMs are applicable in the following scenarios:
Physical to Virtual (P2V): migrates services from a physical machine to a virtual machine.
Virtual to Physical (V2P): migrates services from a virtual machine to a physical machine.
Clustering physical machines and virtual machines.
1.4 VMware Cluster
A VMware cluster consists of a group of ESX servers that jointly manage VMs, dynamically assign hardware resources, and automatically allocate VMs. With VMware Cluster, loads on VMs can be dynamically transferred among ESX hosts. VMware Cluster is the foundation for Fault Tolerance (FT), High Availability (HA), and Distributed Resource Scheduler (DRS).
1.5 VMware vMotion
VMware vMotion can migrate running VMs to facilitate the maintenance of physical machines. VMs can be automatically migrated within a VMware cluster. Freely migrating VMs balances loads among physical machines, improving application performance. VMware vMotion has demanding requirements on the CPU compatibility of physical hosts: VMs can only be migrated among physical machines that run CPUs of the same series.
1.6 VMware DRS
VMware DRS constantly monitors the usage of resource pools in a VMware cluster, and intelligently allocates resources to VMs based on service requirements. Deploying VMs onto
a small number of physical machines may result in unexpected resource bottlenecks, and resources required by VMs may exceed those available on the physical machines. VMware DRS offers an automated mechanism to constantly balance capacities and migrate VMs onto physical machines with sufficient resources. As a result, each VM can obtain resources in a timely manner regardless of its location.
1.7 VMware FT and VMware HA
VMware FT and VMware HA provide effective failover for physical hardware and VM operating systems. VMware FT and VMware HA can:
Monitor VM status to detect faults on physical hardware and operating systems.
Automatically restart VMs on another physical machine when the physical machine where the VMs reside becomes faulty.
Restart VMs to protect applications upon operating system faults.
1.8 Specifications
VMware specifications vary with VMware versions. Table 1-1 lists the major specification items; the per-version maximum values are given in the VMware vSphere Configuration Maximums documents listed at the end of this section.
Table 1-1 Major specifications of VMware
iSCSI (physical): LUNs per server, paths to a LUN, total number of paths on a server
Fibre Channel: LUNs per host, LUN size (512 B to 2 TB; 64 TB in later versions), LUN ID, paths to a LUN, total number of paths on a server, HBAs of any type, HBA ports, targets per HBA
FCoE: software FCoE adapters
NAS: NFS mounts per host; the default number of NFS datastores can be increased to 64 by changing advanced settings
VMFS: raw device mapping (RDM) size (512 B to 2 TB; 64 TB in later versions), volume size (up to 64 TB), volumes per host
VMFS-2: files per volume (64 x additional extents), block size (up to 256 MB)
VMFS-3: volumes configured per host, files per volume (approximately 30,720), block size (up to 8 MB), volume size (up to 64 TB)
VMFS-5: volume size (up to 64 TB), files per volume
1. Local disks are included.
2. The file quantity is sufficient to support the maximum number of VMs.
3. If the block size supported by the file system is 1 MB, the maximum volume size is 50 TB.
4. The volume size is also subject to RAID controllers or adapter drivers.
The table lists only part of the specifications. For more information, see:
VMware vSphere Configuration Maximums (4.0)
VMware vSphere Configuration Maximums (4.1)
VMware vSphere Configuration Maximums (5.0)
VMware vSphere Configuration Maximums (5.1)
VMware vSphere Configuration Maximums (5.5)
2 Network Planning
VMware hosts and storage systems can be networked based on different criteria. Table 2-1 describes the typical networking modes.
Table 2-1 Networking modes
Interface module type: Fibre Channel network/iSCSI network
Whether switches are used: direct-connection network (no switches are used)/switch-based network (switches are used)
Whether multiple paths exist: single-path network/multi-path network
The Fibre Channel network is the most widely used network for VMware operating systems. To ensure service data security, both the direct-connection and switch-based networks described below are multi-path networks. The following details commonly used Fibre Channel and iSCSI networks.
2.1 Fibre Channel Network Diagram
Multi-Path Direct-Connection Network
Dual-Controller
Huawei provides dual-controller and multi-controller storage systems, whose network diagrams differ. The following describes the network diagrams of dual-controller and multi-controller storage systems respectively. The following uses HUAWEI OceanStor S5500T as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path direct-connection network, as shown in Figure 2-1.
22 2 Network Planning Figure 2-1 Fibre Channel multi-path direct-connection network diagram (dual-controller) Multi-Controller On this network, both controllers of the storage system are connected to the host's HBAs through optical fibers. The following uses HUAWEI OceanStor (four-controller) as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path direct-connection network, as shown in Figure 2-2. Figure 2-2 Fibre Channel multi-path direct-connection network diagram (four-controller) On this network, the four controllers of the storage system are connected to the host's HBAs through optical fibers Multi-Path Switch-based Network Dual-Controller Huawei provides dual-controller and multi-controller storage systems, whose network diagrams differ. The following describes network diagrams of dual-controller and multi-controller storage systems respectively. The following uses HUAWEI OceanStor S5500T as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path switch-based network, as shown in Figure
23 2 Network Planning Figure 2-3 Fibre Channel multi-path switch-based network diagram (dual-controller) Multi-Controller On this network, the storage system is connected to the host via two switches. Both controllers of the storage system are connected to the switches through optical fibers and both switches are connected to the host through optical fibers. To ensure the connectivity between the host and the storage system, each zone contains only one storage port and its corresponding host port. The following uses HUAWEI OceanStor (four-controller) as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path switch-based network, as shown in Figure 2-4. Figure 2-4 Fibre Channel multi-path switch-based network diagram (four-controller) 12
24 2 Network Planning On this network, the storage system is connected to the host via two switches. All controllers of the storage system are connected to the switches through optical fibers and both switches are connected to the host through optical fibers. To ensure the connectivity between the host and the storage system, each zone contains only one storage port and its corresponding host port. 2.2 iscsi Network Diagram Multi-Path Direct-Connection Network Dual-Controller Huawei provides dual-controller and multi-controller storage systems, whose network diagrams differ. The following describes network diagrams of dual-controller and multi-controller storage systems respectively. The following uses HUAWEI OceanStor S5500T as an example to explain how to connect a VMware host to a storage system over an iscsi multi-path direct-connection network, as shown in Figure 2-5. Figure 2-5 iscsi multi-path direct-connection network diagram (dual-controller) Multi-Controller On this network, both controllers of the storage system are connected to the host's network adapter through Ethernet cables. The following uses HUAWEI OceanStor (four-controller) as an example to explain how to connect a VMware host to a storage system over an iscsi multi-path direct-connection network, as shown in Figure
25 2 Network Planning Figure 2-6 iscsi multi-path direct-connection network diagram (four-controller) On this network, the four controllers of the storage system are connected to the host's network adapter through Ethernet cables Multi-Path Switch-based Network Dual-Controller Huawei provides dual-controller and multi-controller storage systems, whose network diagrams differ. The following describes network diagrams of dual-controller and multi-controller storage systems respectively. The following uses HUAWEI OceanStor S5500T as an example to explain how to connect a VMware host to a storage system over an iscsi multi-path switch-based network, as shown in Figure 2-7. Figure 2-7 iscsi multi-path switch-based network diagram (dual-controller) 14
26 2 Network Planning On this network, the storage system is connected to the host via two Ethernet switches. Both controllers of the storage system are connected to the switches through Ethernet cables and both switches are connected to the host's network adapter through Ethernet cables. To ensure the connectivity between the host and the storage system, each VLAN contains only one storage port and its corresponding host port. Multi-Controller The following uses HUAWEI OceanStor (four-controller) as an example to explain how to connect a VMware host to a storage system over an iscsi multi-path switch-based network, as shown in Figure 2-8. Figure 2-8 iscsi multi-path switch-based network diagram (four-controller) On this network, the storage system is connected to the host via two Ethernet switches. All controllers of the storage system are connected to the switches through Ethernet cables and both switches are connected to the host's network adapter through Ethernet cables. To ensure the connectivity between the host and the storage system, each VLAN contains only one storage port and its corresponding host port. 15
27 3 Preparations Before Configuration (on a Host) 3 Preparations Before Configuration (on a Host) Before connecting a host to a storage system, make sure that the host HBAs are identified and working correctly. You also need to obtain the WWNs of HBA ports. The WWNs will be used in subsequent configuration on the storage system. This chapter details how to check the HBA status and query WWNs of HBA ports. 3.1 HBA Identification After an HBA is installed on a host, view information about the HBA on the host. Go to the page for configuration management and choose Storage Adapters in the navigation tree. In the function pane, hardware devices on the host are displayed, as shown in Figure 3-1. Figure 3-1 Viewing the HBA information 16
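If vSphere Client is not available, the installed HBAs can also be listed from the ESXi shell; the esxcli command below (ESXi 5.x) prints each adapter with its driver, link state, and UID, and for Fibre Channel adapters the UID contains the WWNN and WWPN needed for the storage-side configuration:
~ # esxcli storage core adapter list
Sample output of this command is shown in the ESXi 5.5 part of section 3.2.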
28 3 Preparations Before Configuration (on a Host) 3.2 HBA Information After a host identifies a newly installed HBA, you can view properties of the HBA on the host. The method of querying HBA information varies with operating system versions. The following details how to query HBA information on ESXi 5.5 and versions earlier than ESXi Versions Earlier than ESXi 5.5 The command for viewing the HBA properties varies according to the HBA type. The details are as follows: QLogic HBA The command syntax is as follows: cat /proc/scsi/qla2xxx/n The following is an example: ~ # cat /proc/scsi/qla2xxx/4 QLogic PCI to Fibre Channel Host Adapter for QMI2572: FC Firmware version (d5), Driver version 901.k1.1-14vmw Host Device Name vmhba1 BIOS version 2.09 FCODE version 3.14 EFI version 2.27 Flash FW version ISP: ISP2532 Request Queue = 0x , Response Queue = 0x Request Queue count = 2048, Response Queue count = 512 Number of response queues for multi-queue operation: 0 Total number of interrupts = Device queue depth = 0x40 Number of free request entries = 675 Total number of outstanding commands: 0 Number of mailbox timeouts = 0 Number of ISP aborts = 0 Number of loop resyncs = 1 Host adapter:loop State = <READY>, flags = 0x1a268 Link speed = <4 Gbps> Dpc flags = 0x0 Link down Timeout = 030 Port down retry = 005 Login retry count = 008 Execution throttle = 2048 ZIO mode = 0x6, ZIO timer = 1 Commands retried with dropped frame(s) = 0 Product ID = NPIV Supported : Yes Max Virtual Ports = 254 SCSI Device Information: scsi-qla0-adapter-node= ff32f612:010300:0; scsi-qla0-adapter-port= ff32f612:010300:0; 17
29 3 Preparations Before Configuration (on a Host) FC Target-Port List: scsi-qla0-target-0= a10bc8ee:010f00:81:online; The previous output provides information such as the HBA driver version, topology, WWN, and negotiated rate. Emulex HBA The command syntax is as follows: cat /proc/scsi/lpfcxxx/n The following is an example: ~ # cat /proc/scsi/lpfc820/8 Emulex LightPulse FC SCSI vmw IBM 42D0494 8Gb 2-Port PCIe FC HBA for System x on PCI bus 0000:81 device 01 irq 65 port 1 BoardNum: 1 ESX Adapter: vmhba4 Firmware Version: 1.11A5 (US1.11A5) Portname: 10:00:00:00:c9:d4:82:83 Nodename: 20:00:00:00:c9:d4:82:83 SLI Rev: 3 0 MQ: Unavailable NPIV Supported: VPIs max 255 VPIs used 1 RPIs max 4096 RPIs used 9 IOCBs inuse 0 IOCB max 8 txq cnt 0 txq max 0 txcmplq Vport List: ESX Adapter: vmhba37 Vport DID 0x30101, vpi 1, state 0x20 Portname: 20:aa:00:0c:29:00:07:1a Nodename: 20:aa:00:0c:29:00:06:1a Link Up - Ready: PortID 0x30100 Fabric Current speed 8G Port Discovered Nodes: Count 1 t0000 DID WWPN 20:0a:30:30:37:30:30:37 WWNN 21:00:30:30:37:30:30:37 qdepth 8192 max 31 active 1 busy 0 ~ # The previous output provides information such as HBA model and driver. Brocade HBA cat /proc/scsi/bfaxxx/n ESXi 5.5 Since ESXi 5.5, the /proc/scsi/ directory contains no content. Run the following commands to query HBA information: 18
30 3 Preparations Before Configuration (on a Host) ~ # esxcli storage core adapter list HBA Name Driver Link State UID Description vmhba0 ahci link-n/a sata.vmhba0 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller vmhba1 megaraid_sas link-n/a unknown.vmhba1 (0:3:0.0) LSI / Symbios Logic MegaRAID SAS Fusion Controller vmhba2 rste link-n/a pscsi.vmhba2 (0:4:0.0) Intel Corporation Patsburg 4-Port SATA Storage Control Unit vmhba3 qlnativefc link-up fc d222d: d222c (0:129:0.0) QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI Express HBA vmhba4 qlnativefc link-up fc d222f: d222e (0:129:0.1) QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI Express HBA vmhba32 ahci link-n/a sata.vmhba32 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller vmhba33 ahci link-n/a sata.vmhba33 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller vmhba34 ahci link-n/a sata.vmhba34 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller vmhba35 ahci link-n/a sata.vmhba35 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller vmhba36 ahci link-n/a sata.vmhba36 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller ~ # ~ # ~ # esxcfg-module -i qlnativefc esxcfg-module module information input file: /usr/lib/vmware/vmkmod/qlnativefc License: GPLv2 Version: vmw Name-space: Required name-spaces: com.vmware.vmkapi@v2_2_0_0 Parameters: ql2xallocfwdump: int Option to enable allocation of memory for a firmware dump during HBA initialization. Memory allocation requirements vary by ISP type. Default is 1 - allocate memory. ql2xattemptdumponpanic: int Attempt fw dump for each function on PSOD Default is 0 - Don't attempt fw dump. ql2xbypass_log_throttle: int Option to bypass log throttling.default is 0 - Throttling enabled. 1 - Log all errors. ql2xcmdtimeout: int Timeout value in seconds for scsi command, default is 20 ql2xcmdtimermin: int Minimum command timeout value. Default is 30 seconds. ql2xdevdiscgoldfw: int Option to enable device discovery with golden firmware Default is 0 - no discovery. 1 - discover device. ql2xdisablenpiv: int Option to disable/enable NPIV feature globally. 0 - NPIV enabled. ql2xenablemsi2422: int 1 - NPIV disabled. Default is Enables MSI interrupt scheme on 2422sDefault is 0 - disable MSI-X/MSI. 1 - enable MSI-X/MSI. 19
31 3 Preparations Before Configuration (on a Host) ql2xenablemsi24xx: int Enables MSIx/MSI interrupt scheme on 24xx cardsdefault is 1 - enable MSI-X/MSI. 0 - disable MSI-X/MSI. ql2xenablemsix: int Set to enable MSI or MSI-X interrupt mechanism. 0 = enable traditional pin-based interrupt mechanism. 1 = enable MSI-X interrupt mechanism (Default). 2 = enable MSI interrupt mechanism. ql2xexecution_throttle: int IOCB exchange count for HBA.Default is 0, set intended value to override Firmware defaults. ql2xextended_error_logging: int Option to enable extended error logging, Default is 0 - no logging. 1 - log errors. ql2xfdmienable: int Enables FDMI registratons Default is 1 - perfom FDMI. 0 - no FDMI. ql2xfwloadbin: int Option to specify location from which to load ISP firmware: 2 -- load firmware via the request_firmware() (hotplug) interface load firmware from flash use default semantics. ql2xiidmaenable: int Enables iidma settings Default is 1 - perform iidma. 0 - no iidma. ql2xintrdelaytimer: int ZIO: Waiting time for Firmware before it generates an interrupt to the host to notify completion of request. ql2xioctltimeout: int IOCTL timeout value in seconds for pass-thur commands. Default is 66 seconds. ql2xioctltimertest: int IOCTL timer test enable - set to enable ioctlcommand timeout value to trigger before fw cmdtimeout value. Default is disabled ql2xloginretrycount: int Specify an alternate value for the NVRAM login retry count. ql2xlogintimeout: int Login timeout value in seconds. ql2xmaxlun: int Defines the maximum LUNs to register with the SCSI midlayer. Default is 256. Maximum allowed is ql2xmaxqdepth: int Maximum queue depth to report for target devices. ql2xmaxsgs: int Maximum scatter/gather entries per request,default is the Max the OS Supports. ql2xmqcpuaffinity: int Enables CPU affinity settings for the driver Default is 0 for no affinity of request and response IO. Set it to 1 to turn on the cpu affinity. ql2xmqqos: int Enables MQ settings Default is 1. Set it to enable queues in MQ QoS mode. ql2xoperationmode: int Option to disable ZIO mode for ISP24XX: Default is 1, set 0 to disable ql2xplogiabsentdevice: int Option to enable PLOGI to devices that are not present after a Fabric scan. This is needed for several broken switches. Default is 0 - no PLOGI. 1 - perfom PLOGI. ql2xqfullrampup: int Number of seconds to wait to begin to ramp-up the queue depth for a device after a queue-full condition has been detected. Default is 120 seconds. ql2xqfulltracking: int 20
32 3 Preparations Before Configuration (on a Host) Controls whether the driver tracks queue full status returns and dynamically adjusts a scsi device's queue depth. Default is 1, perform tracking. Set to 0 to disable dynamic tracking and adjustment of queue depth. ql2xusedefmaxrdreq: int Default is 0 - adjust PCIe Maximum Read Request Size. 1 - use system default. qlport_down_retry: int Maximum number of command retries to a port that returns a PORT-DOWN status. ~ # The previous output provides information such as HBA model, WWN, and driver. You can run the following command to obtain more HBA details: # /usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -a Listing all system keys: Key Value Instance: QLNATIVEFC/qlogic Listing keys: Name: 0 Type: string value: QLogic PCI to Fibre Channel Host Adapter for HPAJ764A: FC Firmware version (90d5), Driver version Host Device Name vmhba3 BIOS version 2.12 FCODE version 2.03 EFI version 2.05 Flash FW version ISP: ISP2532, Serial# MY T MSI-X enabled Request Queue = 0x41094e3b6000, Response Queue = 0x41094e3d7000 Request Queue count = 2048, Response Queue count = 512 Number of response queues for multi-queue operation: 2 CPU Affinity mode enabled Total number of MSI-X interrupts on vector 0 (handler = ff40) = 371 Total number of MSI-X interrupts on vector 1 (handler = ff41) = 29 Total number of MSI-X interrupts on vector 2 (handler = ff42) = 2173 Total number of MSI-X interrupts on vector 3 (handler = ff43) = 6916 Device queue depth = 0x40 Number of free request entries = 238 Total number of outstanding commands: 0 Number of mailbox timeouts = 0 Number of ISP aborts = 0 Number of loop resyncs = 14 Host adapter:loop State = [DEAD], flags = 0x205a260 Link speed = [Unknown] Dpc flags = 0x0 Link down Timeout = 008 Port down retry = 010 Login retry count = 010 Execution throttle = 2048 ZIO mode = 0x6, ZIO timer = 1 Commands retried with dropped frame(s) = 0 21
33 3 Preparations Before Configuration (on a Host) Product ID = NPIV Supported : Yes Max Virtual Ports = 254 Number of Virtual Ports in Use = 1 SCSI Device Information: scsi-qla0-adapter-node= d222d:000000:0; scsi-qla0-adapter-port= d222c:000000:0; FC Target-Port List: scsi-qla0-target-0=200a :030900:1000:[offline]; scsi-qla0-target-1=200b :030900:0:[offline]; Virtual Port Information: Virtual Port WWNN:WWPN:ID = c : c :000000; Virtual Port 1:VP State = [FAILED], Vp Flags = 0x0 Virtual Port 1:Request-Q ID = [2] FC Port Information for Virtual Port 1: scsi-qla3-port-0= :200a :030900:1000; scsi-qla3-port-1= :200b :030900:0; Name: 1 Type: string value: QLogic PCI to Fibre Channel Host Adapter for HPAJ764A: FC Firmware version (90d5), Driver version Host Device Name vmhba4 BIOS version 2.12 FCODE version 2.03 EFI version 2.05 Flash FW version ISP: ISP2532, Serial# MY T MSI-X enabled Request Queue = 0x41094e42b000, Response Queue = 0x41094e44c000 Request Queue count = 2048, Response Queue count = 512 Number of response queues for multi-queue operation: 2 CPU Affinity mode enabled Total number of MSI-X interrupts on vector 0 (handler = ff44) = 384 Total number of MSI-X interrupts on vector 1 (handler = ff45) = 39 Total number of MSI-X interrupts on vector 2 (handler = ff46) = Total number of MSI-X interrupts on vector 3 (handler = ff47) = Device queue depth = 0x40 Number of free request entries = 546 Total number of outstanding commands: 0 Number of mailbox timeouts = 0 Number of ISP aborts = 0 Number of loop resyncs = 14 Host adapter:loop State = [DEAD], flags = 0x204a260 Link speed = [Unknown] Dpc flags = 0x0 Link down Timeout =
34 3 Preparations Before Configuration (on a Host) Port down retry = 010 Login retry count = 010 Execution throttle = 2048 ZIO mode = 0x6, ZIO timer = 1 Commands retried with dropped frame(s) = 0 Product ID = NPIV Supported : Yes Max Virtual Ports = 254 Number of Virtual Ports in Use = 1 SCSI Device Information: scsi-qla1-adapter-node= d222f:000000:0; scsi-qla1-adapter-port= d222e:000000:0; FC Target-Port List: scsi-qla1-target-0= :030a00:1000:[offline]; scsi-qla1-target-1= :030a00:0:[offline]; Virtual Port Information: Virtual Port WWNN:WWPN:ID = c : c :000000; Virtual Port 1:VP State = [FAILED], Vp Flags = 0x0 Virtual Port 1:Request-Q ID = [2] FC Port Information for Virtual Port 1: scsi-qla2-port-0= : :030a00:1000; scsi-qla2-port-1= : :030a00:0; Name: 2 Type: string value: Instance not found on this system. Name: 15 Type: string value: Instance not found on this system. Name: DRIVERINFO Type: string value: Driver version Module Parameters ql2xlogintimeout = 20 qlport_down_retry = 10 ql2xplogiabsentdevice = 0 ql2xloginretrycount = 0 ql2xallocfwdump = 1 ql2xioctltimeout = 66 ql2xioctltimertest = 0 ql2xextended_error_logging = 0 ql2xdevdiscgoldfw = 0 ql2xattemptdumponpanic= 0 ql2xfdmienable = 1 23
35 3 Preparations Before Configuration (on a Host) ql2xmaxqdepth = 64 ql2xqfulltracking = 1 ql2xqfullrampup = 120 ql2xiidmaenable = 1 ql2xusedefmaxrdreq = 0 ql2xenablemsix = 1 ql2xenablemsi24xx = 1 ql2xenablemsi2422 = 0 ql2xoperationmode = 1 ql2xintrdelaytimer = 1 ql2xcmdtimeout = 20 ql2xexecution_throttle = 0 ql2xmaxsgs = 0 ql2xmaxlun = 256 ql2xmqqos = 1 ql2xmqcpuaffinity = 1 ql2xfwloadbin = 0 ql2xbypass_log_throttle = 0 ~ # ~ # The previous output provides more detailed HBA information. For more information, visit: xternalid= For details about how to modify the HBA queue depth, visit: xternalid=
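As a reference for the queue depth adjustment mentioned above, HBA driver module parameters can be changed from the ESXi shell. The following is a sketch for a QLogic HBA on ESXi 5.x; the module name (qlnativefc on ESXi 5.5, qla2xxx on earlier releases, as shown in the outputs above) and the value 64 are example choices that should be confirmed against the VMware knowledge base articles referenced above. A host reboot is required for the change to take effect:
~ # esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=64"
~ # esxcli system module parameters list -m qlnativefc | grep ql2xmaxqdepth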
36 4 Preparations Before Configuration (on a Storage System) 4 Preparations Before Configuration (on a Storage System) Make sure that RAID groups, LUNs, and hosts are correctly created on the storage systems. These configurations are common and therefore not detailed here. 25
37 5 Configuring Switches 5 Configuring Switches VMware hosts and storage systems can be connected over a Fibre Channel switch-based network and an iscsi switch-based network. A Fibre Channel switch-based network uses Fibre Channel switches and an iscsi network uses Ethernet switches. This chapter describes how to configure a Fibre Channel switch and an Ethernet switch respectively. 5.1 Fibre Channel Switch The commonly used Fibre Channel switches are mainly from Brocade, Cisco, and QLogic. The following uses a Brocade switch as an example to explain how to configure switches Querying the Switch Model and Version Perform the following steps to query the switch model and version: Step 1 Log in to the Brocade switch from a web page. On the web page, enter the IP address of the Brocade switch. The Web Tools switch login dialog box is displayed. Enter the account and password. The default account and password are admin and password. The switch management page is displayed. CAUTION Web Tools works correctly only when Java is installed on the host. Java 1.6 or later is recommended. Step 2 View the switch information. On the switch management page that is displayed, click Switch Information. The switch information is displayed, as shown in Figure
Figure 5-1 Switch information
Note the following parameters:
Fabric OS version: indicates the switch version information. The interoperability between switches and storage systems varies with the switch version. Only switches of authenticated versions can interconnect correctly with storage systems.
Type: a decimal number consisting of an integer part and a fractional part. The integer indicates the switch model and the fraction indicates the switch template version. You only need to pay attention to the switch model. Table 5-1 describes the switch model mapping.
Table 5-1 Mapping between switch types and names (switch type: switch name)
1: Brocade 1000 Switch
58: Brocade 5000 Switch
2, 6: Brocade 2800 Switch
61: Brocade 4424 Embedded Switch
3: Brocade 2100, 2400 Switches
62: Brocade DCX Backbone
4: Brocade 20x0, 2010, 2040, 2050 Switches
5: Brocade 22x0, 2210, 2240, 2250 Switches
64: Brocade 5300 Switch
66: Brocade 5100 Switch
7: Brocade 2000 Switch
67: Brocade Encryption Switch
9: Brocade 3800 Switch
69: Brocade 5410 Blade
10: Brocade Director
70: Brocade 5410 Embedded Switch
12: Brocade 3900 Switch
71: Brocade 300 Switch
16: Brocade 3200 Switch
72: Brocade 5480 Embedded Switch
17: Brocade 3800VL
73: Brocade 5470 Embedded Switch
18: Brocade 3000 Switch
75: Brocade M5424 Embedded Switch
21: Brocade Director
76: Brocade 8000 Switch
22: Brocade 3016 Switch
77: Brocade DCX-4S Backbone
26: Brocade 3850 Switch
83: Brocade 7800 Extension Switch
27: Brocade 3250 Switch
86: Brocade 5450 Embedded Switch
29: Brocade 4012 Embedded Switch
87: Brocade 5460 Embedded Switch
32: Brocade 4100 Switch
90: Brocade 8470 Embedded Switch
33: Brocade 3014 Switch
92: Brocade VA-40FC Switch
34: Brocade 200E Switch
95: Brocade VDX Data Center Switch
37: Brocade 4020 Embedded Switch
96: Brocade VDX Data Center Switch
38: Brocade 7420 SAN Router
97: Brocade VDX Data Center Switch
40: Fibre Channel Routing (FCR) Front Domain
98: Brocade VDX Data Center Switch
41: Fibre Channel Routing (FCR) Xlate Domain
108: Dell M8428-k FCoE Embedded Switch
42: Brocade Director
109: Brocade 6510 Switch
43: Brocade 4024 Embedded Switch
116: Brocade VDX 6710 Data Center Switch
44: Brocade 4900 Switch
117: Brocade 6547 Embedded Switch
45: Brocade 4016 Embedded Switch
118: Brocade 6505 Switch
46: Brocade 7500 Switch
120: Brocade DCX Backbone
51: Brocade 4018 Embedded Switch
121: Brocade DCX Backbone
55.2: Brocade 7600 Switch
Ethernet IPv4: indicates the switch IP address.
Effective Configuration: indicates the currently effective configuration. This parameter is important and is related to zone configurations. In this example, the currently effective configuration is ss.
----End
Configuring Zones
Zone configuration is important for Fibre Channel switches. Perform the following steps to configure switch zones:
Step 1 Log in to the Brocade switch from a web page. This step is the same as that in section "Querying the Switch Model and Version."
Step 2 Check the switch port status. Normally, the switch port indicators are steady green, as shown in Figure 5-2.
Figure 5-2 Switch port indicator status
If the port indicators are abnormal, check the topology mode and rate. Proceed with the next step after all indicators are normal.
Step 3 Go to the Zone Admin page. In the navigation tree of Web Tools, choose Task > Manage > Zone Admin. You can also choose Manage > Zone Admin in the navigation bar.
41 5 Configuring Switches Step 4 Check whether the switch identifies hosts and storage systems. On the Zone Admin page, click the Zone tab. In Ports&Attached Devices, check whether all related ports are identified, as shown in Figure 5-3. Figure 5-3 Zone tab page The preceding figure shows that ports 1,8 and 1,9 in use are correctly identified by the switch. Step 5 Create a zone. On the Zone tab page, click New Zone to create a new zone and name it zone_8_9. Select ports 1,8 and 1,9 and click Add Member to add them to the new zone, as shown in Figure 5-4. Figure 5-4 Zone configuration 30
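If CLI access to the switch is available, Steps 5 and 6 can also be performed with standard Brocade Fabric OS commands instead of Web Tools. A sketch using the same zone name, port members, and effective configuration (ss) as in this example:
switch:admin> zonecreate "zone_8_9", "1,8; 1,9"
switch:admin> cfgadd "ss", "zone_8_9"
switch:admin> cfgsave
switch:admin> cfgenable "ss"
The zoneshow command can then be used to confirm that zone_8_9 is part of the effective configuration.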
42 5 Configuring Switches CAUTION To ensure data is transferred separately, ensure that each zone contains one initiator and one target only. Step 6 Add the new zone to the configuration file and activate the new zone. On the Zone Admin page, click the Zone Config tab. In the Name drop-down list, choose the currently effective configuration ss. In Member Selection List, select zone zone_8_9 and click Add Member to add it to the configuration file. Click Save Config to save the configuration and click Enable Config to make the configuration effective. Figure 5-5 shows the Zone Config page. Figure 5-5 Zone Config tab page Step 7 Verify that the configuration takes effect. In the navigation tree of Web Tools, choose Task > Monitor > Name Server to go to the Name Server page. You can also choose Monitor > Name Server in the navigation bar. Figure 5-6 shows the Name Server page. 31
43 5 Configuring Switches Figure 5-6 Name Server page The preceding figure shows that ports 8 and 9 are members of zone_8_9 that is now effective. An effective zone is marked by an asterisk (*). ----End Precautions Note the following when connecting a Brocade switch to a storage system at a rate of 8 Gbit/s: The topology mode of the storage system must be set to switch. fill word of ports through which the switch is connected to the storage system must be set to 0. To configure this parameter, run the portcfgfillword <port number> 0 command on the switch. Note the following when connecting a Brocade switch to a storage system at a rate of 8 Gbit/s: When the switch is connected to module HP VC 8Gb 20-port FC or HP VC FlexFabric 10Gb/24-port, change the switch configuration. For details, visit: play/?javax.portlet.prp_efb5c e51970c8fa22b053ce01=wsrp-navigationalstate% 3DdocId%3Demr_na-c %7CdocLocale%3Dzh_CN&lang=en&javax.portlet.be gcachetok=com.vignette.cachetoken&sp4ts.oid= &javax.portlet.endcachetok= com.vignette.cachetoken&javax.portlet.tpst=efb5c e51970c8fa22b053ce01&hpa ppid=sp4ts&cc=us&ac.admitted= Ethernet Switch This section describes how to configure Ethernet switches, including configuring VLANs and binding ports. 32
Configuring VLANs
On an Ethernet network to which many hosts are connected, a large number of broadcast packets are generated during host communication. Broadcast packets sent from one host are received by all other hosts on the network, consuming bandwidth. Moreover, all hosts on the network can access each other, resulting in data security risks. To save bandwidth and prevent security risks, hosts on an Ethernet network are divided into multiple logical groups. Each logical group is a VLAN. The following uses the HUAWEI Quidway 2700 Ethernet switch as an example to explain how to configure VLANs. In the following example, two VLANs (VLAN 1000 and VLAN 2000) are created. VLAN 1000 contains ports GE 1/0/1 to 1/0/16. VLAN 2000 contains ports GE 1/0/20 to 1/0/24.
Step 1 Go to the system view.
<Quidway>system-view
System View: return to User View with Ctrl+Z.
Step 2 Create VLAN 1000 and add ports to it.
[Quidway]VLAN 1000
[Quidway-vlan1000]port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/16
Step 3 Configure the IP address of VLAN 1000.
[Quidway-vlan1000]interface VLAN 1000
[Quidway-Vlan-interface1000]ip address
Step 4 Create VLAN 2000, add ports, and configure the IP address.
[Quidway]VLAN 2000
[Quidway-vlan2000]port GigabitEthernet 1/0/20 to GigabitEthernet 1/0/24
[Quidway-vlan2000]interface VLAN 2000
[Quidway-Vlan-interface2000]ip address
----End
45 5 Configuring Switches The protocol dynamically adds ports to an aggregation group. Ports added in this way must have LACP enabled and the same speed, duplex mode, and link type. Table 5-2 compares the three link aggregation modes. Table 5-2 Comparison of link aggregation modes Link Aggregation Mode Packet Exchange Port Detection CPU Usage Manual aggregation No No Low Static aggregation Yes Yes High Dynamic aggregation Yes Yes High Configuration HUAWEI OceanStor storage devices support 802.3ad link aggregation (dynamic aggregation). In this link aggregation mode, multiple network ports are in an active aggregation group and work in duplex mode and at the same speed. After binding iscsi host ports on a storage device, enable aggregation for their peer ports on a switch. Otherwise, links are unavailable between the storage device and the switch. This section uses switch ports GE 1/0/1 and GE 1/0/2 and iscsi host ports P2 and P3 as examples to explain how to bind ports. You can adjust related parameters based on site requirements. Bind the iscsi host ports. Step 2 Log in to the ISM and go to the page for binding ports. In the ISM navigation tree, choose Device Info > Storage Unit > Ports. In the function pane, click iscsi Host Ports. Step 3 Bind ports. Select the ports that you want to bind and choose Bind Ports > Bind in the menu bar. In this example, the ports to be bound are P2 and P3. The Bind iscsi Port dialog box is displayed. In Bond name, enter the name for the port bond and click OK. The Warning dialog box is displayed. In the Warning dialog box, select I have read the warning message carefully and click OK. The Information dialog box is displayed, indicating that the operation succeeded. Click OK. After the storage system ports are bound, configure link aggregation on the switch. Run the following command on the switch: <Quidway>system-view System View: return to User View with Ctrl+Z. [Quidway-Switch]interface GigabitEthernet 1/0/1 [Quidway-Switch-GigabitEthernet1/0/19]lacp enable LACP is already enabled on the port! [Quidway-Switch-GigabitEthernet1/0/19]quit 34
46 5 Configuring Switches [Quidway-Switch]interface GigabitEthernet 1/0/2 [Quidway-Switch-GigabitEthernet1/0/20]lacp enable LACP is already enabled on the port! [Quidway-Switch-GigabitEthernet1/0/20]quit After the command is executed, LACP is enabled for ports GE 1/0/1 and GE 1/0/2. Then the ports can be automatically detected and added to an aggregation group. ----End 35
6 Establishing Fibre Channel Connections
After connecting a host to a storage system, check the topology modes of the host and the storage system. Fibre Channel connections are established between the host and the storage system after the host initiators are identified by the storage system. The following describes how to check topology modes and add initiators.
6.1 Checking Topology Modes
On direct-connection networks, HBAs support specific topology modes. The topology mode of a storage system must be consistent with that supported by the host HBAs. You can use the ISM to manually change the topology mode of a storage system to one supported by the host HBAs. If the storage ports connected to the host HBAs are adaptive, there is no need to manually change the storage system topology mode. The method for checking topology modes varies with storage systems. The following describes how to check the topology mode of the OceanStor T series storage system and the OceanStor series enterprise storage system.
OceanStor T Series Storage System
The check method is as follows: In the ISM navigation tree, choose Device Info > Storage Unit > Ports. In the function pane, click FC Host Ports. Select a port connected to the host and then view the port details. Figure 6-1 shows the details about a Fibre Channel port.
48 6 Establishing Fibre Channel Connections Figure 6-1 Fibre Channel port details As shown in the preceding figure, the topology mode of the OceanStor T series storage system is Public Loop OceanStor Series Enterprise Storage System In the ISM navigation tree, choose System. Then click the device view icon in the upper right corner. Choose Controller Enclosure ENG0 > Controller > Interface Module > FC Port and click the port whose details that you want to view, as shown in Figure 6-2. In the navigation tree, you can see controller A and controller B, each of which has different interface modules. Choose a controller and an interface module based on actual conditions. Figure 6-2 Fibre Channel port details As shown in the preceding figure, the port working mode of the OceanStor storage system is P2P. 37
49 6 Establishing Fibre Channel Connections 6.2 Adding Initiators This section describes how to add host HBA initiators on a storage system. Perform the following steps to add initiators: Step 1 Check HBA WWNs on the host. Step 2 Check host WWNs on the storage system and add the identified WWNs to the host. The method for checking host WWNs varies with storage systems. The following describes how to check WWNs on the OceanStor T series storage system and the OceanStor storage system. OceanStor T series storage system (V100 and V200R001) Log in to the ISM and choose SAN Services > Mappings > Initiators in the navigation tree. In the function pane, check the initiator information. Ensure that the WWNs in step 1 are identified. If the WWNs are not identified, check the Fibre Channel port status. Ensure that the port status is normal. OceanStor Series Enterprise Storage System ----End Log in to the ISM and choose Host in the navigation tree. On the Initiator tab page, click Add Initiator and check that the WWNs in step 1 are found. If the WWNs are not identified, check the Fibre Channel port status. Ensure that the port status is normal. 6.3 Establishing Connections Add the WWNs (initiators) to the host and ensure that the initiator connection status is Online. If the initiator status is Online, Fibre Channel connections are established correctly. If the initiator status is Offline, check the physical links and topology mode. 38
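After the initiators are added and online, the Fibre Channel paths can also be rescanned and checked from the ESXi shell; a brief sketch (adapter names and the number of paths depend on the actual host and zoning):
~ # esxcli storage core adapter rescan --all
~ # esxcli storage core adapter list
~ # esxcli storage core path list
Once LUNs are mapped to the host (see chapter 8), the path list should show one path per zoned storage port for each mapped LUN.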
7 Establishing iSCSI Connections
Both a host and a storage system need to be configured before iSCSI connections are established between them. This chapter describes how to configure a host and a storage system before establishing iSCSI connections.
7.1 Host Configurations
Configuring Service IP Addresses
You can configure service IP addresses on a VMware host by adding virtual networks. Perform the following steps:
Step 1 In vSphere Client, choose Network > Add Network.
Step 2 In the Add Network Wizard that is displayed, select VMkernel, as shown in Figure 7-1.
Figure 7-1 Adding VMkernel
Click Next.
Step 3 Select the iSCSI service network port, as shown in Figure 7-2.
Figure 7-2 Creating a vSphere standard switch
Step 4 Specify the network label, as shown in Figure 7-3.
Figure 7-3 Specifying the network label
Step 5 Enter the iSCSI service IP address, as shown in Figure 7-4.
52 7 Establishing iscsi Connections Figure 7-4 Entering the iscsi service IP address Step 6 Confirm the information that you have configured, as shown in Figure 7-5. Figure 7-5 Information summary For a single-path network, the configuration is completed. For a multi-path network, proceed with the next step. Step 7 Repeat steps 1 to 6 to create another virtual network. Figure 7-6 shows the configuration completed for a multi-path network. 41
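The host-side iSCSI configuration described in this section and in "Configuring Host Initiators" below (creating the VMkernel port, enabling the software iSCSI adapter, binding it to the VMkernel port, and adding the discovery target) can also be performed from the ESXi shell on ESXi 5.x. The following is a minimal sketch; the vSwitch, port group, VMkernel interface, adapter name (vmhba37), and IP addresses are example values only:
~ # esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-A --vswitch-name=vSwitch1
~ # esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
~ # esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.11 --netmask=255.255.255.0 --type=static
~ # esxcli iscsi software set --enabled=true
~ # esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk1
~ # esxcli iscsi adapter discovery sendtarget add --adapter=vmhba37 --address=192.168.100.1:3260
~ # esxcli storage core adapter rescan --adapter=vmhba37
For a multi-path network, repeat the port group, VMkernel interface, and network portal binding commands for the second path.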
Figure 7-6 iSCSI multi-path network with dual adapters
----End
Configuring Host Initiators
Host initiator configuration includes creating host initiators, binding the initiators to the virtual networks created in section "Configuring Service IP Addresses", and discovering targets. In VMware ESX 4.1 and earlier versions, the storage adapter list already contains a software iSCSI adapter, and you only need to enable it. In VMware ESXi 5.0 and later versions, you need to manually add the software iSCSI adapter. This section uses VMware ESXi 5.0 as an example to explain how to configure host initiators.
Step 1 Choose Storage Adapters and right-click the function pane, as shown in Figure 7-7.
Figure 7-7 Adding storage adapters
Step 2 Choose Add Software iSCSI Adapter from the shortcut menu. In the dialog box that is displayed, click OK, as shown in Figure 7-8.
54 7 Establishing iscsi Connections Figure 7-8 Adding iscsi initiators The newly added iscsi initiators are displayed, as shown in Figure 7-9. Figure 7-9 iscsi Software Adapter Step 3 Right-click a newly created iscsi initiator and choose Properties from the shortcut menu, as shown in Figure
55 7 Establishing iscsi Connections Figure 7-10 Initiator properties Step 4 On the dialog box that is displayed, click the Network Configuration tab and click Add, as shown in Figure Figure 7-11 iscsi initiator properties Step 5 Select a virtual network that you have created in section and click OK, as shown in Figure
56 7 Establishing iscsi Connections Figure 7-12 Binding with a new VMkernel network adapter Figure 7-13 shows the properties of an initiator bound to the virtual network. Figure 7-13 Initiator properties after virtual network binding Step 6 In the dialog box for configuring initiator properties, click the Dynamic Discovery tab, click Add, and enter the target IP address (service IP address of the storage system), as shown in Figure
57 7 Establishing iscsi Connections Figure 7-14 Adding send target server ----End Configuring CHAP Authentication If CHAP authentication is required between a storage system and a host, perform the following steps to configure CHAP authentication: Step 1 In the dialog box for configuring iscsi initiator properties, click the General tab and click CHAP in the left lower corner, as shown in Figure Figure 7-15 General tab page 46
Step 2 In the CHAP Credentials dialog box that is displayed, choose Use CHAP from the Select option drop-down list. Enter the CHAP user name and password configured on the storage system, as shown in Figure 7-16.
Figure 7-16 CHAP credentials dialog box
Click OK.
----End
7.2 Storage System
Different versions of storage systems support different IP protocols. Specify the IP protocols for storage systems based on the actual storage system versions and application scenarios. Observe the following principles when configuring the IP addresses of iSCSI ports on storage systems:
The IP addresses of an iSCSI host port and a management network port must reside on different network segments.
The IP addresses of an iSCSI host port and a maintenance network port must reside on different network segments.
The IP addresses of an iSCSI host port and a heartbeat network port must reside on different network segments.
The IP addresses of iSCSI host ports on the same controller must reside on different network segments. In some storage systems of the latest versions, IP addresses of iSCSI host ports on the same controller can reside on the same network segment. However, this configuration is not recommended.
The IP address of an iSCSI host port must be able to communicate with the IP address of the host's service network port to which the iSCSI host port connects, or with the iSCSI host ports of other storage devices that connect to it.
59 7 Establishing iscsi Connections CAUTION Read-only users are not allowed to modify the IP address of an iscsi host port. Modifying the IP address of an iscsi host port will interrupt the services on the port. The IP address configuration varies with storage systems. The following explains how to configure IPv4 addresses on the OceanStor T series storage system and the OceanStor series enterprise storage system OceanStor T Series Storage System Perform the following steps to configure the iscsi service on the OceanStor T series storage system: Step 1 Configure the service IP address. In the ISM navigation tree, choose Device Info > Storage Unit > Ports. In the function pane, click iscsi Host Ports. Select a port and choose IP Address > Modify IPv4 Address in the tool bar, as shown in Figure Figure 7-17 Modifying IPv4 addresses In the dialog box that is displayed, enter the new IP address and subnet mask and click OK. If CHAP authentication is not required between the storage system and host, the host initiator configuration is completed. If CHAP authentication is required, proceed with the following steps to configure CHAP authentication on the storage system. Step 2 Configure CHAP authentication. In the ISM navigation tree, choose SAN Services > Mappings > Initiators. In the function pane, select the initiator whose CHAP authentication you want to configure and choose CHAP > CHAP Configuration in the navigation bar, as shown in Figure
60 7 Establishing iscsi Connections Figure 7-18 Initiator CHAP configuration Step 3 In the CHAP Configuration dialog box that is displayed, click Create in the lower right corner, as shown in Figure Figure 7-19 CHAP Configuration dialog box 49
61 7 Establishing iscsi Connections In the Create CHAP dialog box that is displayed, enter the CHAP user name and password, as shown in Figure Figure 7-20 Create CHAP dialog box CAUTION The CHAP user name contains 4 to 25 characters and the password contains 12 to 16 characters. The limitations to CHAP user name and password vary with storage systems. For details, see the help documentation of corresponding storage systems. Step 4 Assign the CHAP user name and password to the initiator, as shown in Figure
62 7 Establishing iscsi Connections Figure 7-21 Assigning the CHAP account to the initiator Step 5 Enable the CHAP account that is assigned to the host. In the ISM navigation tree, choose SAN Services > Mappings > Initiators. In the function pane, select the initiator whose CHAP account is to be enabled and choose CHAP > Status Settings in the navigation bar, as shown in Figure Figure 7-22 Setting CHAP status 51
63 7 Establishing iscsi Connections In the Status Settings dialog box that is displayed, choose Enabled from the CHAP Status drop-down list, as shown in Figure Figure 7-23 Enabling CHAP On the ISM, view the initiator status, as shown in Figure Figure 7-24 Initiator status after CHAP is enabled ----End OceanStor Series Enterprise Storage System Perform the following steps to configure the iscsi service on the OceanStor series enterprise storage system: Step 1 Go to the iscsi Host Port dialog box. Then perform the following steps: 1. On the right navigation bar, click. 2. In the basic information area of the function pane, click the device icon. 3. In the middle function pane, click the cabinet whose iscsi ports you want to view. 4. Click the controller enclosure where the desired iscsi host ports reside. The controller enclosure view is displayed. 5. Click to switch to the rear view. 52
64 7 Establishing iscsi Connections 6. Click the iscsi host port whose information you want to modify. 7. The iscsi Host Port dialog box is displayed. 8. Click Modify. Step 2 Modify the iscsi host port. 1. In IPv4 Address or IPv6 Address, enter the IP address of the iscsi host port. 2. In Subnet Mask or Prefix, enter the subnet mask or prefix of the iscsi host port. 3. In MTU (Byte), enter the maximum size of data packet that can be transferred between the iscsi host port and the host. The value is an integer ranging from 1500 to Step 3 Confirm the iscsi host port modification. 1. Click Apply. The Danger dialog box is displayed. 2. Carefully read the contents of the dialog box. Then click the check box next to the statement I have read the previous information and understood subsequences of the operation to confirm the information. 3. Click OK. The Success dialog box is displayed, indicating that the operation succeeded. 4. Click OK. Step 4 Configure CHAP authentication. 1. Select the initiator for whose CHAP authentication you want to configure. The initiator configuration dialog box is displayed. 2. Select Enable CHAP. The CHAP configuration dialog box is displayed. 3. Enter the user name and password of CHAP authentication and click OK. CHAP authentication is configured on the storage system. ----End 53
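On ESXi 5.x hosts, the host-side configuration described in section 7.1 can also be performed from the CLI. The following is a minimal sketch; the adapter name vmhba33, the VMkernel ports vmk1 and vmk2, and the target address 192.168.100.10 are placeholders and must be replaced with the actual values:
~ # esxcli iscsi software set --enabled=true (Enables the software iSCSI adapter.)
~ # esxcli iscsi adapter list (Displays the iSCSI adapter name, for example vmhba33.)
~ # esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1 (Binds a VMkernel port to the adapter. Repeat for vmk2 on a multi-path network.)
~ # esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.100.10:3260 (Adds the storage system's iSCSI service IP address as a send target.)
~ # esxcli storage core adapter rescan --adapter=vmhba33 (Rescans the adapter.)
~ # esxcli iscsi session list (Verifies that iSCSI sessions to the storage system have been established.)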
65 8 Mapping and Using LUNs 8 Mapping and Using LUNs 8.1 Mapping LUNs to a Host OceanStor T Series Storage System Prerequisites Procedure After a storage system is connected to a VMware host, map the storage system LUNs to the host. Two methods are available for mapping LUNs: Mapping LUNs to a host: This method is applicable to scenarios where only one small-scale client is deployed. Mapping LUNs to a host group: This method is applicable to cluster environments or scenarios where multiple clients are deployed. RAID groups have been created on the storage system. LUNs have been created on the RAID groups. This document explains how to map LUNs to a host. Perform the following steps to map LUNs to a host: Step 1 In the ISM navigation tree, choose SAN Services > Mappings >Hosts. Step 2 In the function pane, select the desired host. In the navigation bar, choose Mapping > Add LUN Mapping. The Add LUN Mapping dialog box is displayed. Step 3 Select LUNs that you want to map to the host and click OK. ----End CAUTION When mapping LUNs on a storage system to a host, ensure that the host LUN whose ID is 0 is mapped. 54
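After the LUNs are mapped, the result can also be verified from the ESXi CLI; this applies equally to the OceanStor series enterprise storage system described next, and section 8.2 shows the equivalent rescan in vSphere Client. A minimal sketch for ESXi 5.x follows; the vendor strings HUAWEI and HUASY are the ones that appear elsewhere in this document and may differ in your environment:
~ # esxcli storage core adapter rescan --all (Rescans all storage adapters.)
~ # esxcli storage core device list | grep -i -E "HUAWEI|HUASY" (Lists the detected storage devices and filters for the Huawei LUNs.)
~ # esxcfg-scsidevs -c (Displays a compact list of all devices, including their sizes and display names.)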
66 8 Mapping and Using LUNs OceanStor Series Enterprise Storage System Prerequisites Procedure After a storage system is connected to a VMware host, map the storage system LUNs to the host. LUNs, LUN groups, hosts, and host groups have been created. Step 1 Go to the Create Mapping View dialog box. Then perform the following steps: On the right navigation bar, click. 1. On the host management page, click Mapping View. 2. Click Create. The Create Mapping View dialog box is displayed. Step 2 Set basic properties for the mapping view. 1. In the Name text box, enter a name for the mapping view. 2. (Optional) In the Description text box, describe the mapping view. Step 3 Add a LUN group to the mapping view. 1. Click. The Select LUN Group dialog box is displayed. If your service requires a new LUN group, click Create to create one. You can select Shows only the LUN groups that do not belong to any mapping view to quickly locate LUN groups. 2. From the LUN group list, select the LUN groups you want to add to the mapping view. 3. Click OK. Step 4 Add a host group to the mapping view. 1. Click. If your service requires a new host group, click Create to create one. 2. The Select Host Group dialog box is displayed. 3. From the host group list, select the host groups you want to add to the mapping view. 4. Click OK. Step 5 (Optional) Add a port group to the mapping view. 1. Select Port Group. 2. Click. The Select Port Group dialog box is displayed. 55
67 8 Mapping and Using LUNs If your service requires a new port group, click Create to create one. 3. From the port group list, select the port group you want to add to the mapping view. 4. Click OK. Step 6 Confirm the creation of the mapping view. 1. Click OK. The Execution Result dialog box is displayed, indicating that the operation succeeded. 2. Click Close. ----End 8.2 Scanning for LUNs on a Host After LUNs are mapped on a storage system, scan for the mapped LUNs on the host, as shown in Figure 8-1. Figure 8-1 Scanning for the mapped LUNs 8.3 Using the Mapped LUNs After the mapped LUNs are detected on a host, you can directly use the raw devices to configure services or use the LUNs after creating a file system Mapping Raw Devices You can configure raw devices as disks for VMs by mapping the devices. Perform the following steps to map raw devices: 56
68 8 Mapping and Using LUNs Step 1 Right-click a VM and choose Edit Settings from the shortcut menu, as shown in Figure 8-2. Figure 8-2 Editing host settings Step 2 On the Hardware tab page, click Add. In the Add Hardware dialog box that is displayed, choose Hard Disk in Device Type and click Next, as shown in Figure 8-3. Figure 8-3 Adding disks Step 3 Select disks. You can create a new virtual disk, use an existing virtual disk, or use raw disk mappings, as shown in Figure
69 8 Mapping and Using LUNs Figure 8-4 Selecting disks Select Raw Device Mappings and click Next. Step 4 Select a target LUN and click Next, as shown in Figure 8-5. Figure 8-5 Selecting a target LUN Step 5 Select a datastore. The default datastore is under the same directory as the VM storage. Click Next, as shown in Figure
70 8 Mapping and Using LUNs Figure 8-6 Selecting a datastore Step 6 Select a compatibility mode. Select a compatibility mode based on site requirements and click Next, as shown in Figure 8-7. Snapshots are unavailable if the compatibility mode is specified to physical. Figure 8-7 Selecting a compatibility mode Step 7 In Advanced Options, keep the default virtual device node unchanged, as shown in Figure
71 8 Mapping and Using LUNs Figure 8-8 Selecting a virtual device node Step 8 In Ready to Complete, confirm the information about the disk to be added, as shown in Figure 8-9. Figure 8-9 Confirming the information about the disk to be added Click Finish. The system starts to add disks, as shown in Figure
72 8 Mapping and Using LUNs Figure 8-10 Adding raw disk mappings After a raw disk is mapped, the type of the newly created disk is Mapped Raw LUN. ----End Creating Datastores (File Systems) Create a file system before creating a virtual disk. A file system can be created using the file system disks in datastores. This section details how to create a datastore. Step 1 On the Configuration tab page, choose Storage in the navigation tree. On the Datastores tab page that is displayed, click Add Storage, as shown in Figure Figure 8-11 Adding storage Step 2 Select a storage type and click Next, as shown in Figure The default storage type is Disk/LUN. 61
73 8 Mapping and Using LUNs Figure 8-12 Selecting a storage type Step 3 On the Select Disk/LUN page that is displayed, select a desired disk and click Next, as shown in Figure Figure 8-13 Select a disk/lun Step 4 Select a file system version. VMFS-5 is selected in this example, as shown in Figure
74 8 Mapping and Using LUNs Figure 8-14 Selecting a file system version Step 5 View the current disk layout and device information, as shown in Figure Figure 8-15 Viewing the current disk layout Step 6 Enter the name of a datastore, as shown in Figure
75 8 Mapping and Using LUNs Figure 8-16 Entering a datastore name Step 7 Specify a disk capacity. Normally, Maximum available space is selected. If you want to test LUN expansion, customize a capacity, as shown in Figure Figure 8-17 Specifying a capacity Step 8 Confirm the disk layout. If the disk layout is correct, click Finish, as shown in Figure
76 8 Mapping and Using LUNs Figure 8-18 Confirming the disk layout ----End Mapping Virtual Disks Perform the following steps to add LUNs to VMs as virtual disks: Step 1 Right-click a VM and choose Edit Settings from the shortcut menu, as shown in Figure Figure 8-19 Editing VM settings Step 2 Click Add, select Hard Disk and click Next, as shown in Figure
77 8 Mapping and Using LUNs Figure 8-20 Adding disks Step 3 In Select a Disk, select Create a new virtual disk, as shown in Figure Figure 8-21 Creating a new virtual disk Step 4 Specify the disk capacity based on site requirements, as shown in Figure
78 8 Mapping and Using LUNs Figure 8-22 Specifying the disk capacity Step 5 Select a datastore. In this example, the datastore is disk1 and the file system type is VMFS-5, as shown in Figure Figure 8-23 Selecting a datastore Step 6 Select a virtual device node. If there are no special requirements, keep the default virtual device node unchanged, as shown in Figure
79 8 Mapping and Using LUNs Figure 8-24 Selecting a virtual device node Step 7 View the basic information about the virtual disk, as shown in Figure Figure 8-25 Viewing virtual disk information As shown in the preceding figure, hard disk 1 that you have added is a virtual disk. ----End Differences Between Raw Disks and Virtual Disks On the Hardware tab page of Virtual Machine Properties, you can modify the capacity of a disk mapped as a virtual disk, as shown in Figure
80 8 Mapping and Using LUNs Figure 8-26 Modifying the capacity of a virtual disk The capacity of a disk added using raw disk mappings cannot be modified, as shown in Figure Figure 8-27 Modifying the capacity of a disk added using raw disk mappings 69
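The raw device mappings and datastores described in this chapter can also be handled with vmkfstools on the ESXi CLI. The following is a hedged sketch; the device identifier naa.xxxxxxxx, the datastore name disk1, and the VM directory vm01 are placeholders:
~ # vmkfstools -P /vmfs/volumes/disk1 (Queries the VMFS version, capacity, and backing device of a datastore.)
~ # vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxx /vmfs/volumes/disk1/vm01/rdm_physical.vmdk (Creates a physical compatibility RDM pointer file for a LUN.)
~ # vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxx /vmfs/volumes/disk1/vm01/rdm_virtual.vmdk (Creates a virtual compatibility RDM pointer file; snapshots remain available in this mode.)
The pointer file can then be attached to a VM as an existing disk. Note that, as stated above, the capacity of an RDM cannot be modified from the virtual machine properties.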
81 9 Multipathing Management 9 Multipathing Management 9.1 Overview The VMware system has its own multipathing software Native Multipath Module (NMP), which is available without the need for extra configurations. This chapter details the NMP multipathing software. 9.2 VMware PSA Overview vsphere 4.0 incorporates a new module Pluggable Storage Architecture (PSA) that can be integrated with Third-Party Multipathing Plugin (MPP) or NMP to provide storage-specific plug-ins such as Storage Array Type Plug-in (SATP) and Path Selection Plugin (PSP), enabling the optimal path selection and I/O performance. Figure 9-1 VMware PSA 70
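The plug-ins loaded into the PSA framework can be listed directly on the host. The following is a minimal example for ESXi 5.x; on ESX/ESXi 4.x a similar listing is available under the esxcli corestorage namespace:
~ # esxcli storage core plugin list (Lists the loaded multipathing plug-ins; NMP appears here, together with any installed third-party MPP.)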
82 9 Multipathing Management VMware NMP VMware PSP Built-in PSP Third-Party Software NMP is the default multipathing module of VMware. This module provides two submodules to implement failover and load balancing. SATP: monitors path availability, reports path status to NMP, and implements failover. PSP: selects optimal I/O paths. PSA is compatible with the following third-party multipathing plugins: Third-party SATP: Storage vendors can use the VMware API to customize SATPs for their storage features and optimize VMware path selection. Third-party PSP: Storage vendors or third-party software vendors can use the VMware API to develop more sophisticated I/O load balancing algorithms and achieve larger throughput from multiple paths. By default, the PSP of most ESX operating systems supports three I/O policies: Most Recently Use (MRU), Round Robin, and Fixed. ESX 4.1 supports an additional policy: Fixed AP. For details, see section "PSPs in Different ESX Versions." Third-Party Multipathing Plug-in (MPP) supports comprehensive fault tolerance and performance processing, and runs on the same layer as NMP. For some storage systems, Third-Party MPP can substitute NMP to implement path failover and load balancing. 9.3 Software Functions and Features To manage storage multipathing, ESX/ESXi uses a special VMkernel layer, Pluggable Storage Architecture (PSA). The PSA is an open modular framework that coordinates the simultaneous operations of multiple plugins (MPPs). The VMkernel multipathing plugin that ESX/ESXi provides, by default, is VMware Native Multipathing (NMP). NMP is an extensible module that manages subplugins. There are two types of NMP plugins: Storage Array Type Plugins (SATPs), and Path Selection Plugins (PSPs). Figure 9-2 shows the architecture of VMkernel. 71
Figure 9-2 VMkernel architecture
If more multipathing functionality is required, a third party can also provide an MPP to run in addition to, or as a replacement for, the default NMP. When coordinating with the VMware NMP and any installed third-party MPPs, PSA performs the following tasks:
Loads and unloads multipathing plug-ins.
Hides virtual machine specifics from a particular plug-in.
Routes I/O requests for a specific logical device to the MPP managing that device.
Handles I/O queuing to the logical devices.
Implements logical device bandwidth sharing between virtual machines.
Handles I/O queuing to the physical storage HBAs.
Handles physical path discovery and removal.
Provides logical device and physical path I/O statistics.
9.4 Multipathing Selection Policy
Policies and Differences
VMware supports the following path selection policies, as described in Table 9-1.
Table 9-1 Path selection policies
Most Recently Used
Active/Active: Administrator action is required to fail back after path failure.
Active/Passive: Administrator action is required to fail back after path failure.
Fixed
Active/Active: VMkernel resumes using the preferred path when connectivity is restored.
Active/Passive: VMkernel attempts to resume using the preferred path. This can cause path thrashing or failure when another SP now owns the LUN.
Round Robin
Active/Active: No failback.
Active/Passive: Next path in round robin scheduling is selected.
Fixed AP
For ALUA arrays, VMkernel picks the path set to be the preferred path. For A/A, A/P, and ALUA arrays, VMkernel resumes using the preferred path, but only if the path-thrashing avoidance algorithm allows the failback.
The following details each policy.
Most Recently Used (VMW_PSP_MRU)
The host selects the path that was used most recently. When the path becomes unavailable, the host selects an alternative path. The host does not revert to the original path when that path becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for active-passive storage devices.
Working principle: uses the most recently used path for I/O transfer. When the path fails, I/O is automatically switched to another available path (if any). When the failed path recovers, I/O is not switched back to that path.
Round Robin (VMW_PSP_RR)
The host uses an automatic path selection algorithm that rotates through all available active paths to enable load balancing across the paths. Load balancing is a process that distributes host I/Os across all available paths. The purpose of load balancing is to achieve the optimal throughput performance (IOPS, MB/s, and response time).
Working principle: uses all available paths for I/O transfer.
Fixed (VMW_PSP_FIXED)
The host always uses the preferred path to the disk when that path is available. If the host cannot access the disk through the preferred path, it tries the alternative paths. Fixed is the default policy for active-active storage devices. After the preferred path recovers from a fault, VMkernel continues to use the preferred path. This attempt may result in path thrashing or failure because another SP now owns the LUN.
Working principle: uses the fixed preferred path for I/O transfer. When the current path fails, I/O is automatically switched to an alternative available path (if any). When the original path recovers, I/O is switched back to the original path.
Fixed AP (VMW_PSP_FIXED_AP)
This policy is supported only by ESX/ESXi 4.1 and is incorporated into VMW_PSP_FIXED in later ESX versions. Fixed AP extends the Fixed functionality to active-passive and ALUA-mode arrays.
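The policy in effect for a LUN can be checked and changed per device. A minimal sketch follows; the device identifier naa.xxxxxxxx is a placeholder:
~ # esxcli storage nmp device list --device naa.xxxxxxxx (ESXi 5.x; shows the SATP and PSP currently claiming the device.)
~ # esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR (ESXi 5.x; changes the path selection policy of the device, here to Round Robin.)
~ # esxcli nmp device setpolicy --device naa.xxxxxxxx --psp VMW_PSP_RR (ESX/ESXi 4.x equivalent.)
The same change can also be made in vSphere Client under Manage Paths for the device. Before changing the policy in a production environment, observe the recommendations in section 9.6.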
85 9 Multipathing Management PSPs in Different ESX Versions ESX/ESXi 4.0 Run the following command to display the PSPs supported by the operating system: ~]# esxcli nmp psp list Name Description VMW_PSP_MRU Most Recently Used Path Selection VMW_PSP_RR Round Robin Path Selection VMW_PSP_FIXED Fixed Path Selection ~]# Versions from VMware ESX GA to VMware ESX Update 4 support the same PSPs. ESX/ESXi 4.1 Run the following command to display the PSPs supported by the operating system: [root@esx4 ~]# esxcli nmp psp list Name Description VMW_PSP_FIXED_AP Fixed Path Selection with Array Preference VMW_PSP_MRU Most Recently Used Path Selection VMW_PSP_RR Round Robin Path Selection VMW_PSP_FIXED Fixed Path Selection [root@esx4 ~]# Versions from VMware ESX GA to VMware ESX Update 2 support the same PSPs. ESXi 5.0 Run the following command to display the PSPs supported by the operating system: ~ # esxcli storage nmp psp list Name Description VMW_PSP_MRU Most Recently Used Path Selection VMW_PSP_RR Round Robin Path Selection VMW_PSP_FIXED Fixed Path Selection ~ # Versions from VMware ESXi GA to VMware ESXi Update 3 support the same PSPs. ESXi 5.1 Run the following command to display the PSPs supported by the operating system: ~ # esxcli storage nmp psp list Name Description VMW_PSP_MRU Most Recently Used Path Selection VMW_PSP_RR Round Robin Path Selection VMW_PSP_FIXED Fixed Path Selection 74
86 9 Multipathing Management ~ # Versions from VMware ESXi GA to VMware ESXi Update 1 support the same PSPs. ESXi 5.5 Run the following command to display the PSPs supported by the operating system: ~ # esxcli storage nmp psp list Name Description VMW_PSP_MRU Most Recently Used Path Selection VMW_PSP_RR Round Robin Path Selection VMW_PSP_FIXED Fixed Path Selection ~ # 9.5 VMware SATPs VMware supports multiple SATPs, which vary with VMware versions. The following details SATPs for different VMware versions. ESX/ESXi 4.0 Run the following command to display the SATPs supported by the operating system: [root@e4 ~]# esxcli nmp satp list Name Default PSP Description VMW_SATP_ALUA_CX VMW_PSP_FIXED Supports EMC CX that use the ALUA protocol VMW_SATP_SVC VMW_PSP_FIXED Supports IBM SVC VMW_SATP_MSA VMW_PSP_MRU Supports HP MSA VMW_SATP_EQL VMW_PSP_FIXED Supports EqualLogic arrays VMW_SATP_INV VMW_PSP_FIXED Supports EMC Invista VMW_SATP_SYMM VMW_PSP_FIXED Supports EMC Symmetrix VMW_SATP_LSI VMW_PSP_MRU Supports LSI and other arrays compatible with the SIS 6.10 in non-avt mode VMW_SATP_EVA VMW_PSP_FIXED Supports HP EVA VMW_SATP_DEFAULT_AP VMW_PSP_MRU Supports non-specific active/passive arrays VMW_SATP_CX VMW_PSP_MRU Supports EMC CX that do not use the ALUA protocol VMW_SATP_ALUA VMW_PSP_MRU Supports non-specific arrays that use the ALUA protocol VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays VMW_SATP_LOCAL VMW_PSP_FIXED Supports direct attached devices [root@e4 ~]# Versions from VMware ESX GA to VMware ESX Update 4 support the same SATPs. ESX/ESXi 4.1 Run the following command to display the SATPs supported by the operating system: [root@esx4 ~]# esxcli nmp satp list Name Default PSP Description 75
87 9 Multipathing Management VMW_SATP_SYMM VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_SVC VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_MSA VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_LSI VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_INV VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EVA VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EQL VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AP VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_ALUA_CX VMW_PSP_FIXED_AP Placeholder (plugin not loaded) VMW_SATP_CX VMW_PSP_MRU Supports EMC CX that do not use the ALUA protocol VMW_SATP_ALUA VMW_PSP_MRU Supports non-specific arrays that use the ALUA protocol VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays VMW_SATP_LOCAL VMW_PSP_FIXED Supports direct attached devices ~]# Versions from VMware ESX GA to VMware ESX Update 2 support the same SATPs. ESXi 5.0 Run the following command to display the SATPs supported by the operating system: ~ # esxcli storage nmp satp list Name Default PSP Description VMW_SATP_MSA VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_ALUA VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AP VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_SVC VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EQL VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_INV VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EVA VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_ALUA_CX VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_SYMM VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_CX VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_LSI VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays VMW_SATP_LOCAL VMW_PSP_FIXED Supports direct attached devices ~ # Versions from VMware ESX GA to VMware ESX Update 3 support the same SATPs. ESXi 5.1 Run the following command to display the SATPs supported by the operating system: ~ # esxcli storage nmp satp list Name Default PSP Description VMW_SATP_MSA VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_ALUA VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AP VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_SVC VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EQL VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_INV VMW_PSP_FIXED Placeholder (plugin not loaded) 76
88 9 Multipathing Management VMW_SATP_EVA VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_ALUA_CX VMW_PSP_RR Placeholder (plugin not loaded) VMW_SATP_SYMM VMW_PSP_RR Placeholder (plugin not loaded) VMW_SATP_CX VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_LSI VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays VMW_SATP_LOCAL VMW_PSP_FIXED Supports direct attached devices ~ # Versions from VMware ESX GA to VMware ESX Update 1 support the same SATPs. ESXi 5.5 Run the following command to display the SATPs supported by the operating system: ~ # esxcli storage nmp satp list Name Default PSP Description VMW_SATP_ALUA VMW_PSP_MRU Supports non-specific arrays that use the ALUA protocol VMW_SATP_MSA VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AP VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_SVC VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EQL VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_INV VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EVA VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_ALUA_CX VMW_PSP_RR Placeholder (plugin not loaded) VMW_SATP_SYMM VMW_PSP_RR Placeholder (plugin not loaded) VMW_SATP_CX VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_LSI VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays VMW_SATP_LOCAL VMW_PSP_FIXED Supports direct attached devices ~ # The following details SATPs for different arrays: VMW_SATP_LOCAL Applies to local disks. VMW_SATP_DEFAULT_AP, VMW_SATP_DEFAULT_AA, and VMW_SATP_ALUA Applies to external storage arrays that have no customized plugins in ESX/ESXi operating systems. The plugins vary with external storage array types. Other SATPs Customized plugins in ESX/ESXi operating systems for storage arrays 9.6 Policy Configuration Policies supported by operating systems vary with operating system versions. This section describes the default and recommended policies after a VMware host is connected to a Huawei storage system. 77
89 9 Multipathing Management CAUTION A recommended policy applies to common scenarios and may not be the optimal one for a specific environment. For example, VMW_PSP_RR allows better performance than VMW_PSP_FIXED but has some usage limitations. If you want to configure an optimal PSP, contact Huawei Customer Service Center OceanStor T Series Storage System ESX/ESXi 4.0 Table 9-2 describes the default policies when a host running VMware ESX/ESXi 4.0 operating system is connected to a Huawei storage system. Table 9-2 Default policies for the ESX/ESXi 4.0 operating system Storage Information Operating System Default Policy Remarks ALUA enabled GA SATP VMW_SATP_DEFAULT_AA See the PSP VMW_PSP_FIXED note Update 4 SATP VMW_SATP_DEFAULT_AA See the PSP VMW_PSP_FIXED note. ALUA disabled GA SATP VMW_SATP_DEFAULT_AA See the PSP VMW_PSP_FIXED note Update 4 SATP VMW_SATP_DEFAULT_AA See the PSP VMW_PSP_FIXED note. 1. The preferred path cannot be selected. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 2. After a path recovers from a fault, its services can be switched back. To address limitations of the default policies, you can use the recommended policies as described in Table 9-3. Table 9-3 Recommended policies for the ESX/ESXi 4.0 operating system Storage Information Operatin g System Recommended Policy Remarks ALUA enabled GA to Update 4 SATP VMW_SATP_DEFAULT_AA See 1, 2, PSP VMW_PSP_FIXED and 3 in the note. 78
ALUA disabled (ESX/ESXi 4.0 GA to Update 4): SATP VMW_SATP_DEFAULT_AA, PSP VMW_PSP_FIXED. See 1, 2, and 4 in the note.
1. You need to manually specify the preferred path for each LUN on the management software of VMware.
2. After a path recovers from a fault, its services can be switched back.
3. This configuration mode is recommended for storage that supports ALUA.
4. This configuration mode is recommended for storage that does not support ALUA.
There is no need to change the default SATP or PSP. You need only to manually specify a preferred path for each LUN. Select the preferred active path according to the working controller of the LUN. The procedure is as follows:
Step 1 Go to the menu of the storage adapter, as shown in Figure 9-3.
Figure 9-3 Menu of the storage adapter
Step 2 Select the device in the previous figure, and right-click a LUN. The shortcut menu is displayed, as shown in Figure 9-4.
Figure 9-4 Shortcut menu of a LUN
Step 3 In the path management dialog box, select the path where the owning controller resides, and set the path as the preferred path in the shortcut menu, as shown in Figure 9-5.
Figure 9-5 Configuring the management path
Click Close. The preferred path for the LUN is specified.
Step 4 Repeat Steps 1 to 3 to specify preferred paths for the remaining LUNs.
----End
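The preferred path can also be specified from the CLI instead of vSphere Client. A minimal sketch; the device identifier and the path name are placeholders and must be taken from the actual path list of the LUN:
[root@esx ~]# esxcli nmp fixed setpreferred --device naa.xxxxxxxx --path vmhba1:C0:T0:L1 (ESX/ESXi 4.x; sets the specified path as the preferred path.)
~ # esxcli storage nmp psp fixed deviceconfig set --device naa.xxxxxxxx --path vmhba1:C0:T0:L1 (ESXi 5.x equivalent.)
Select the path that terminates on the owning controller of the LUN, as described above.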
92 9 Multipathing Management ESX/ESXi 4.1 Table 9-4 describes the default policies when a host running VMware ESX/ESXi 4.1 operating system is connected to a Huawei storage system. Table 9-4 Default policies for the ESX/ESXi 4.1 operating system Storage Information Operating System Default Policy Remarks ALUA enabled GA SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note Update 2 SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note. ALUA disabled GA SATP VMW_SATP_DEFAULT_AA See 3 and PSP VMW_PSP_FIXED 4 in the note Update 2 SATP VMW_SATP_DEFAULT_AA See 3 and PSP VMW_PSP_FIXED 4 in the note. 1. The preferred path is selected when LUNs are mapped for the first time. The recently used path is selected for I/O transfer until this path is failed. After a host restarts, it continues to use the path used before the restart. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 2. After a path recovers from a fault, its services can be switched back. 3. The preferred path cannot be selected. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 4. After a path recovers from a fault, its services can be switched back. To address limitations of the default policies, you can use the recommended policies as described in Table 9-5. Table 9-5 Recommended policies for the ESX/ESXi 4.1 operating system Storage Information Operating System Recommended Policy Remarks ALUA enabled ALUA disabled GA to Update GA to Update 2 SATP VMW_SATP_ALUA See 1, 3, PSP VMW_PSP_FIXED_AP and 4 in the note. SATP VMW_SATP_DEFAULT_AA See 2, 3, PSP VMW_PSP_FIXED and 5 in the note. 81
93 9 Multipathing Management 1. The following command must be run on the VMware CLI to add a rule: esxcli nmp satp addrule -V HUAWEI -M S2600T -s VMW_SATP_ALUA -P VMW_PSP_FIXED_AP -c tpgs_on The part in bold face can be specified based on site requirements. After the command is executed, restart the host for the new rule to take effect. Then the preferred path is selected. 2. You need to manually specify the preferred path for each LUN on the management software of VMware. 3. After a path recovers from a fault, its services can be switched back. 4. This configuration mode is recommended for storage that supports ALUA. 5. This configuration mode is recommended for storage that does not support ALUA. For storage with ALUA enabled, a newly added rule takes effect immediately after the host is restarted. CAUTION If a path policy or preferred path is set on VMware before or after rules are added, this setting prevails. The newly added rule will not be applied to a LUN that has a path policy or preferred path. For storage with ALUA disabled, there is no need to change the default SATP or PSP. You need only to manually specify a preferred path for each LUN. The method of specifying the preferred active path for LUNs is similar to that for ESX/ESXi 4.0. ESXi 5.0 Table 9-6 describes the default policies when a host running VMware ESXi 5.0 operating system is connected to a Huawei storage system. Table 9-6 Default policies for the ESXi 5.0 operating system Storage Information Operating System Default Policy Remarks ALUA enabled GA SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note Update Update 2 SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note. SATP VMW_SATP_ALUA See 2 and PSP VMW_PSP_MRU 4 in the note Update SATP VMW_SATP_ALUA See 1 and 82
94 9 Multipathing Management Storage Information Operating System Default Policy 3 PSP VMW_PSP_MRU Remarks 2 in the note. ALUA disabled GA SATP VMW_SATP_DEFAULT_AA See 2 and PSP VMW_PSP_FIXED 3 in the note Update Update Update 3 SATP VMW_SATP_DEFAULT_AA See 2 and PSP VMW_PSP_FIXED 3 in the note. SATP VMW_SATP_DEFAULT_AA See 2 and PSP VMW_PSP_FIXED 3 in the note. SATP VMW_SATP_DEFAULT_AA See 2 and PSP VMW_PSP_FIXED 3 in the note. 1. The preferred path cannot be selected. After a host restarts, it continues to use the path used before the restart. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 2. After a path recovers from a fault, its services can be switched back. 3. The preferred path cannot be selected. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 4. The preferred path is selected when LUNs are mapped for the first time. The recently used path is selected for I/O transfer until this path is failed. After a host restarts, it continues to use the path used before the restart. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. To address limitations of the default policies, you can use the recommended policies as described in Table 9-7. Table 9-7 Recommended policies for the ESXi 5.0 operating system Storage Information Operating System Recommended Policy Remarks ALUA enabled ALUA disabled GA to Update GA to Update 3 SATP VMW_SATP_ALUA See 1, 3, PSP VMW_PSP_FIXED and 4 in the note. SATP VMW_SATP_DEFAULT_AA See 2, 3, PSP VMW_PSP_FIXED and 5 in the note. 1. The following command must be run on the VMware CLI to add a rule: esxcli storage nmp satp rule add -V HUAWEI -M S2600T -s VMW_SATP_ALUA -P VMW_PSP_FIXED -c tpgs_on 83
95 9 Multipathing Management The part in bold face can be specified based on site requirements. After the command is executed, restart the host for the new rule to take effect. Then the preferred path is selected. 2. You need to manually specify the preferred path for each LUN on the management software of VMware. For a LUN that already has a preferred path, change the path to a non-preferred one and specify the preferred path for the LUN again. 3. After a path recovers from a fault, its services can be switched back. 4. This configuration mode is recommended for storage that supports ALUA. 5. This configuration mode is recommended for storage that does not support ALUA. For storage with ALUA enabled, a newly added rule takes effect immediately after the host is restarted. CAUTION If a path policy or preferred path is set on VMware before or after rules are added, this setting prevails. The newly added rule will not be applied to a LUN that has a path policy or preferred path. Note that the rule adding command for ESXi 5.0 is different from that for ESXi 4.1. For storage with ALUA disabled, there is no need to change the default SATP or PSP. You need only to manually specify a preferred path for each LUN. The method of specifying the preferred active path for LUNs is similar to that for ESX/ESXi 4.0. CAUTION In default policy configuration, path failover cannot be implemented when storage systems have ALUA disabled. In this case, you need to manually specify a preferred path for each LUN. You also need to specify a preferred path for a LUN that already has a default preferred path. Change the default path to a non-preferred one and specify the preferred path for the LUN again. Only after this configuration can services be switched back to the path after a fault recovery. ESXi 5.1 Table 9-8 describes the default policies when a host running VMware ESXi 5.1 operating system is connected to a Huawei storage system. 84
96 9 Multipathing Management Table 9-8 Default policies for the ESXi 5.1 operating system Storage Information Operating System Default Policy Remarks ALUA enabled GA SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note Update 1 SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note. ALUA disabled GA SATP VMW_SATP_DEFAULT_AA See 2 and PSP VMW_PSP_FIXED 3 in the note Update 1 SATP VMW_SATP_DEFAULT_AA See 2 and PSP VMW_PSP_FIXED 3 in the note. 1. The preferred path is selected when LUNs are mapped for the first time. The recently used path is selected for I/O transfer until this path is failed. After a host restarts, it continues to use the path used before the restart. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 2. After a path recovers from a fault, its services can be switched back. 3. The preferred path cannot be selected. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. To address limitations of the default policies, you can use the recommended policies as described in Table 9-9. Table 9-9 Recommended policies for the ESXi 5.1 operating system Storage Information Operating System Recommended Policy Remarks ALUA enabled ALUA disabled GA to Update GA to Update 1 SATP VMW_SATP_ALUA See 1, 3, PSP VMW_PSP_FIXED and 4 in the note. SATP VMW_SATP_DEFAULT_AA See 2, 3, PSP VMW_PSP_FIXED and 5 in the note. 1. The following command must be run on the VMware CLI to add a rule: esxcli storage nmp satp rule add -V HUAWEI -M S2600T -s VMW_SATP_ALUA -P VMW_PSP_FIXED -c tpgs_on The part in bold face can be specified based on site requirements. After the command is executed, restart the host for the new rule to take effect. Then the preferred path is selected. 85
97 9 Multipathing Management 2. You need to manually specify the preferred path for each LUN on the management software of VMware. For a LUN that already has a preferred path, change the path to a non-preferred one and specify the preferred path for the LUN again. 3. After a path recovers from a fault, its services can be switched back. 4. This configuration mode is recommended for storage that supports ALUA. 5. This configuration mode is recommended for storage that does not support ALUA. For storage with ALUA enabled, a newly added rule takes effect immediately after the host is restarted. CAUTION If a path policy or preferred path is set on VMware before or after rules are added, this setting prevails. The newly added rule will not be applied to a LUN that has a path policy or preferred path. Note that the rule adding command for ESXi 5.1 is different from that for ESXi 4.1. For storage with ALUA disabled, there is no need to change the default SATP or PSP. You need only to manually specify a preferred path for each LUN. The method of specifying the preferred active path for LUNs is similar to that for ESX/ESXi 4.0. CAUTION In default policy configuration, path failover cannot be implemented when storage systems have ALUA disabled. In this case, you need to manually specify a preferred path for each LUN. You also need to specify a preferred path for a LUN that already has a default preferred path. Change the default path to a non-preferred one and specify the preferred path for the LUN again. Only after this configuration can services be switched back to the path after a fault recovery. ESXi 5.5 Table 9-10 describes the default policies when a host running VMware ESXi 5.5 operating system is connected to a Huawei storage system. Table 9-10 Default policies for the ESXi 5.5 operating system Storage Information Operating System Recommended Policy Remarks ALUA enabled GA SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note. ALUA disabled GA SATP VMW_SATP_DEFAULT_AA See 2 and 86
98 9 Multipathing Management Storage Information Operating System Recommended Policy PSP VMW_PSP_FIXED Remarks 3 in the note. 1. The preferred path cannot be selected. The recently used path is selected for I/O transfer until this path is failed. After a host restarts, it continues to use the path used before the restart. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 2. After a path recovers from a fault, its services can be switched back. 3. The preferred path cannot be selected. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. To address limitations of the default policies, you can use the recommended policies as described in Table Table 9-11 Recommended policies for the ESXi 5.5 operating system Storage Information Operatin g System Recommended Policy Remarks ALUA enabled GA SATP VMW_SATP_ALUA See 1, 3, PSP VMW_PSP_FIXED and 4 in the note. ALUA disabled GA SATP VMW_SATP_DEFAULT_AA See 2, 3, PSP VMW_PSP_FIXED and 5 in the note. 1. The following command must be run on the VMware CLI to add a rule: esxcli storage nmp satp rule add -V HUAWEI -M S2600T -s VMW_SATP_ALUA -P VMW_PSP_FIXED -c tpgs_on The part in bold face can be specified based on site requirements. After the command is executed, restart the host for the new rule to take effect. Then the preferred path is selected. 2. You need to manually specify the preferred path for each LUN on the management software of VMware. For a LUN that already has a preferred path, change the path to a non-preferred one and specify the preferred path for the LUN again. 3. After a path recovers from a fault, its services can be switched back. 4. This configuration mode is recommended for storage that supports ALUA. 5. This configuration mode is recommended for storage that does not support ALUA. For storage with ALUA enabled, a newly added rule takes effect immediately after the host is restarted. CAUTION If a path policy or preferred path is set on VMware before or after rules are added, this setting prevails. 87
The newly added rule will not be applied to a LUN that already has a path policy or preferred path. Note that the rule adding command for ESXi 5.5 is different from that for ESXi 4.1.
For storage with ALUA disabled, there is no need to change the default SATP or PSP. You need only to manually specify a preferred path for each LUN. The method of specifying the preferred active path for LUNs is similar to that for ESX/ESXi 4.0.
CAUTION
In the default policy configuration, path failover cannot be implemented when storage systems have ALUA disabled. In this case, you need to manually specify a preferred path for each LUN. You also need to specify a preferred path for a LUN that already has a default preferred path: change the default path to a non-preferred one and specify the preferred path for the LUN again. Only after this configuration can services be switched back to the path after a fault recovery.
OceanStor Series Enterprise Storage System
The OceanStor series enterprise storage system supports multiple controllers (two or more). When the storage system has two controllers, it supports both ALUA and A/A. When the storage system has more than two controllers, it supports only A/A and not ALUA (as of the release of this document). To facilitate future capacity expansion, you are advised to disable ALUA on the OceanStor series enterprise storage system and its host. Table 9-12 describes the policy configuration.
Table 9-12 Recommended policies
ALUA disabled: SATP VMW_SATP_DEFAULT_AA, PSP VMW_PSP_FIXED. See 1 and 2 in the note.
1. You need to manually specify the preferred path for each LUN on the management software of VMware. For a LUN that already has a preferred path, change the path to a non-preferred one and specify the preferred path for the LUN again.
2. After a path recovers from a fault, its services can be switched back.
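After configuring the recommended policies, the result can be verified from the host. A minimal sketch for ESXi 5.x; the device identifier is a placeholder:
~ # esxcli storage nmp satp rule list | grep -i HUAWEI (Displays any Huawei-specific claim rules that have been added.)
~ # esxcli storage nmp device list --device naa.xxxxxxxx (Confirms the SATP and PSP that actually claimed the LUN after the host restart.)
If the output does not match the recommended policy, check whether the LUN already had a path policy or preferred path configured, because such settings prevail over newly added rules.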
100 9 Multipathing Management 9.7 LUN Failure Policy A LUN becomes inaccessible after all its paths failed. However, a VMware host can still receive I/Os sent by this LUN for a certain period of time. During this period of time, if any path of the LUN is recovered, the LUN continues to send I/Os over this path. If no path of the LUN is recovered during this period of time, each I/O returns with an error flag. 9.8 Path Policy Query and Modification ESX/ESXi 4.0 This section describes how to use the CLI to query and modify path policies. Commands for querying and modifying path policies vary with host operating system versions. The following details the commands for different host operating systems. The following is an example command for querying path policies: [root@e4 ~]# esxcli nmp device list -d naa f naa f Device Display Name: HUASY iscsi Disk (naa f ) Storage Array Type: VMW_SATP_DEFAULT_AA Storage Array Type Device Config: Path Selection Policy: VMW_PSP_FIXED Path Selection Policy Device Config: {preferred=vmhba33:c0:t1:l0;current=vmhba33:c0:t0:l0} Working Paths: vmhba33:c0:t0:l0 [root@e4 ~]# ESX/ESXi 4.1 Query The following is an example command for querying path policies: [root@tongrenyuan ~]# esxcli corestorage device list naa.60022a b2a9d a Display Name: HUASY Fibre Channel Disk (naa.60022a b2a9d a) Size: Device Type: Direct-Access Multipath Plugin: NMP Devfs Path: /vmfs/devices/disks/naa.60022a b2a9d a Vendor: HUASY Model: S5600T Revision: 2105 SCSI Level: 4 Is Pseudo: false Status: on Is RDM Capable: true Is Local: false Is Removable: false Attached Filters: 89
101 9 Multipathing Management VAAI Status: unknown Other UIDs: vml a b2a9d a naa.60022a b2a Display Name: HUASY Fibre Channel Disk (naa.60022a b2a ) Size: Device Type: Direct-Access Multipath Plugin: NMP Devfs Path: /vmfs/devices/disks/naa.60022a b2a Vendor: HUASY Model: S5600T Revision: 2105 SCSI Level: 4 Is Pseudo: false Status: on Is RDM Capable: true Is Local: false Is Removable: false Attached Filters: VAAI Status: unknown Other UIDs: vml a b2a naa e b573f30ad Display Name: LSILOGIC Serial Attached SCSI Disk (naa e b573f30ad ) Size: Device Type: Direct-Access Multipath Plugin: NMP Devfs Path: /vmfs/devices/disks/naa e b573f30ad Vendor: LSILOGIC Model: Logical Volume Revision: 3000 SCSI Level: 2 Is Pseudo: false Status: on Is RDM Capable: true Is Local: false Is Removable: false Attached Filters: VAAI Status: unknown Other UIDs: vml e b573f30ad c6f [root@tongrenyuan ~]# [root@tongrenyuan ~]# esxcli nmp device list naa.60022a b2a9d a Device Display Name: HUASY Fibre Channel Disk (naa.60022a b2a9d a) Storage Array Type: VMW_SATP_ALUA Storage Array Type Device Config: {implicit_support=on;explicit_support=on; explicit_allow=on;alua_followover=on;{tpg_id=2,tpg_state=ao}} Path Selection Policy: VMW_PSP_MRU Path Selection Policy Device Config: Current Path=vmhba1:C0:T0:L1 Working Paths: vmhba1:c0:t0:l1 naa.60022a b2a
102 9 Multipathing Management Device Display Name: HUASY Fibre Channel Disk (naa.60022a b2a ) Storage Array Type: VMW_SATP_ALUA Storage Array Type Device Config: {implicit_support=on;explicit_support=on; explicit_allow=on;alua_followover=on;{tpg_id=2,tpg_state=ao}} Path Selection Policy: VMW_PSP_MRU Path Selection Policy Device Config: Current Path=vmhba1:C0:T0:L0 Working Paths: vmhba1:c0:t0:l0 naa e b573f30ad Device Display Name: LSILOGIC Serial Attached SCSI Disk (naa e b573f30ad ) Storage Array Type: VMW_SATP_LOCAL Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration. Path Selection Policy: VMW_PSP_FIXED Path Selection Policy Device Config: {preferred=vmhba0:c1:t0:l0;current=vmhba0:c1:t0:l0} Working Paths: vmhba0:c1:t0:l0 ~]# esxcli corestorage device list is used to display existing disks. esxcli nmp device list is used to display disk paths. Configuration ESXi 5.0 You can run the following command to query the path policy parameters that can be modified: ~]# esxcli nmp psp getconfig --device naa e b573f30ad {preferred=vmhba0:c1:t0:l0;current=vmhba0:c1:t0:l0} The preceding output shows that parameters preferred and current of device naa e b573f30ad can be modified. Run the following command to modify path policy parameters: esxcli nmp psp setconfig --preferred new_value --device naa e b573f30ad The following is an example command for querying path policies: ~ # esxcli storage nmp device list naa b85d Device Display Name: HUASY iscsi Disk (naa b85d ) Storage Array Type: VMW_SATP_ALUA Storage Array Type Device Config: {implicit_support=on;explicit_support=on; explicit_allow=on;alua_followover=on;{tpg_id=1,tpg_state=ao}{tpg_id=2,tpg_state=an O}} Path Selection Policy: VMW_PSP_MRU Path Selection Policy Device Config: Current Path=vmhba39:C0:T0:L2 Path Selection Policy Device Custom Config: Working Paths: vmhba39:c0:t0:l2 91
103 9 Multipathing Management ESXi 5.1 naa b2d c The following is an example command for querying path policies: ~ # esxcli storage nmp device list naa.60026b904a3e e Device Display Name: Local DELL Disk (naa.60026b904a3e e ) Storage Array Type: VMW_SATP_LOCAL Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration. Path Selection Policy: VMW_PSP_FIXED Path Selection Policy Device Config: {preferred=vmhba1:c2:t0:l0;current=vmhba1:c2:t0:l0} Path Selection Policy Device Custom Config: Working Paths: vmhba1:c2:t0:l0 Is Local SAS Device: false Is Boot USB Device: false naa f Device Display Name: HUASY iscsi Disk (naa f) Storage Array Type: VMW_SATP_DEFAULT_AA Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration. Path Selection Policy: VMW_PSP_FIXED Path Selection Policy Device Config: {preferred=vmhba35:c0:t1:l0;current=vmhba35:c0:t1:l0} Path Selection Policy Device Custom Config: Working Paths: vmhba35:c0:t1:l0 Is Local SAS Device: false Is Boot USB Device: false mpx.vmhba34:c0:t0:l0 Device Display Name: Local TEAC CD-ROM (mpx.vmhba34:c0:t0:l0) Storage Array Type: VMW_SATP_LOCAL Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration. Path Selection Policy: VMW_PSP_FIXED Path Selection Policy Device Config: {preferred=vmhba34:c0:t0:l0;current=vmhba34:c0:t0:l0} Path Selection Policy Device Custom Config: Working Paths: vmhba34:c0:t0:l0 Is Local SAS Device: false Is Boot USB Device: false t10.dp BACKPLANE Device Display Name: Local DP Enclosure Svc Dev (t10.dp BACKPLANE000000) Storage Array Type: VMW_SATP_LOCAL Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration. Path Selection Policy: VMW_PSP_FIXED Path Selection Policy Device Config: {preferred=vmhba1:c0:t32:l0;current=vmhba1:c0:t32:l0} Path Selection Policy Device Custom Config: 92
104 9 Multipathing Management ESXi 5.5 Working Paths: vmhba1:c0:t32:l0 Is Local SAS Device: false Is Boot USB Device: false ~ # The following is an example command for querying path policies: ~ # esxcli storage nmp device list naa Device Display Name: HUAWEI Fibre Channel Disk (naa ) Storage Array Type: VMW_SATP_ALUA Storage Array Type Device Config: {implicit_support=on;explicit_support=on; explicit_allow=on;alua_followover=on;{tpg_id=2,tpg_state=ano}{tpg_id=1,tpg_state=a O}} Path Selection Policy: VMW_PSP_MRU Path Selection Policy Device Config: Current Path=vmhba3:C0:T0:L0 Path Selection Policy Device Custom Config: Working Paths: vmhba3:c0:t0:l0 Is Local SAS Device: false Is Boot USB Device: false naa abcde01a3b00fa0ec82e34 Device Display Name: Local LSI Disk (naa abcde01a3b00fa0ec82e34) Storage Array Type: VMW_SATP_LOCAL Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration. Path Selection Policy: VMW_PSP_FIXED Path Selection Policy Device Config: {preferred=vmhba1:c2:t0:l0;current=vmhba1:c2:t0:l0} Path Selection Policy Device Custom Config: Working Paths: vmhba1:c2:t0:l0 Is Local SAS Device: false Is Boot USB Device: false naa bff Device Display Name: HUAWEI Fibre Channel Disk (naa bff ) Storage Array Type: VMW_SATP_ALUA Storage Array Type Device Config: {implicit_support=on;explicit_support=on; explicit_allow=on;alua_followover=on;{tpg_id=2,tpg_state=ano}{tpg_id=1,tpg_state=a O}} Path Selection Policy: VMW_PSP_MRU Path Selection Policy Device Config: Current Path=vmhba3:C0:T0:L1 Path Selection Policy Device Custom Config: Working Paths: vmhba3:c0:t0:l1 Is Local SAS Device: false Is Boot USB Device: false ~ # ~ # 93
105 9 Multipathing Management 9.9 Differences Between iscsi Multi-Path Networks with Single and Multiple HBAs This section describes differences between iscsi multi-path networks with single and multiple HBAs iscsi Multi-Path Network with a Single HBA Usually, a blade server has only one HBA apart from the one used for management. For example, an IBM HS22 with eight network ports can provide only one HBA during VMkernel creation. In this case, you can bind two VMkernels to the HBA. This configuration is proven applicable by practical experience. Figure 9-6 shows this configuration. Figure 9-6 iscsi network with a single HBA iscsi Multi-Path Network with Multiple HBAs If two or more HBAs are available, you can bind VMkernels to different HBAs to set up a cross-connection network. Figure 9-7 shows a parallel network where two VMkernels are bound to network ports of the HBAs on controller A. Figure 9-7 iscsi network A with multiple HBAs Figure 9-8 shows the port mapping. 94
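For ESXi 5.x, the individual paths of a device and their states can also be listed with the following commands (a sketch; the device identifier is a placeholder):
~ # esxcli storage nmp path list --device naa.xxxxxxxx (Lists all paths of the device together with their group state, for example active or active unoptimized.)
~ # esxcli storage core path list --device naa.xxxxxxxx (Shows the transport details of each path, such as the adapter and target identifiers.)
The policy itself is changed with the esxcli storage nmp device set command shown in section 9.4.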
106 9 Multipathing Management Figure 9-8 Port mapping of iscsi network A with multiple HBAs Figure 9-9 shows a cross-connection network where two VMkernels are bound to two HBAs, one of which resides on controller A and the other on controller B. Figure 9-9 iscsi network B with multiple HBAs In this configuration, both NIC 1 and NIC 2 (VMkernels) are bound to controller A and controller B, forming a cross-connection network. Services are not affected when any path fails. Figure 9-10 shows the port mapping. Figure 9-10 Port mapping of iscsi network B with multiple HBAs 95
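The VMkernel-to-HBA binding described above can be configured with vSphere Client or from the ESXi shell. The following is a minimal sketch only, assuming a software iSCSI adapter named vmhba35 and VMkernel ports vmk1 and vmk2; these names are placeholders and must be replaced with the actual adapter and port names, and each VMkernel port group must have only one active uplink (and no standby uplinks) for the binding to be accepted.

~ # esxcli iscsi networkportal add --adapter vmhba35 --nic vmk1 (Binds VMkernel port vmk1 to the iSCSI adapter.)
~ # esxcli iscsi networkportal add --adapter vmhba35 --nic vmk2 (Binds VMkernel port vmk2 to the iSCSI adapter.)
~ # esxcli iscsi networkportal list --adapter vmhba35 (Verifies the binding.)

After binding, rescan the iSCSI adapter so that paths through both VMkernel ports are discovered.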
10 Common Commands

This chapter describes common commands for VMware.

Viewing the Version

Run the following commands to view the VMware version:

~ # vmware -l
VMware ESXi GA
~ # vmware -v
VMware ESXi build
~ #

Viewing Hardware Information

Run the following commands to view hardware information such as the ESX hardware and kernel details:

esxcfg-info -a (Displays all.)
esxcfg-info -w (Displays ESX hardware information.)

Configuring Firewalls

Run the following commands to configure the firewall (esxcli equivalents for ESXi 5.x hosts are sketched at the end of this chapter):

esxcfg-firewall -e sshClient (Opens the firewall SSH client port.)
esxcfg-firewall -d sshClient (Closes the firewall SSH client port.)

Obtaining Help Documentation

Command syntax varies with the host system version. Perform the following steps to obtain the help documentation for a specific version:

Step 1 Log in to the VMware official website.

Step 2 Select a VMware version.

The latest VMware version, 5.1, is used as an example here. Click vSphere Command-Line Interface Reference, as shown in Figure 10-1.

Figure 10-1 Selecting a VMware version

You are navigated to the help page of the selected VMware version.

----End
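The esxcfg-* commands above come from the classic ESX service console. On ESXi 5.x hosts, rough esxcli equivalents are available; the following is a minimal sketch, not taken from this guide:

~ # esxcli system version get (Displays the product version and build.)
~ # esxcli network firewall ruleset set --ruleset-id sshClient --enabled true (Opens the outgoing SSH client ruleset.)
~ # esxcli network firewall ruleset set --ruleset-id sshClient --enabled false (Closes the outgoing SSH client ruleset.)
~ # esxcli network firewall ruleset list (Lists all firewall rulesets and their states.)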
11 Host High-Availability

11.1 Overview

As services grow, key applications must remain available at all times, which requires the system to be fault tolerant. Traditional fault-tolerant systems, however, are costly, so an economical solution that still provides fault tolerance is needed. A high availability (HA) solution ensures the availability of applications and data in the event of any system component fault. It aims to eliminate single points of failure and to minimize the impact of expected or unexpected system downtime.

Working Principle and Functions

Working Principle

VMware HA continuously monitors all virtualized servers in a resource pool and detects physical server and operating system failures. To monitor physical servers, an agent on each server maintains a heartbeat with the other servers in the resource pool, so that a loss of heartbeat automatically initiates the restart of all affected virtual machines on other servers in the resource pool. VMware HA ensures that sufficient resources are available in the resource pool at all times to restart virtual machines on different physical servers in the event of a server failure. Safe restart of virtual machines is made possible by the locking technology in the ESX Server storage stack, which allows multiple ESX Server hosts to have simultaneous access to the same virtual machine files.

Functions

VMware HA initiates failover of all running VMs, within the configured failover capacity, in the event of an ESX Server host failure. VMware HA can automatically detect server faults and restart VMs without human intervention. VMware HA also interworks with Distributed Resource Scheduler (DRS) to implement dynamic and intelligent resource allocation and VM optimization after a failover. After a host becomes faulty and its VMs are restarted on another host, DRS can provide further migration suggestions or directly migrate VMs to achieve optimal virtual machine placement and balanced resource allocation.
Relationship Among VMware HA, DRS, and vMotion

VMware vMotion dynamically migrates running VMs between physical hosts (ESX hosts). When a VM or an ESX host fails, VMware HA restarts the affected VMs on healthy ESX hosts. Building on vMotion and HA, VMware DRS dynamically migrates VMs to ESX hosts carrying a lighter load according to the CPU and memory usage of the ESX hosts. You can use DRS to migrate the VMs on one ESX host to different ESX hosts for load balancing.

Installation and Configuration

For information about how to install and configure VMware HA, see the VMware HA documentation on the VMware official website.

Huawei also provides VMware HA configuration guides. You can obtain the guides from the Huawei customer service center.

Log Collection

Perform the following steps to collect host logs:

Step 1 Use vSphere Client to log in to the ESX server.

Step 2 Choose System Management > System Log.

Step 3 Click Export System Logs, as shown in Figure 11-1.

Figure 11-1 Host logs
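Host log bundles can also be generated directly from the ESXi shell when vSphere Client is not available. This is a minimal sketch, assuming that local or SSH shell access to the host is enabled:

~ # vm-support (Generates a compressed diagnostic bundle; the command prints the path of the resulting .tgz file, which can then be copied off the host, for example with SCP or through the datastore browser.)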
12 Acronyms and Abbreviations

C
CDFS   CD-ROM File System
CLI    Command Line Interface

D
DRS    Distributed Resource Scheduler

F
FC     Fibre Channel
FT     Fault Tolerance

G
GOS    Guest Operating System

H
HA     High Availability
HBA    Host Bus Adapter

I
IP     Internet Protocol
ISM    Integrated Storage Manager
iSCSI  Internet Small Computer Systems Interface

L
LUN    Logical Unit Number
LV     Logical Volume

N
NIC    Network Interface Card

P
PSP    Path Selection Plug-in

R
RAID   Redundant Array of Independent Disks
RDM    Raw Device Mapping

S
SATP   Storage Array Type Plug-in

V
VM     Virtual Machine
VMFS   Virtual Machine File System

W
WWN    World Wide Name