HUAWEI SAN Storage Host Connectivity Guide for VMware




Technical White Paper HUAWEI SAN Storage Host Connectivity Guide for VMware OceanStor Storage VMware Huawei Technologies Co., Ltd. 2014-01

Copyright Huawei Technologies Co., Ltd. 2014. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd. Trademarks and Permissions and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd. All other trademarks and trade names mentioned in this document are the property of their respective holders. Notice The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information, and recommendations in this document are provided "AS IS" without warranties, guarantees or representations of any kind, either express or implied. The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied. Huawei Technologies Co., Ltd. Address: Website: Huawei Industrial Base Bantian, Longgang Shenzhen 518129 People's Republic of China http://enterprise.huawei.com i

About This Document
Overview
This document details the configuration methods and precautions for connecting Huawei SAN storage devices to VMware hosts.
Intended Audience
This document is intended for:
Huawei technical support engineers
Technical engineers of Huawei's partners
Conventions
Symbol Conventions
The symbols that may be found in this document are defined as follows:
Indicates a hazard with a high level of risk which, if not avoided, will result in death or serious injury.
Indicates a hazard with a medium or low level of risk which, if not avoided, could result in minor or moderate injury.
Indicates a potentially hazardous situation which, if not avoided, could result in equipment damage, data loss, performance degradation, or unexpected results.
Indicates a tip that may help you solve a problem or save time.
Provides additional information to emphasize or supplement important points of the main text.

General Conventions
Times New Roman: Normal paragraphs are in Times New Roman.
Boldface: Names of files, directories, folders, and users are in boldface. For example, log in as user root.
Italic: Book titles are in italics.
Courier New: Examples of information displayed on the screen are in Courier New.
Command Conventions
Boldface: The keywords of a command line are in boldface.
Italic: Command arguments are in italics.

Contents Contents About This Document...ii 1 VMware... 1 1.1 VMware Infrastructure... 1 1.2 File Systems in VMware... 3 1.3 VMware RDM... 5 1.4 VMware Cluster... 6 1.5 VMware vmotion... 6 1.6 VMware DRS... 6 1.7 VMware FT and VMware HA... 7 1.8 Specifications... 7 2 Network Planning... 10 2.1 Fibre Channel Network Diagram... 10 2.1.1 Multi-Path Direct-Connection Network... 10 2.1.2 Multi-Path Switch-based Network... 11 2.2 iscsi Network Diagram... 13 2.2.1 Multi-Path Direct-Connection Network... 13 2.2.2 Multi-Path Switch-based Network... 14 3 Preparations Before Configuration (on a Host)... 16 3.1 HBA Identification... 16 3.2 HBA Information... 17 3.2.1 Versions Earlier than ESXi 5.5... 17 3.2.2 ESXi 5.5... 18 4 Preparations Before Configuration (on a Storage System)... 25 5 Configuring Switches... 26 5.1 Fibre Channel Switch... 26 5.1.1 Querying the Switch Model and Version... 26 5.1.2 Configuring Zones... 29 5.1.3 Precautions... 32 5.2 Ethernet Switch... 32 5.2.1 Configuring VLANs... 33 5.2.2 Binding Ports... 33 iv

Contents 6 Establishing Fibre Channel Connections... 36 6.1 Checking Topology Modes... 36 6.1.1 OceanStor T Series Storage System... 36 6.1.2 OceanStor 18000 Series Enterprise Storage System... 37 6.2 Adding Initiators... 38 6.3 Establishing Connections... 38 7 Establishing iscsi Connections... 39 7.1 Host Configurations... 39 7.1.1 Configuring Service IP Addresses... 39 7.1.2 Configuring Host Initiators... 42 7.1.3 Configuring CHAP Authentication... 46 7.2 Storage System... 47 7.2.1 OceanStor T Series Storage System... 48 7.2.2 OceanStor 18000 Series Enterprise Storage System... 52 8 Mapping and Using LUNs... 54 8.1 Mapping LUNs to a Host... 54 8.1.1 OceanStor T Series Storage System... 54 8.1.2 OceanStor 18000 Series Enterprise Storage System... 55 8.2 Scanning for LUNs on a Host... 56 8.3 Using the Mapped LUNs... 56 8.3.1 Mapping Raw Devices... 56 8.3.2 Creating Datastores (File Systems)... 61 8.3.3 Mapping Virtual Disks... 65 8.3.4 Differences Between Raw Disks and Virtual Disks... 68 9 Multipathing Management... 70 9.1 Overview... 70 9.2 VMware PSA... 70 9.2.1 Overview... 70 9.2.2 VMware NMP... 71 9.2.3 VMware PSP... 71 9.3 Software Functions and Features... 71 9.4 Multipathing Selection Policy... 72 9.4.1 Policies and Differences... 72 9.4.2 PSPs in Different ESX Versions... 74 9.5 VMware SATPs... 75 9.6 Policy Configuration... 77 9.6.1 OceanStor T Series Storage System... 78 9.6.2 OceanStor 18000 Series Enterprise Storage System... 88 9.7 LUN Failure Policy... 89 9.8 Path Policy Query and Modification... 89 v

Contents 9.8.1 ESX/ESXi 4.0... 89 9.8.2 ESX/ESXi 4.1... 89 9.8.3 ESXi 5.0... 91 9.8.4 ESXi 5.1... 92 9.8.5 ESXi 5.5... 93 9.9 Differences Between iscsi Multi-Path Networks with Single and Multiple HBAs... 94 9.9.1 iscsi Multi-Path Network with a Single HBA... 94 9.9.2 iscsi Multi-Path Network with Multiple HBAs... 94 10 Common Commands... 96 11 Host High-Availability... 98 11.1 Overview... 98 11.1.1 Working Principle and Functions... 98 11.1.2 Relationship Among VMware HA, DRS, and vmotion... 99 11.2 Installation and Configuration... 99 11.3 Log Collection... 99 12 Acronyms and Abbreviations... 100 vi

Figures Figures Figure 1-1 VMware Infrastructure virtual data center... 2 Figure 1-2 Storage architecture in VMware Infrastructure... 3 Figure 1-3 VMFS architecture... 4 Figure 1-4 Structure of a VMFS volume... 5 Figure 1-5 RDM mechanism... 6 Figure 2-2 Fibre Channel multi-path direct-connection network diagram (dual-controller)... 11 Figure 2-3 Fibre Channel multi-path direct-connection network diagram (four-controller)... 11 Figure 2-4 Fibre Channel multi-path switch-based network diagram (dual-controller)... 12 Figure 2-5 Fibre Channel multi-path switch-based network diagram (four-controller)... 12 Figure 2-6 iscsi multi-path direct-connection network diagram (dual-controller)... 13 Figure 2-7 iscsi multi-path direct-connection network diagram (four-controller)... 14 Figure 2-8 iscsi multi-path switch-based network diagram (dual-controller)... 14 Figure 2-9 iscsi multi-path switch-based network diagram (four-controller)... 15 Figure 3-1 Viewing the HBA information... 16 Figure 5-1 Switch information... 27 Figure 5-2 Switch port indicator status... 29 Figure 5-3 Zone tab page... 30 Figure 5-4 Zone configuration... 30 Figure 5-5 Zone Config tab page... 31 Figure 5-6 Name Server page... 32 Figure 6-1 Fibre Channel port details... 37 Figure 6-2 Fibre Channel port details... 37 Figure 7-1 Adding VMkernel... 39 Figure 7-2 Creating a vsphere standard switch... 40 Figure 7-3 Specifying the network label... 40 Figure 7-4 Entering the iscsi service IP address... 41 vii

Figures Figure 7-5 Information summary... 41 Figure 7-6 iscsi multi-path network with dual adapters... 42 Figure 7-7 Adding storage adapters... 42 Figure 7-8 Adding iscsi initiators... 43 Figure 7-9 iscsi Software Adapter... 43 Figure 7-10 Initiator properties... 44 Figure 7-11 iscsi initiator properties... 44 Figure 7-12 Binding with a new VMkernel network adapter... 45 Figure 7-13 Initiator properties after virtual network binding... 45 Figure 7-14 Adding send target server... 46 Figure 7-15 General tab page... 46 Figure 7-16 CHAP credentials dialog box... 47 Figure 7-17 Modifying IPv4 addresses... 48 Figure 7-18 Initiator CHAP configuration... 49 Figure 7-19 CHAP Configuration dialog box... 49 Figure 7-20 Create CHAP dialog box... 50 Figure 7-21 Assigning the CHAP account to the initiator... 51 Figure 7-22 Setting CHAP status... 51 Figure 7-23 Enabling CHAP... 52 Figure 7-24 Initiator status after CHAP is enabled... 52 Figure 8-1 Scanning for the mapped LUNs... 56 Figure 8-2 Editing host settings... 57 Figure 8-3 Adding disks... 57 Figure 8-4 Selecting disks... 58 Figure 8-5 Selecting a target LUN... 58 Figure 8-6 Selecting a datastore... 59 Figure 8-7 Selecting a compatibility mode... 59 Figure 8-8 Selecting a virtual device node... 60 Figure 8-9 Confirming the information about the disk to be added... 60 Figure 8-10 Adding raw disk mappings... 61 Figure 8-11 Adding storage... 61 Figure 8-12 Selecting a storage type... 62 Figure 8-13 Select a disk/lun... 62 viii

Figures Figure 8-14 Selecting a file system version... 63 Figure 8-15 Viewing the current disk layout... 63 Figure 8-16 Entering a datastore name... 64 Figure 8-17 Specifying a capacity... 64 Figure 8-18 Confirming the disk layout... 65 Figure 8-19 Editing VM settings... 65 Figure 8-20 Adding disks... 66 Figure 8-21 Creating a new virtual disk... 66 Figure 8-22 Specifying the disk capacity... 67 Figure 8-23 Selecting a datastore... 67 Figure 8-24 Selecting a virtual device node... 68 Figure 8-25 Viewing virtual disk information... 68 Figure 8-26 Modifying the capacity of a virtual disk... 69 Figure 8-27 Modifying the capacity of a disk added using raw disk mappings... 69 Figure 9-1 VMware PSA... 70 Figure 9-2 VMkernel architecture... 72 Figure 9-3 Menu of the storage adapter... 79 Figure 9-4 Shortcut menu of a LUN... 80 Figure 9-5 Configuring the management path... 80 Figure 9-6 iscsi network with a single HBA... 94 Figure 9-7 iscsi network A with multiple HBAs... 94 Figure 9-8 Port mapping of iscsi network A with multiple HBAs... 95 Figure 9-9 iscsi network B with multiple HBAs... 95 Figure 9-10 Port mapping of iscsi network B with multiple HBAs... 95 Figure 10-1 Selecting a VMware version... 97 Figure 11-1 Host logs... 99 ix

Tables Tables Table 1-1 Major specifications of VMware... 7 Table 2-1 Networking modes... 10 Table 5-1 Mapping between switch types and names... 27 Table 5-2 Comparison of link aggregation modes... 34 Table 9-1 Path selection policies... 72 Table 9-2 Default policies for the ESX/ESXi 4.0 operating system... 78 Table 9-3 Recommended policies for the ESX/ESXi 4.0 operating system... 78 Table 9-4 Default policies for the ESX/ESXi 4.1 operating system... 81 Table 9-5 Recommended policies for the ESX/ESXi 4.1 operating system... 81 Table 9-6 Default policies for the ESXi 5.0 operating system... 82 Table 9-7 Recommended policies for the ESXi 5.0 operating system... 83 Table 9-8 Default policies for the ESXi 5.1 operating system... 85 Table 9-9 Recommended policies for the ESXi 5.1 operating system... 85 Table 9-10 Default policies for the ESXi 5.5 operating system... 86 Table 9-11 Recommended policies for the ESXi 5.5 operating system... 87 Table 9-12 Recommended policies... 88 x

1 VMware
1.1 VMware Infrastructure
Today's x86 computers are typically designed to run a single operating system or application, so most of them are underutilized. With virtualization technologies, a physical machine can host multiple virtual machines (VMs), and resources on this physical machine can be shared among multiple environments. A physical machine can host multiple VMs running different operating systems and applications, improving x86 hardware utilization.
VMware virtualization adds a thin software layer on the computer hardware or in the host operating system. This layer includes a VM monitor that allocates hardware resources dynamically and transparently, so each operating system or application can access the resources it needs whenever required. As a leading software solution for x86 virtualization, VMware enables users to manage their virtual environments effectively and easily.
Figure 1-1 shows the VMware Infrastructure virtual data center, which consists of x86 computing servers, storage networks, storage arrays, IP networks, management servers, and desktop clients.

Figure 1-1 VMware Infrastructure virtual data center
Figure 1-2 provides an example of the storage architecture in VMware Infrastructure. A Virtual Machine File System (VMFS) volume contains one or more LUNs belonging to different storage arrays. Multiple ESX servers share one VMFS volume and create virtual disks on it for VMs.

Figure 1-2 Storage architecture in VMware Infrastructure
VMware uses VMFS to centrally manage storage systems. VMFS is a shared cluster file system designed for VMs. It employs distributed locking to coordinate access to disks, ensuring that a VM is accessed by only one physical host at a time. Raw Device Mapping (RDM) acts as a proxy for raw devices on a VMFS volume.
1.2 File Systems in VMware
Features of VMFS
VMware VMFS is a high-performance cluster file system that allows multiple systems to concurrently access shared storage, laying a solid foundation for the management of VMware clusters and dynamic resources. Its main features are:
Automatic directory structure maintenance
File locking
Distributed logical volume management
Dynamic capacity expansion
Cluster file system
Journal logging
Optimized VM data storage
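The distributed volume management and dynamic capacity expansion features above are implemented with extents. On ESXi 5.x hosts, the extents backing each VMFS datastore can be listed from the ESXi shell; a minimal sketch (the output depends on the datastores present on the host):
~ # esxcli storage vmfs extent list
Each row shows a datastore name, its VMFS UUID, and the device and partition that provide the extent.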

Advantages of VMFS
Improved storage utilization
Simplified storage management
ESX server clusters with enhanced performance and reliability
Architecture of VMFS
In the VMFS architecture shown in Figure 1-3, a LUN is formatted with a VMFS file system whose storage space is shared by three ESX servers, each hosting two VMs. Each VM has a Virtual Machine Disk (VMDK) file that is stored in a directory (named after the VM) automatically created by VMFS. VMFS locks each VMDK file to prevent it from being accessed by two VMs at the same time.
Figure 1-3 VMFS architecture
Structure of a VMFS Volume
Figure 1-4 shows the structure of a VMFS volume. A VMFS volume consists of one or more partitions arranged sequentially; the following partitions can be used only after the first partition is full. The identity information about the VMFS volume is recorded in the first partition.

Figure 1-4 Structure of a VMFS volume
VMFS divides each extent into blocks, each of which is further divided into smaller blocks. This block-based management is well suited to VMs. Files used by VMs fall into large files (such as VMDK files, snapshots, and memory swap files) and small files (such as log files, configuration files, and VM BIOS files). Large blocks are allocated to large files and small blocks to small files. In this way, storage space is used efficiently and fragmentation in the file system is minimized, improving the storage performance of VMs.
The VMFS-3 file system supports four data block sizes: 1 MB, 2 MB, 4 MB, and 8 MB. The maximum file and volume sizes supported by a VMFS-3 file system vary with its block size.
1.3 VMware RDM
VMware RDM enables VMs to access storage directly. As shown in Figure 1-5, an RDM disk is represented by an address mapping file on the VMFS volume. This mapping file can be considered a symbolic link that redirects a VM's access to the underlying LUN.

Figure 1-5 RDM mechanism
RDM provides two compatibility modes, both of which support vMotion, Distributed Resource Scheduler (DRS), and High Availability (HA).
Virtual compatibility: fully simulates a VMDK file and supports snapshots.
Physical compatibility: accesses the SCSI device directly and does not support snapshots.
RDMs are applicable in the following scenarios:
Physical to Virtual (P2V): migrating services from a physical machine to a virtual machine.
Virtual to Physical (V2P): migrating services from a virtual machine to a physical machine.
Clustering physical machines with virtual machines.
1.4 VMware Cluster
A VMware cluster consists of a group of ESX servers that jointly manage VMs, dynamically assign hardware resources, and automatically place VMs. With VMware Cluster, VM loads can be dynamically moved among ESX hosts. VMware Cluster is the foundation for Fault Tolerance (FT), High Availability (HA), and Distributed Resource Scheduler (DRS).
1.5 VMware vMotion
VMware vMotion can migrate running VMs, which facilitates the maintenance of physical machines. VMs can be migrated automatically within a VMware cluster, and this free movement of VMs balances loads among physical machines, improving application performance. VMware vMotion has demanding requirements on the CPU compatibility of physical hosts: VMs can only be migrated among physical machines whose CPUs belong to the same series.
1.6 VMware DRS
VMware DRS constantly monitors the usage of resource pools in a VMware cluster and intelligently allocates resources to VMs based on service requirements. Deploying VMs onto

a small number of physical machines may result in unexpected resource bottlenecks, because the resources required by the VMs may exceed those available on the physical machines. VMware DRS offers an automated mechanism that constantly balances capacity and migrates VMs onto physical machines with sufficient resources, so that each VM can obtain resources in a timely manner regardless of where it runs.
1.7 VMware FT and VMware HA
VMware FT and VMware HA provide effective failover for physical hardware and VM operating systems. They can:
Monitor VM status to detect faults on physical hardware and operating systems.
Automatically restart VMs on another physical machine when the physical machine where the VMs reside becomes faulty.
Restart VMs to protect applications upon operating system faults.
1.8 Specifications
VMware specifications vary with VMware versions. Table 1-1 lists the major specifications.
Table 1-1 Major specifications of VMware (maximum values for vSphere 4.0 / 4.1 / 5.0 / 5.1 / 5.5, in that order)
iSCSI
Physical LUNs per server: 256 (1) | 256 | 256 | 256 | 256
Paths to a LUN: 8 | 8 | 8 | 8 | 8
Total paths on a server: 1024 | 1024 | 1024 | 1024 | 1024
Fibre Channel
LUNs per host: 256 (1) | 256 | 256 | 256 | 256
LUN size: 512 B to 2 TB | 512 B to 2 TB | --- | 64 TB | 64 TB
LUN ID: 255 | 255 | 255 | 255 | 255
Number of paths to a LUN: 16 | 32 | 32 | 32 | 32
Number of total paths on a server: 1024 | 1024 | 1024 | 1024 | 1024
Number of HBAs of any type: 8 | 8 | 8 | 8 | 8
HBA ports: 16 | 16 | 16 | 16 | 16
Targets per HBA: 256 | 256 | 256 | 256 | 256

FCoE
Software FCoE adapters: --- | --- | 4 | 4 | 4
NAS
NFS mounts per host: --- | 64 | 256 | 256 | 256
NFS
Default NFS datastores: 8 | --- | --- | --- | ---
NFS datastores: 64 (requires changes to advanced settings) | --- | --- | --- | ---
VMFS
Raw device mapping (RDM) size: 512 B to 2 TB | 512 B to 2 TB | --- | --- | ---
Volume size: 16 KB to 64 TB | 64 TB | --- | --- | 64 TB
Volumes per host: 256 | 256 | 256 | 256 | 256
VMFS-2
Files per volume: 256 + (64 x additional extents) | --- | --- | --- | ---
Block size: 256 MB | --- | --- | --- | ---
VMFS-3
VMFS-3 volumes configured per host: 256 | --- | --- | --- | ---
Files per volume: ~30,720 (2) | ~30,720 (2) | ~30,720 (2) | ~30,720 (2) | ~30,720 (2)
Block size: 8 MB | 8 MB | 8 MB | 8 MB | 8 MB
Volume size: --- | --- | 64 TB (3) | 64 TB (3) | ---
VMFS-5
Volume size: --- | --- | 64 TB (4) | 64 TB (4) | ---
Files per volume: --- | --- | ~130,690 | ~130,690 | ~130,690
Notes:
1. Local disks are included.
2. The file quantity is sufficient to support the maximum number of VMs.
3. If the block size supported by the file system is 1 MB, the maximum volume size is 50 TB.

4. The volume size is also subject to the RAID controller or adapter driver.
The table lists only some of the specifications. For more information, see:
VMware vSphere Configuration Maximums (4.0)
VMware vSphere Configuration Maximums (4.1)
VMware vSphere Configuration Maximums (5.0)
VMware vSphere Configuration Maximums (5.1)
VMware vSphere Configuration Maximums (5.5)
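To check which VMFS version and block size a particular datastore actually uses against these maximums, the datastore can be queried from the ESXi shell; a minimal sketch (datastore1 is a hypothetical datastore name):
~ # vmkfstools -Ph /vmfs/volumes/datastore1
The output reports the file system version, block size, capacity, and free space of the datastore.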

2 Network Planning
VMware hosts and storage systems can be networked based on different criteria. Table 2-1 describes the typical networking modes.
Table 2-1 Networking modes
Interface module type: Fibre Channel network / iSCSI network
Whether switches are used: direct-connection network (no switches are used) / switch-based network (switches are used)
Whether multiple paths exist: single-path network / multi-path network
The Fibre Channel network is the most widely used network for VMware operating systems. To ensure service data security, both the direct-connection and switch-based networks described here are multi-path networks. The following sections detail the commonly used Fibre Channel and iSCSI networks.
2.1 Fibre Channel Network Diagram
2.1.1 Multi-Path Direct-Connection Network
Dual-Controller
Huawei provides dual-controller and multi-controller storage systems, whose network diagrams differ. The following describes the network diagrams of dual-controller and multi-controller storage systems respectively.
The following uses the HUAWEI OceanStor S5500T as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path direct-connection network, as shown in Figure 2-1.

Figure 2-1 Fibre Channel multi-path direct-connection network diagram (dual-controller)
On this network, both controllers of the storage system are connected to the host's HBAs through optical fibers.
Multi-Controller
The following uses the HUAWEI OceanStor 18800 (four-controller) as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path direct-connection network, as shown in Figure 2-2.
Figure 2-2 Fibre Channel multi-path direct-connection network diagram (four-controller)
On this network, the four controllers of the storage system are connected to the host's HBAs through optical fibers.
2.1.2 Multi-Path Switch-based Network
Dual-Controller
Huawei provides dual-controller and multi-controller storage systems, whose network diagrams differ. The following describes the network diagrams of dual-controller and multi-controller storage systems respectively.
The following uses the HUAWEI OceanStor S5500T as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path switch-based network, as shown in Figure 2-3.

Figure 2-3 Fibre Channel multi-path switch-based network diagram (dual-controller)
On this network, the storage system is connected to the host via two switches. Both controllers of the storage system are connected to the switches through optical fibers, and both switches are connected to the host through optical fibers. To ensure the connectivity between the host and the storage system, each zone contains only one storage port and its corresponding host port.
Multi-Controller
The following uses the HUAWEI OceanStor 18800 (four-controller) as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path switch-based network, as shown in Figure 2-4.
Figure 2-4 Fibre Channel multi-path switch-based network diagram (four-controller)

On this network, the storage system is connected to the host via two switches. All controllers of the storage system are connected to the switches through optical fibers, and both switches are connected to the host through optical fibers. To ensure the connectivity between the host and the storage system, each zone contains only one storage port and its corresponding host port.
2.2 iSCSI Network Diagram
2.2.1 Multi-Path Direct-Connection Network
Dual-Controller
Huawei provides dual-controller and multi-controller storage systems, whose network diagrams differ. The following describes the network diagrams of dual-controller and multi-controller storage systems respectively.
The following uses the HUAWEI OceanStor S5500T as an example to explain how to connect a VMware host to a storage system over an iSCSI multi-path direct-connection network, as shown in Figure 2-5.
Figure 2-5 iSCSI multi-path direct-connection network diagram (dual-controller)
On this network, both controllers of the storage system are connected to the host's network adapter through Ethernet cables.
Multi-Controller
The following uses the HUAWEI OceanStor 18800 (four-controller) as an example to explain how to connect a VMware host to a storage system over an iSCSI multi-path direct-connection network, as shown in Figure 2-6.

Figure 2-6 iSCSI multi-path direct-connection network diagram (four-controller)
On this network, the four controllers of the storage system are connected to the host's network adapter through Ethernet cables.
2.2.2 Multi-Path Switch-based Network
Dual-Controller
Huawei provides dual-controller and multi-controller storage systems, whose network diagrams differ. The following describes the network diagrams of dual-controller and multi-controller storage systems respectively.
The following uses the HUAWEI OceanStor S5500T as an example to explain how to connect a VMware host to a storage system over an iSCSI multi-path switch-based network, as shown in Figure 2-7.
Figure 2-7 iSCSI multi-path switch-based network diagram (dual-controller)

On this network, the storage system is connected to the host via two Ethernet switches. Both controllers of the storage system are connected to the switches through Ethernet cables, and both switches are connected to the host's network adapter through Ethernet cables. To ensure the connectivity between the host and the storage system, each VLAN contains only one storage port and its corresponding host port.
Multi-Controller
The following uses the HUAWEI OceanStor 18800 (four-controller) as an example to explain how to connect a VMware host to a storage system over an iSCSI multi-path switch-based network, as shown in Figure 2-8.
Figure 2-8 iSCSI multi-path switch-based network diagram (four-controller)
On this network, the storage system is connected to the host via two Ethernet switches. All controllers of the storage system are connected to the switches through Ethernet cables, and both switches are connected to the host's network adapter through Ethernet cables. To ensure the connectivity between the host and the storage system, each VLAN contains only one storage port and its corresponding host port.
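Once an iSCSI network has been cabled as shown above, basic reachability from the host to a storage iSCSI port can be checked from the ESXi shell with vmkping, which sends ICMP requests through a VMkernel port; a minimal sketch (192.168.1.100 is a hypothetical storage iSCSI port IP address):
~ # vmkping 192.168.1.100
A successful reply confirms that the VMkernel network and the VLAN configuration allow the host to reach that storage port.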

3 Preparations Before Configuration (on a Host)
Before connecting a host to a storage system, make sure that the host HBAs are identified and working correctly. You also need to obtain the WWNs of the HBA ports, which will be used during subsequent configuration on the storage system. This chapter details how to check the HBA status and query the WWNs of HBA ports.
3.1 HBA Identification
After an HBA is installed on a host, view its information on the host. Go to the host's configuration management page and choose Storage Adapters in the navigation tree. The storage adapters on the host are displayed in the function pane, as shown in Figure 3-1.
Figure 3-1 Viewing the HBA information
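If the graphical client is unavailable, the HBAs and their WWN-based identifiers can also be listed from the ESX/ESXi command line; a minimal sketch (the adapter names and WWNs shown depend on the host):
~ # esxcfg-scsidevs -a
Fibre Channel adapters appear with UIDs of the form fc.<node WWN>:<port WWN>; the port WWN (WWPN) is the value used later for switch zoning and for adding initiators on the storage system.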

3 Preparations Before Configuration (on a Host) 3.2 HBA Information After a host identifies a newly installed HBA, you can view properties of the HBA on the host. The method of querying HBA information varies with operating system versions. The following details how to query HBA information on ESXi 5.5 and versions earlier than ESXi 5.5. 3.2.1 Versions Earlier than ESXi 5.5 The command for viewing the HBA properties varies according to the HBA type. The details are as follows: QLogic HBA The command syntax is as follows: cat /proc/scsi/qla2xxx/n The following is an example: ~ # cat /proc/scsi/qla2xxx/4 QLogic PCI to Fibre Channel Host Adapter for QMI2572: FC Firmware version 5.03.15 (d5), Driver version 901.k1.1-14vmw Host Device Name vmhba1 BIOS version 2.09 FCODE version 3.14 EFI version 2.27 Flash FW version 5.03.09 ISP: ISP2532 Request Queue = 0x7810000, Response Queue = 0x7851000 Request Queue count = 2048, Response Queue count = 512 Number of response queues for multi-queue operation: 0 Total number of interrupts = 11346570 Device queue depth = 0x40 Number of free request entries = 675 Total number of outstanding commands: 0 Number of mailbox timeouts = 0 Number of ISP aborts = 0 Number of loop resyncs = 1 Host adapter:loop State = <READY>, flags = 0x1a268 Link speed = <4 Gbps> Dpc flags = 0x0 Link down Timeout = 030 Port down retry = 005 Login retry count = 008 Execution throttle = 2048 ZIO mode = 0x6, ZIO timer = 1 Commands retried with dropped frame(s) = 0 Product ID = 4953 5020 2532 0002 NPIV Supported : Yes Max Virtual Ports = 254 SCSI Device Information: scsi-qla0-adapter-node=20000024ff32f612:010300:0; scsi-qla0-adapter-port=21000024ff32f612:010300:0; 17

3 Preparations Before Configuration (on a Host) FC Target-Port List: scsi-qla0-target-0=20090022a10bc8ee:010f00:81:online; The previous output provides information such as the HBA driver version, topology, WWN, and negotiated rate. Emulex HBA The command syntax is as follows: cat /proc/scsi/lpfcxxx/n The following is an example: ~ # cat /proc/scsi/lpfc820/8 Emulex LightPulse FC SCSI 8.2.2.1-18vmw IBM 42D0494 8Gb 2-Port PCIe FC HBA for System x on PCI bus 0000:81 device 01 irq 65 port 1 BoardNum: 1 ESX Adapter: vmhba4 Firmware Version: 1.11A5 (US1.11A5) Portname: 10:00:00:00:c9:d4:82:83 Nodename: 20:00:00:00:c9:d4:82:83 SLI Rev: 3 0 MQ: Unavailable NPIV Supported: VPIs max 255 VPIs used 1 RPIs max 4096 RPIs used 9 IOCBs inuse 0 IOCB max 8 txq cnt 0 txq max 0 txcmplq Vport List: ESX Adapter: vmhba37 Vport DID 0x30101, vpi 1, state 0x20 Portname: 20:aa:00:0c:29:00:07:1a Nodename: 20:aa:00:0c:29:00:06:1a Link Up - Ready: PortID 0x30100 Fabric Current speed 8G Port Discovered Nodes: Count 1 t0000 DID 030500 WWPN 20:0a:30:30:37:30:30:37 WWNN 21:00:30:30:37:30:30:37 qdepth 8192 max 31 active 1 busy 0 ~ # The previous output provides information such as HBA model and driver. Brocade HBA cat /proc/scsi/bfaxxx/n 3.2.2 ESXi 5.5 Since ESXi 5.5, the /proc/scsi/ directory contains no content. Run the following commands to query HBA information: 18

3 Preparations Before Configuration (on a Host) ~ # esxcli storage core adapter list HBA Name Driver Link State UID Description -------- ------------ ---------- ------------------------------------ -------------------------------------------------------------------------- vmhba0 ahci link-n/a sata.vmhba0 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller vmhba1 megaraid_sas link-n/a unknown.vmhba1 (0:3:0.0) LSI / Symbios Logic MegaRAID SAS Fusion Controller vmhba2 rste link-n/a pscsi.vmhba2 (0:4:0.0) Intel Corporation Patsburg 4-Port SATA Storage Control Unit vmhba3 qlnativefc link-up fc.50014380062d222d:50014380062d222c (0:129:0.0) QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI Express HBA vmhba4 qlnativefc link-up fc.50014380062d222f:50014380062d222e (0:129:0.1) QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI Express HBA vmhba32 ahci link-n/a sata.vmhba32 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller vmhba33 ahci link-n/a sata.vmhba33 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller vmhba34 ahci link-n/a sata.vmhba34 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller vmhba35 ahci link-n/a sata.vmhba35 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller vmhba36 ahci link-n/a sata.vmhba36 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller ~ # ~ # ~ # esxcfg-module -i qlnativefc esxcfg-module module information input file: /usr/lib/vmware/vmkmod/qlnativefc License: GPLv2 Version: 1.0.12.0-1vmw.550.0.0.1331820 Name-space: Required name-spaces: com.vmware.vmkapi@v2_2_0_0 Parameters: ql2xallocfwdump: int Option to enable allocation of memory for a firmware dump during HBA initialization. Memory allocation requirements vary by ISP type. Default is 1 - allocate memory. ql2xattemptdumponpanic: int Attempt fw dump for each function on PSOD Default is 0 - Don't attempt fw dump. ql2xbypass_log_throttle: int Option to bypass log throttling.default is 0 - Throttling enabled. 1 - Log all errors. ql2xcmdtimeout: int Timeout value in seconds for scsi command, default is 20 ql2xcmdtimermin: int Minimum command timeout value. Default is 30 seconds. ql2xdevdiscgoldfw: int Option to enable device discovery with golden firmware Default is 0 - no discovery. 1 - discover device. ql2xdisablenpiv: int Option to disable/enable NPIV feature globally. 0 - NPIV enabled. ql2xenablemsi2422: int 1 - NPIV disabled. Default is Enables MSI interrupt scheme on 2422sDefault is 0 - disable MSI-X/MSI. 1 - enable MSI-X/MSI. 19

3 Preparations Before Configuration (on a Host) ql2xenablemsi24xx: int Enables MSIx/MSI interrupt scheme on 24xx cardsdefault is 1 - enable MSI-X/MSI. 0 - disable MSI-X/MSI. ql2xenablemsix: int Set to enable MSI or MSI-X interrupt mechanism. 0 = enable traditional pin-based interrupt mechanism. 1 = enable MSI-X interrupt mechanism (Default). 2 = enable MSI interrupt mechanism. ql2xexecution_throttle: int IOCB exchange count for HBA.Default is 0, set intended value to override Firmware defaults. ql2xextended_error_logging: int Option to enable extended error logging, Default is 0 - no logging. 1 - log errors. ql2xfdmienable: int Enables FDMI registratons Default is 1 - perfom FDMI. 0 - no FDMI. ql2xfwloadbin: int Option to specify location from which to load ISP firmware: 2 -- load firmware via the request_firmware() (hotplug) interface. 1 -- load firmware from flash. 0 -- use default semantics. ql2xiidmaenable: int Enables iidma settings Default is 1 - perform iidma. 0 - no iidma. ql2xintrdelaytimer: int ZIO: Waiting time for Firmware before it generates an interrupt to the host to notify completion of request. ql2xioctltimeout: int IOCTL timeout value in seconds for pass-thur commands. Default is 66 seconds. ql2xioctltimertest: int IOCTL timer test enable - set to enable ioctlcommand timeout value to trigger before fw cmdtimeout value. Default is disabled ql2xloginretrycount: int Specify an alternate value for the NVRAM login retry count. ql2xlogintimeout: int Login timeout value in seconds. ql2xmaxlun: int Defines the maximum LUNs to register with the SCSI midlayer. Default is 256. Maximum allowed is 65535. ql2xmaxqdepth: int Maximum queue depth to report for target devices. ql2xmaxsgs: int Maximum scatter/gather entries per request,default is the Max the OS Supports. ql2xmqcpuaffinity: int Enables CPU affinity settings for the driver Default is 0 for no affinity of request and response IO. Set it to 1 to turn on the cpu affinity. ql2xmqqos: int Enables MQ settings Default is 1. Set it to enable queues in MQ QoS mode. ql2xoperationmode: int Option to disable ZIO mode for ISP24XX: Default is 1, set 0 to disable ql2xplogiabsentdevice: int Option to enable PLOGI to devices that are not present after a Fabric scan. This is needed for several broken switches. Default is 0 - no PLOGI. 1 - perfom PLOGI. ql2xqfullrampup: int Number of seconds to wait to begin to ramp-up the queue depth for a device after a queue-full condition has been detected. Default is 120 seconds. ql2xqfulltracking: int 20

3 Preparations Before Configuration (on a Host) Controls whether the driver tracks queue full status returns and dynamically adjusts a scsi device's queue depth. Default is 1, perform tracking. Set to 0 to disable dynamic tracking and adjustment of queue depth. ql2xusedefmaxrdreq: int Default is 0 - adjust PCIe Maximum Read Request Size. 1 - use system default. qlport_down_retry: int Maximum number of command retries to a port that returns a PORT-DOWN status. ~ # The previous output provides information such as HBA model, WWN, and driver. You can run the following command to obtain more HBA details: # /usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -a Listing all system keys: Key Value Instance: QLNATIVEFC/qlogic Listing keys: Name: 0 Type: string value: QLogic PCI to Fibre Channel Host Adapter for HPAJ764A: FC Firmware version 5.09.00 (90d5), Driver version 1.0.12.0 Host Device Name vmhba3 BIOS version 2.12 FCODE version 2.03 EFI version 2.05 Flash FW version 4.04.04 ISP: ISP2532, Serial# MY5001219T MSI-X enabled Request Queue = 0x41094e3b6000, Response Queue = 0x41094e3d7000 Request Queue count = 2048, Response Queue count = 512 Number of response queues for multi-queue operation: 2 CPU Affinity mode enabled Total number of MSI-X interrupts on vector 0 (handler = ff40) = 371 Total number of MSI-X interrupts on vector 1 (handler = ff41) = 29 Total number of MSI-X interrupts on vector 2 (handler = ff42) = 2173 Total number of MSI-X interrupts on vector 3 (handler = ff43) = 6916 Device queue depth = 0x40 Number of free request entries = 238 Total number of outstanding commands: 0 Number of mailbox timeouts = 0 Number of ISP aborts = 0 Number of loop resyncs = 14 Host adapter:loop State = [DEAD], flags = 0x205a260 Link speed = [Unknown] Dpc flags = 0x0 Link down Timeout = 008 Port down retry = 010 Login retry count = 010 Execution throttle = 2048 ZIO mode = 0x6, ZIO timer = 1 Commands retried with dropped frame(s) = 0 21

3 Preparations Before Configuration (on a Host) Product ID = 4953 5020 2532 0002 NPIV Supported : Yes Max Virtual Ports = 254 Number of Virtual Ports in Use = 1 SCSI Device Information: scsi-qla0-adapter-node=50014380062d222d:000000:0; scsi-qla0-adapter-port=50014380062d222c:000000:0; FC Target-Port List: scsi-qla0-target-0=200a303037303037:030900:1000:[offline]; scsi-qla0-target-1=200b303037303037:030900:0:[offline]; Virtual Port Information: Virtual Port WWNN:WWPN:ID =2636000c29000128:2636000c29000328:000000; Virtual Port 1:VP State = [FAILED], Vp Flags = 0x0 Virtual Port 1:Request-Q ID = [2] FC Port Information for Virtual Port 1: scsi-qla3-port-0=2100303037303037:200a303037303037:030900:1000; scsi-qla3-port-1=2100303037303037:200b303037303037:030900:0; Name: 1 Type: string value: QLogic PCI to Fibre Channel Host Adapter for HPAJ764A: FC Firmware version 5.09.00 (90d5), Driver version 1.0.12.0 Host Device Name vmhba4 BIOS version 2.12 FCODE version 2.03 EFI version 2.05 Flash FW version 4.04.04 ISP: ISP2532, Serial# MY5001219T MSI-X enabled Request Queue = 0x41094e42b000, Response Queue = 0x41094e44c000 Request Queue count = 2048, Response Queue count = 512 Number of response queues for multi-queue operation: 2 CPU Affinity mode enabled Total number of MSI-X interrupts on vector 0 (handler = ff44) = 384 Total number of MSI-X interrupts on vector 1 (handler = ff45) = 39 Total number of MSI-X interrupts on vector 2 (handler = ff46) = 144185 Total number of MSI-X interrupts on vector 3 (handler = ff47) = 11278 Device queue depth = 0x40 Number of free request entries = 546 Total number of outstanding commands: 0 Number of mailbox timeouts = 0 Number of ISP aborts = 0 Number of loop resyncs = 14 Host adapter:loop State = [DEAD], flags = 0x204a260 Link speed = [Unknown] Dpc flags = 0x0 Link down Timeout = 008 22

3 Preparations Before Configuration (on a Host) Port down retry = 010 Login retry count = 010 Execution throttle = 2048 ZIO mode = 0x6, ZIO timer = 1 Commands retried with dropped frame(s) = 0 Product ID = 4953 5020 2532 0002 NPIV Supported : Yes Max Virtual Ports = 254 Number of Virtual Ports in Use = 1 SCSI Device Information: scsi-qla1-adapter-node=50014380062d222f:000000:0; scsi-qla1-adapter-port=50014380062d222e:000000:0; FC Target-Port List: scsi-qla1-target-0=2018303037303037:030a00:1000:[offline]; scsi-qla1-target-1=2019303037303037:030a00:0:[offline]; Virtual Port Information: Virtual Port WWNN:WWPN:ID =2636000c29000128:2636000c29000228:000000; Virtual Port 1:VP State = [FAILED], Vp Flags = 0x0 Virtual Port 1:Request-Q ID = [2] FC Port Information for Virtual Port 1: scsi-qla2-port-0=2100303037303037:2018303037303037:030a00:1000; scsi-qla2-port-1=2100303037303037:2019303037303037:030a00:0; Name: 2 Type: string value: Instance not found on this system. Name: 15 Type: string value: Instance not found on this system. Name: DRIVERINFO Type: string value: Driver version 1.0.12.0 Module Parameters ql2xlogintimeout = 20 qlport_down_retry = 10 ql2xplogiabsentdevice = 0 ql2xloginretrycount = 0 ql2xallocfwdump = 1 ql2xioctltimeout = 66 ql2xioctltimertest = 0 ql2xextended_error_logging = 0 ql2xdevdiscgoldfw = 0 ql2xattemptdumponpanic= 0 ql2xfdmienable = 1 23

3 Preparations Before Configuration (on a Host) ql2xmaxqdepth = 64 ql2xqfulltracking = 1 ql2xqfullrampup = 120 ql2xiidmaenable = 1 ql2xusedefmaxrdreq = 0 ql2xenablemsix = 1 ql2xenablemsi24xx = 1 ql2xenablemsi2422 = 0 ql2xoperationmode = 1 ql2xintrdelaytimer = 1 ql2xcmdtimeout = 20 ql2xexecution_throttle = 0 ql2xmaxsgs = 0 ql2xmaxlun = 256 ql2xmqqos = 1 ql2xmqcpuaffinity = 1 ql2xfwloadbin = 0 ql2xbypass_log_throttle = 0 ~ # ~ # The previous output provides more detailed HBA information. For more information, visit: http://kb.vmware.com/selfservice/microsites/search.do?language=en_us&cmd=displaykc&e xternalid=1031534 For details about how to modify the HBA queue depth, visit: http://kb.vmware.com/selfservice/microsites/search.do?language=en_us&cmd=displaykc&e xternalid=1267 24

4 Preparations Before Configuration (on a Storage System)
Make sure that RAID groups, LUNs, and hosts are correctly created on the storage systems. These configurations are common and therefore not detailed here.

5 Configuring Switches
VMware hosts and storage systems can be connected over a Fibre Channel switch-based network or an iSCSI switch-based network. A Fibre Channel switch-based network uses Fibre Channel switches, and an iSCSI network uses Ethernet switches. This chapter describes how to configure a Fibre Channel switch and an Ethernet switch respectively.
5.1 Fibre Channel Switch
The commonly used Fibre Channel switches are mainly from Brocade, Cisco, and QLogic. The following uses a Brocade switch as an example to explain how to configure switches.
5.1.1 Querying the Switch Model and Version
Perform the following steps to query the switch model and version:
Step 1 Log in to the Brocade switch from a web browser.
In the browser, enter the IP address of the Brocade switch. The Web Tools login dialog box is displayed. Enter the account and password. The default account and password are admin and password. The switch management page is displayed.
CAUTION: Web Tools works correctly only when Java is installed on the host. Java 1.6 or later is recommended.
Step 2 View the switch information.
On the switch management page, click Switch Information. The switch information is displayed, as shown in Figure 5-1.

Figure 5-1 Switch information
Note the following parameters:
Fabric OS version: indicates the switch version information. The interoperability between switches and storage systems varies with the switch version; only verified switch versions can interconnect correctly with storage systems.
Type: a decimal number consisting of an integer part and a fractional part. The integer indicates the switch model and the fraction indicates the switch template version. You only need to pay attention to the switch model. Table 5-1 describes the mapping between switch types and switch names.
Table 5-1 Mapping between switch types and names
1: Brocade 1000 Switch
2, 6: Brocade 2800 Switch
3: Brocade 2100, 2400 Switches
4: Brocade 20x0, 2010, 2040, 2050 Switches
5: Brocade 22x0, 2210, 2240, 2250 Switches
7: Brocade 2000 Switch
9: Brocade 3800 Switch
10: Brocade 12000 Director
12: Brocade 3900 Switch
16: Brocade 3200 Switch
17: Brocade 3800VL
18: Brocade 3000 Switch
21: Brocade 24000 Director
22: Brocade 3016 Switch
26: Brocade 3850 Switch
27: Brocade 3250 Switch
29: Brocade 4012 Embedded Switch
32: Brocade 4100 Switch
33: Brocade 3014 Switch
34: Brocade 200E Switch
37: Brocade 4020 Embedded Switch
38: Brocade 7420 SAN Router
40: Fibre Channel Routing (FCR) Front Domain
41: Fibre Channel Routing (FCR) Xlate Domain
42: Brocade 48000 Director
43: Brocade 4024 Embedded Switch
44: Brocade 4900 Switch
45: Brocade 4016 Embedded Switch
46: Brocade 7500 Switch
51: Brocade 4018 Embedded Switch
55.2: Brocade 7600 Switch
58: Brocade 5000 Switch
61: Brocade 4424 Embedded Switch
62: Brocade DCX Backbone
64: Brocade 5300 Switch
66: Brocade 5100 Switch
67: Brocade Encryption Switch
69: Brocade 5410 Blade
70: Brocade 5410 Embedded Switch
71: Brocade 300 Switch
72: Brocade 5480 Embedded Switch
73: Brocade 5470 Embedded Switch
75: Brocade M5424 Embedded Switch
76: Brocade 8000 Switch
77: Brocade DCX-4S Backbone
83: Brocade 7800 Extension Switch
86: Brocade 5450 Embedded Switch
87: Brocade 5460 Embedded Switch
90: Brocade 8470 Embedded Switch
92: Brocade VA-40FC Switch
95: Brocade VDX 6720-24 Data Center Switch
96: Brocade VDX 6730-32 Data Center Switch
97: Brocade VDX 6720-60 Data Center Switch
98: Brocade VDX 6730-76 Data Center Switch
108: Dell M8428-k FCoE Embedded Switch
109: Brocade 6510 Switch
116: Brocade VDX 6710 Data Center Switch
117: Brocade 6547 Embedded Switch
118: Brocade 6505 Switch
120: Brocade DCX 8510-8 Backbone
121: Brocade DCX 8510-4 Backbone
Ethernet IPv4: indicates the switch IP address.
Effective Configuration: indicates the currently effective configuration. This parameter is important and is related to zone configurations. In this example, the currently effective configuration is ss.
----End
5.1.2 Configuring Zones
Zone configuration is important for Fibre Channel switches. Perform the following steps to configure switch zones:
Step 1 Log in to the Brocade switch from a web browser.
This step is the same as that in section 5.1.1 "Querying the Switch Model and Version."
Step 2 Check the switch port status.
Normally, the switch port indicators are steady green, as shown in Figure 5-2.

5 Configuring Switches Step 4 Check whether the switch identifies hosts and storage systems. On the Zone Admin page, click the Zone tab. In Ports&Attached Devices, check whether all related ports are identified, as shown in Figure 5-3. Figure 5-3 Zone tab page The preceding figure shows that ports 1,8 and 1,9 in use are correctly identified by the switch. Step 5 Create a zone. On the Zone tab page, click New Zone to create a new zone and name it zone_8_9. Select ports 1,8 and 1,9 and click Add Member to add them to the new zone, as shown in Figure 5-4. Figure 5-4 Zone configuration 30

CAUTION: To keep traffic isolated, ensure that each zone contains only one initiator and one target.
Step 6 Add the new zone to the configuration file and activate it.
On the Zone Admin page, click the Zone Config tab. In the Name drop-down list, choose the currently effective configuration ss. In Member Selection List, select zone zone_8_9 and click Add Member to add it to the configuration file. Click Save Config to save the configuration and then click Enable Config to make the configuration effective. Figure 5-5 shows the Zone Config tab page.
Figure 5-5 Zone Config tab page
Step 7 Verify that the configuration takes effect.
In the navigation tree of Web Tools, choose Task > Monitor > Name Server to go to the Name Server page. You can also choose Monitor > Name Server in the navigation bar. Figure 5-6 shows the Name Server page.

Figure 5-6 Name Server page
The preceding figure shows that ports 8 and 9 are members of zone_8_9, which is now effective. An effective zone is marked with an asterisk (*).
----End
5.1.3 Precautions
Note the following when connecting a Brocade switch to a storage system at a rate of 8 Gbit/s:
The topology mode of the storage system must be set to switch.
The fill word of the ports through which the switch is connected to the storage system must be set to 0. To configure this parameter, run the portcfgfillword <port number> 0 command on the switch.
Also note that when the switch is connected to an HP VC 8Gb 20-port FC module or an HP VC FlexFabric 10Gb/24-port module, the switch configuration must be changed. For details, visit:
https://h20566.www2.hp.com/portal/site/hpsc/template.page/public/psi/troubleshootdisplay/?javax.portlet.prp_efb5c0793523e51970c8fa22b053ce01=wsrp-navigationalstate%3DdocId%3Demr_na-c02619780%7CdocLocale%3Dzh_CN&lang=en&javax.portlet.begcachetok=com.vignette.cachetoken&sp4ts.oid=3984629&javax.portlet.endcachetok=com.vignette.cachetoken&javax.portlet.tpst=efb5c0793523e51970c8fa22b053ce01&hpappid=sp4ts&cc=us&ac.admitted=1337927146324.876444892.199480143
5.2 Ethernet Switch
This section describes how to configure Ethernet switches, including configuring VLANs and binding ports.

5 Configuring Switches 5.2.1 Configuring VLANs On an Ethernet network to which many hosts are connected, a large number of broadcast packets are generated during the host communication. Broadcast packets sent from one host will be received by all other hosts on the network, consuming more bandwidth. Moreover, all hosts on the network can access each other, resulting data security risks. To save bandwidth and prevent security risks, hosts on an Ethernet network are divided into multiple logical groups. Each logical group is a VLAN. The following uses HUAWEI Quidway 2700 Ethernet switch as an example to explain how to configure VLANs. In the following example, two VLANs (VLAN 1000 and VLAN 2000) are created. VLAN 1000 contains ports GE 1/0/1 to 1/0/16. VLAN 2000 contains ports GE 1/0/20 to 1/0/24. Step 1 Go to the system view. <Quidway>system-view System View: return to User View with Ctrl+Z. Step 2 Create VLAN 1000 and add ports to it. [Quidway]VLAN 1000 [Quidway-vlan1000]port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/16 Step 3 Configure the IP address of VLAN 1000. [Quidway-vlan1000]interface VLAN 1000 [Quidway-Vlan-interface1000]ip address 1.0.0.1 255.255.255.0 Step 4 Create VLAN 2000, add ports, and configure the IP address. [Quidway]VLAN 2000 [Quidway-vlan2000]port GigabitEthernet 1/0/20 to GigabitEthernet 1/0/24 [Quidway-vlan2000]interface VLAN 2000 [Quidway-Vlan-interface2000]ip address 2.0.0.1 255.255.255.0 ----End 5.2.2 Binding Ports When storage systems and hosts are connected in point-to-point mode, existing bandwidth may be insufficient for storage data transmission. Moreover, devices cannot be redundantly connected in point-to-point mode. To address these problems, ports are bound (link aggregation). Port binding can improve bandwidth and balance load among multiple links. Link Aggregation Modes Three Ethernet link aggregation modes are available: Manual aggregation Manually run a command to add ports to an aggregation group. Ports added to the aggregation group must have the same link type. Static aggregation Manually run a command to add ports to an aggregation group. Ports added to the aggregation group must have the same link type and LACP enabled. Dynamic aggregation 33

The protocol dynamically adds ports to an aggregation group. Ports added in this way must have LACP enabled and the same speed, duplex mode, and link type.
Table 5-2 compares the three link aggregation modes.
Table 5-2 Comparison of link aggregation modes
Link Aggregation Mode | Packet Exchange | Port Detection | CPU Usage
Manual aggregation | No | No | Low
Static aggregation | Yes | Yes | High
Dynamic aggregation | Yes | Yes | High
Configuration
HUAWEI OceanStor storage devices support 802.3ad link aggregation (dynamic aggregation). In this link aggregation mode, multiple network ports are in an active aggregation group and work in duplex mode and at the same speed. After binding iSCSI host ports on a storage device, enable aggregation for their peer ports on the switch. Otherwise, links are unavailable between the storage device and the switch.
This section uses switch ports GE 1/0/1 and GE 1/0/2 and iSCSI host ports P2 and P3 as examples to explain how to bind ports. You can adjust related parameters based on site requirements.
Bind the iSCSI host ports as follows:
Step 1 Log in to the ISM and go to the page for binding ports.
In the ISM navigation tree, choose Device Info > Storage Unit > Ports. In the function pane, click iSCSI Host Ports.
Step 2 Bind ports.
Select the ports that you want to bind and choose Bind Ports > Bind in the menu bar. In this example, the ports to be bound are P2 and P3. The Bind iSCSI Port dialog box is displayed.
In Bond name, enter the name for the port bond and click OK. The Warning dialog box is displayed.
In the Warning dialog box, select I have read the warning message carefully and click OK. The Information dialog box is displayed, indicating that the operation succeeded. Click OK.
After the storage system ports are bound, configure link aggregation on the switch. Run the following commands on the switch:
<Quidway>system-view
System View: return to User View with Ctrl+Z.
[Quidway-Switch]interface GigabitEthernet 1/0/1
[Quidway-Switch-GigabitEthernet1/0/1]lacp enable
LACP is already enabled on the port!
[Quidway-Switch-GigabitEthernet1/0/1]quit

[Quidway-Switch]interface GigabitEthernet 1/0/2
[Quidway-Switch-GigabitEthernet1/0/2]lacp enable
LACP is already enabled on the port!
[Quidway-Switch-GigabitEthernet1/0/2]quit
After the commands are executed, LACP is enabled for ports GE 1/0/1 and GE 1/0/2. The ports can then be automatically detected and added to an aggregation group.
----End
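After both ports are configured, you can also check the aggregation state on the switch. The display commands below are only an assumed sketch; their exact names vary with the switch model and software version, so verify them against the switch documentation before use.
<Quidway>display link-aggregation summary
<Quidway>display link-aggregation verbose
The first command is expected to list the aggregation group and its member ports, and the second to show the per-port LACP negotiation state.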

6 Establishing Fibre Channel Connections
After connecting a host to a storage system, check the topology modes of the host and the storage system. Fibre Channel connections are established between the host and the storage system after the host initiators are identified by the storage system. The following describes how to check topology modes and add initiators.
6.1 Checking Topology Modes
On direct-connection networks, HBAs support specific topology modes. The topology mode of a storage system must be consistent with that supported by the host HBAs. You can use the ISM to manually change the topology mode of a storage system to one supported by the host HBAs. If the storage ports connected to the host HBAs are adaptive, there is no need to manually change the storage system topology mode.
The method for checking topology modes varies with storage systems. The following describes how to check the topology mode of the OceanStor T series storage system and the OceanStor 18000 series enterprise storage system.
6.1.1 OceanStor T Series Storage System
The check method is as follows: In the ISM navigation tree, choose Device Info > Storage Unit > Ports. In the function pane, click FC Host Ports. Select a port connected to the host and then view the port details. Figure 6-1 shows the details about a Fibre Channel port.

Figure 6-1 Fibre Channel port details
As shown in the preceding figure, the topology mode of the OceanStor T series storage system is Public Loop.
6.1.2 OceanStor 18000 Series Enterprise Storage System
In the ISM navigation tree, choose System. Then click the device view icon in the upper right corner. Choose Controller Enclosure ENG0 > Controller > Interface Module > FC Port and click the port whose details you want to view, as shown in Figure 6-2. In the navigation tree, you can see controller A and controller B, each of which has different interface modules. Choose a controller and an interface module based on actual conditions.
Figure 6-2 Fibre Channel port details
As shown in the preceding figure, the port working mode of the OceanStor 18000 storage system is P2P.

6.2 Adding Initiators
This section describes how to add host HBA initiators on a storage system. Perform the following steps to add initiators:
Step 1 Check HBA WWNs on the host.
Step 2 Check host WWNs on the storage system and add the identified WWNs to the host.
The method for checking host WWNs varies with storage systems. The following describes how to check WWNs on the OceanStor T series storage system and the OceanStor 18000 storage system.
OceanStor T series storage system (V100 and V200R001)
Log in to the ISM and choose SAN Services > Mappings > Initiators in the navigation tree. In the function pane, check the initiator information. Ensure that the WWNs in Step 1 are identified. If the WWNs are not identified, check the Fibre Channel port status. Ensure that the port status is normal.
OceanStor 18000 Series Enterprise Storage System
Log in to the ISM and choose Host in the navigation tree. On the Initiator tab page, click Add Initiator and check that the WWNs in Step 1 are found. If the WWNs are not identified, check the Fibre Channel port status. Ensure that the port status is normal.
----End
6.3 Establishing Connections
Add the WWNs (initiators) to the host and ensure that the initiator connection status is Online. If the initiator status is Online, Fibre Channel connections are established correctly. If the initiator status is Offline, check the physical links and topology mode.
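As a reference for Step 1 in section 6.2, the HBA WWNs can also be read directly on the ESX/ESXi host from the command line. The following is only a minimal sketch; the adapter names shown in your output depend on the host hardware.
# ESXi 5.x: list all storage adapters; for a Fibre Channel HBA, the UID column
# contains the WWNN and WWPN in the form fc.<WWNN>:<WWPN>.
esxcli storage core adapter list
# ESX/ESXi 4.x: an equivalent listing that also prints the adapter UIDs.
esxcfg-scsidevs -a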

7 Establishing iSCSI Connections
Both the host and the storage system need to be configured before iSCSI connections can be established between them. This chapter describes how to configure a host and a storage system before establishing iSCSI connections.
7.1 Host Configurations
7.1.1 Configuring Service IP Addresses
You can configure service IP addresses on a VMware host by adding virtual networks. Perform the following steps:
Step 1 In vSphere Client, choose Network > Add Network.
Step 2 In the Add Network Wizard that is displayed, select VMkernel, as shown in Figure 7-1.
Figure 7-1 Adding VMkernel
Click Next.
Step 3 Select the iSCSI service network port, as shown in Figure 7-2.

7 Establishing iscsi Connections Figure 7-2 Creating a vsphere standard switch Step 4 Specify the network label, as shown in Figure 7-3. Figure 7-3 Specifying the network label Step 5 Enter the iscsi service IP address, as shown in Figure 7-4. 40

7 Establishing iscsi Connections Figure 7-4 Entering the iscsi service IP address Step 6 Confirm the information that you have configured, as shown in Figure 7-5. Figure 7-5 Information summary For a single-path network, the configuration is completed. For a multi-path network, proceed with the next step. Step 7 Repeat steps 1 to 6 to create another virtual network. Figure 7-6 shows the configuration completed for a multi-path network. 41

Figure 7-6 iSCSI multi-path network with dual adapters
----End
7.1.2 Configuring Host Initiators
Host initiator configuration includes creating host initiators, binding the initiators to the virtual networks created in section 7.1.1 "Configuring Service IP Addresses", and discovering targets.
In VMware ESX 4.1 and earlier versions, a software iSCSI adapter is already listed among the storage adapters and only needs to be enabled. In VMware ESXi 5.0 and later versions, you need to manually add a software iSCSI adapter. This section uses VMware ESXi 5.0 as an example to explain how to configure host initiators.
Step 1 Choose Storage Adapters and right-click the function pane, as shown in Figure 7-7.
Figure 7-7 Adding storage adapters
Step 2 Choose Add Software iSCSI Adapter from the shortcut menu. In the dialog box that is displayed, click OK, as shown in Figure 7-8.

7 Establishing iscsi Connections Figure 7-8 Adding iscsi initiators The newly added iscsi initiators are displayed, as shown in Figure 7-9. Figure 7-9 iscsi Software Adapter Step 3 Right-click a newly created iscsi initiator and choose Properties from the shortcut menu, as shown in Figure 7-10. 43

7 Establishing iscsi Connections Figure 7-10 Initiator properties Step 4 On the dialog box that is displayed, click the Network Configuration tab and click Add, as shown in Figure 7-11. Figure 7-11 iscsi initiator properties Step 5 Select a virtual network that you have created in section 7.1.1 and click OK, as shown in Figure 7-12. 44

7 Establishing iscsi Connections Figure 7-12 Binding with a new VMkernel network adapter Figure 7-13 shows the properties of an initiator bound to the virtual network. Figure 7-13 Initiator properties after virtual network binding Step 6 In the dialog box for configuring initiator properties, click the Dynamic Discovery tab, click Add, and enter the target IP address (service IP address of the storage system), as shown in Figure 7-14. 45

Figure 7-14 Adding send target server
----End
7.1.3 Configuring CHAP Authentication
If CHAP authentication is required between a storage system and a host, perform the following steps to configure CHAP authentication:
Step 1 In the dialog box for configuring iSCSI initiator properties, click the General tab and click CHAP in the lower left corner, as shown in Figure 7-15.
Figure 7-15 General tab page

Step 2 In the CHAP Credentials dialog box that is displayed, choose Use CHAP from the Select option drop-down list. Enter the CHAP user name and password configured on the storage system, as shown in Figure 7-16.
Figure 7-16 CHAP credentials dialog box
Click OK.
----End
7.2 Storage System
Different versions of storage systems support different IP protocols. Specify the IP protocols for storage systems based on the actual storage system versions and application scenarios.
Observe the following principles when configuring IP addresses of iSCSI ports on storage systems:
The IP addresses of an iSCSI host port and a management network port must reside on different network segments.
The IP addresses of an iSCSI host port and a maintenance network port must reside on different network segments.
The IP addresses of an iSCSI host port and a heartbeat network port must reside on different network segments.
The IP addresses of iSCSI host ports on the same controller must reside on different network segments. In some storage systems of the latest versions, IP addresses of iSCSI host ports on the same controller can reside on the same network segment. However, this configuration is not recommended.
The IP address of an iSCSI host port must be able to communicate with the IP address of the host service network port to which this iSCSI host port connects, or with the IP addresses of the iSCSI host ports on other storage devices that connect to this iSCSI host port.

7 Establishing iscsi Connections CAUTION Read-only users are not allowed to modify the IP address of an iscsi host port. Modifying the IP address of an iscsi host port will interrupt the services on the port. The IP address configuration varies with storage systems. The following explains how to configure IPv4 addresses on the OceanStor T series storage system and the OceanStor 18000 series enterprise storage system. 7.2.1 OceanStor T Series Storage System Perform the following steps to configure the iscsi service on the OceanStor T series storage system: Step 1 Configure the service IP address. In the ISM navigation tree, choose Device Info > Storage Unit > Ports. In the function pane, click iscsi Host Ports. Select a port and choose IP Address > Modify IPv4 Address in the tool bar, as shown in Figure 7-17. Figure 7-17 Modifying IPv4 addresses In the dialog box that is displayed, enter the new IP address and subnet mask and click OK. If CHAP authentication is not required between the storage system and host, the host initiator configuration is completed. If CHAP authentication is required, proceed with the following steps to configure CHAP authentication on the storage system. Step 2 Configure CHAP authentication. In the ISM navigation tree, choose SAN Services > Mappings > Initiators. In the function pane, select the initiator whose CHAP authentication you want to configure and choose CHAP > CHAP Configuration in the navigation bar, as shown in Figure 7-18. 48

7 Establishing iscsi Connections Figure 7-18 Initiator CHAP configuration Step 3 In the CHAP Configuration dialog box that is displayed, click Create in the lower right corner, as shown in Figure 7-19. Figure 7-19 CHAP Configuration dialog box 49

7 Establishing iscsi Connections In the Create CHAP dialog box that is displayed, enter the CHAP user name and password, as shown in Figure 7-20. Figure 7-20 Create CHAP dialog box CAUTION The CHAP user name contains 4 to 25 characters and the password contains 12 to 16 characters. The limitations to CHAP user name and password vary with storage systems. For details, see the help documentation of corresponding storage systems. Step 4 Assign the CHAP user name and password to the initiator, as shown in Figure 7-21. 50

7 Establishing iscsi Connections Figure 7-21 Assigning the CHAP account to the initiator Step 5 Enable the CHAP account that is assigned to the host. In the ISM navigation tree, choose SAN Services > Mappings > Initiators. In the function pane, select the initiator whose CHAP account is to be enabled and choose CHAP > Status Settings in the navigation bar, as shown in Figure 7-22. Figure 7-22 Setting CHAP status 51

7 Establishing iscsi Connections In the Status Settings dialog box that is displayed, choose Enabled from the CHAP Status drop-down list, as shown in Figure 7-23. Figure 7-23 Enabling CHAP On the ISM, view the initiator status, as shown in Figure 7-24. Figure 7-24 Initiator status after CHAP is enabled ----End 7.2.2 OceanStor 18000 Series Enterprise Storage System Perform the following steps to configure the iscsi service on the OceanStor 18000 series enterprise storage system: Step 1 Go to the iscsi Host Port dialog box. Then perform the following steps: 1. On the right navigation bar, click. 2. In the basic information area of the function pane, click the device icon. 3. In the middle function pane, click the cabinet whose iscsi ports you want to view. 4. Click the controller enclosure where the desired iscsi host ports reside. The controller enclosure view is displayed. 5. Click to switch to the rear view. 52

6. Click the iSCSI host port whose information you want to modify.
7. The iSCSI Host Port dialog box is displayed.
8. Click Modify.
Step 2 Modify the iSCSI host port.
1. In IPv4 Address or IPv6 Address, enter the IP address of the iSCSI host port.
2. In Subnet Mask or Prefix, enter the subnet mask or prefix of the iSCSI host port.
3. In MTU (Byte), enter the maximum size of a data packet that can be transferred between the iSCSI host port and the host. The value is an integer ranging from 1500 to 9216.
Step 3 Confirm the iSCSI host port modification.
1. Click Apply. The Danger dialog box is displayed.
2. Carefully read the contents of the dialog box. Then select the check box next to the statement I have read the previous information and understand the consequences of the operation to confirm the information.
3. Click OK. The Success dialog box is displayed, indicating that the operation succeeded.
4. Click OK.
Step 4 Configure CHAP authentication.
1. Select the initiator whose CHAP authentication you want to configure. The initiator configuration dialog box is displayed.
2. Select Enable CHAP. The CHAP configuration dialog box is displayed.
3. Enter the user name and password for CHAP authentication and click OK.
CHAP authentication is configured on the storage system.
----End
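If the vSphere Client is not available, the host-side configuration described in section 7.1 can also be performed from the ESXi 5.x command line. The following is only a minimal sketch: vSwitch1, vmnic1, iSCSI-1, vmk1, vmhba33, and the IP addresses are example values that must be replaced with those of the actual environment.
# Create the vSwitch, uplink, port group, and VMkernel interface for the iSCSI
# service network (section 7.1.1).
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.0.10 --netmask=255.255.255.0 --type=static
# Enable the software iSCSI adapter and list adapters to find its name (section 7.1.2).
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list
# Bind the VMkernel interface to the software iSCSI adapter and add the storage
# system's iSCSI service IP address as a dynamic discovery (send target) address.
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.0.100:3260
# Rescan the adapter so that targets and LUNs are discovered.
esxcli storage core adapter rescan --adapter=vmhba33
CHAP (section 7.1.3) can likewise be configured from the CLI through the esxcli iscsi adapter auth chap command set.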

8 Mapping and Using LUNs
8.1 Mapping LUNs to a Host
8.1.1 OceanStor T Series Storage System
After a storage system is connected to a VMware host, map the storage system LUNs to the host. Two methods are available for mapping LUNs:
Mapping LUNs to a host: This method is applicable to scenarios where only one small-scale client is deployed.
Mapping LUNs to a host group: This method is applicable to cluster environments or scenarios where multiple clients are deployed.
This document explains how to map LUNs to a host.
Prerequisites
RAID groups have been created on the storage system.
LUNs have been created on the RAID groups.
Procedure
Perform the following steps to map LUNs to a host:
Step 1 In the ISM navigation tree, choose SAN Services > Mappings > Hosts.
Step 2 In the function pane, select the desired host. In the navigation bar, choose Mapping > Add LUN Mapping. The Add LUN Mapping dialog box is displayed.
Step 3 Select the LUNs that you want to map to the host and click OK.
----End
CAUTION
When mapping LUNs on a storage system to a host, ensure that the host LUN whose ID is 0 is mapped.
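In line with the preceding caution, you can confirm from an ESXi 5.x host which host LUN IDs the storage system actually presents. A minimal sketch:
# Each path entry printed by this command includes a "LUN:" field, which is the
# host LUN ID presented by the storage system; make sure an entry with LUN: 0 exists.
esxcli storage core path list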

8 Mapping and Using LUNs 8.1.2 OceanStor 18000 Series Enterprise Storage System Prerequisites Procedure After a storage system is connected to a VMware host, map the storage system LUNs to the host. LUNs, LUN groups, hosts, and host groups have been created. Step 1 Go to the Create Mapping View dialog box. Then perform the following steps: On the right navigation bar, click. 1. On the host management page, click Mapping View. 2. Click Create. The Create Mapping View dialog box is displayed. Step 2 Set basic properties for the mapping view. 1. In the Name text box, enter a name for the mapping view. 2. (Optional) In the Description text box, describe the mapping view. Step 3 Add a LUN group to the mapping view. 1. Click. The Select LUN Group dialog box is displayed. If your service requires a new LUN group, click Create to create one. You can select Shows only the LUN groups that do not belong to any mapping view to quickly locate LUN groups. 2. From the LUN group list, select the LUN groups you want to add to the mapping view. 3. Click OK. Step 4 Add a host group to the mapping view. 1. Click. If your service requires a new host group, click Create to create one. 2. The Select Host Group dialog box is displayed. 3. From the host group list, select the host groups you want to add to the mapping view. 4. Click OK. Step 5 (Optional) Add a port group to the mapping view. 1. Select Port Group. 2. Click. The Select Port Group dialog box is displayed. 55

8 Mapping and Using LUNs If your service requires a new port group, click Create to create one. 3. From the port group list, select the port group you want to add to the mapping view. 4. Click OK. Step 6 Confirm the creation of the mapping view. 1. Click OK. The Execution Result dialog box is displayed, indicating that the operation succeeded. 2. Click Close. ----End 8.2 Scanning for LUNs on a Host After LUNs are mapped on a storage system, scan for the mapped LUNs on the host, as shown in Figure 8-1. Figure 8-1 Scanning for the mapped LUNs 8.3 Using the Mapped LUNs After the mapped LUNs are detected on a host, you can directly use the raw devices to configure services or use the LUNs after creating a file system. 8.3.1 Mapping Raw Devices You can configure raw devices as disks for VMs by mapping the devices. Perform the following steps to map raw devices: 56

8 Mapping and Using LUNs Step 1 Right-click a VM and choose Edit Settings from the shortcut menu, as shown in Figure 8-2. Figure 8-2 Editing host settings Step 2 On the Hardware tab page, click Add. In the Add Hardware dialog box that is displayed, choose Hard Disk in Device Type and click Next, as shown in Figure 8-3. Figure 8-3 Adding disks Step 3 Select disks. You can create a new virtual disk, use an existing virtual disk, or use raw disk mappings, as shown in Figure 8-4. 57

8 Mapping and Using LUNs Figure 8-4 Selecting disks Select Raw Device Mappings and click Next. Step 4 Select a target LUN and click Next, as shown in Figure 8-5. Figure 8-5 Selecting a target LUN Step 5 Select a datastore. The default datastore is under the same directory as the VM storage. Click Next, as shown in Figure 8-6. 58

8 Mapping and Using LUNs Figure 8-6 Selecting a datastore Step 6 Select a compatibility mode. Select a compatibility mode based on site requirements and click Next, as shown in Figure 8-7. Snapshots are unavailable if the compatibility mode is specified to physical. Figure 8-7 Selecting a compatibility mode Step 7 In Advanced Options, keep the default virtual device node unchanged, as shown in Figure 8-8. 59

8 Mapping and Using LUNs Figure 8-8 Selecting a virtual device node Step 8 In Ready to Complete, confirm the information about the disk to be added, as shown in Figure 8-9. Figure 8-9 Confirming the information about the disk to be added Click Finish. The system starts to add disks, as shown in Figure 8-10. 60

8 Mapping and Using LUNs Figure 8-10 Adding raw disk mappings After a raw disk is mapped, the type of the newly created disk is Mapped Raw LUN. ----End 8.3.2 Creating Datastores (File Systems) Create a file system before creating a virtual disk. A file system can be created using the file system disks in datastores. This section details how to create a datastore. Step 1 On the Configuration tab page, choose Storage in the navigation tree. On the Datastores tab page that is displayed, click Add Storage, as shown in Figure 8-11. Figure 8-11 Adding storage Step 2 Select a storage type and click Next, as shown in Figure 8-12. The default storage type is Disk/LUN. 61

8 Mapping and Using LUNs Figure 8-12 Selecting a storage type Step 3 On the Select Disk/LUN page that is displayed, select a desired disk and click Next, as shown in Figure 8-13. Figure 8-13 Select a disk/lun Step 4 Select a file system version. VMFS-5 is selected in this example, as shown in Figure 8-14. 62

8 Mapping and Using LUNs Figure 8-14 Selecting a file system version Step 5 View the current disk layout and device information, as shown in Figure 8-15. Figure 8-15 Viewing the current disk layout Step 6 Enter the name of a datastore, as shown in Figure 8-16. 63

8 Mapping and Using LUNs Figure 8-16 Entering a datastore name Step 7 Specify a disk capacity. Normally, Maximum available space is selected. If you want to test LUN expansion, customize a capacity, as shown in Figure 8-17. Figure 8-17 Specifying a capacity Step 8 Confirm the disk layout. If the disk layout is correct, click Finish, as shown in Figure 8-18. 64

8 Mapping and Using LUNs Figure 8-18 Confirming the disk layout ----End 8.3.3 Mapping Virtual Disks Perform the following steps to add LUNs to VMs as virtual disks: Step 1 Right-click a VM and choose Edit Settings from the shortcut menu, as shown in Figure 8-19. Figure 8-19 Editing VM settings Step 2 Click Add, select Hard Disk and click Next, as shown in Figure 8-20. 65

8 Mapping and Using LUNs Figure 8-20 Adding disks Step 3 In Select a Disk, select Create a new virtual disk, as shown in Figure 8-21. Figure 8-21 Creating a new virtual disk Step 4 Specify the disk capacity based on site requirements, as shown in Figure 8-22. 66

8 Mapping and Using LUNs Figure 8-22 Specifying the disk capacity Step 5 Select a datastore. In this example, the datastore is disk1 and the file system type is VMFS-5, as shown in Figure 8-23. Figure 8-23 Selecting a datastore Step 6 Select a virtual device node. If there are no special requirements, keep the default virtual device node unchanged, as shown in Figure 8-24. 67

8 Mapping and Using LUNs Figure 8-24 Selecting a virtual device node Step 7 View the basic information about the virtual disk, as shown in Figure 8-25. Figure 8-25 Viewing virtual disk information As shown in the preceding figure, hard disk 1 that you have added is a virtual disk. ----End 8.3.4 Differences Between Raw Disks and Virtual Disks On the Hardware tab page of Virtual Machine Properties, you can modify the capacity of a disk mapped as a virtual disk, as shown in Figure 8-26. 68

8 Mapping and Using LUNs Figure 8-26 Modifying the capacity of a virtual disk The capacity of a disk added using raw disk mappings cannot be modified, as shown in Figure 8-27. Figure 8-27 Modifying the capacity of a disk added using raw disk mappings 69

9 Multipathing Management
9.1 Overview
The VMware system has its own native multipathing software, the Native Multipathing Plugin (NMP), which is available without extra configuration. This chapter details the NMP multipathing software.
9.2 VMware PSA
9.2.1 Overview
vSphere 4.0 incorporates a new module, the Pluggable Storage Architecture (PSA), that can be integrated with a third-party Multipathing Plugin (MPP) or the NMP to provide storage-specific plug-ins such as the Storage Array Type Plug-in (SATP) and Path Selection Plugin (PSP), enabling optimal path selection and I/O performance.
Figure 9-1 VMware PSA
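On an ESXi 5.x host, the plug-ins and claim rules that the PSA has loaded can be listed from the command line; the following is a minimal sketch:
# Multipathing (MP) plug-ins registered with the PSA: NMP plus any third-party MPPs.
esxcli storage core plugin list --plugin-class=MP
# Claim rules that decide which plug-in owns which paths and devices.
esxcli storage core claimrule list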

9.2.2 VMware NMP
NMP is the default multipathing module of VMware. This module provides two submodules to implement failover and load balancing:
SATP: monitors path availability, reports path status to NMP, and implements failover.
PSP: selects optimal I/O paths.
9.2.3 VMware PSP
Built-in PSP
By default, the PSP of most ESX operating systems supports three I/O policies: Most Recently Used (MRU), Round Robin, and Fixed. ESX 4.1 supports an additional policy: Fixed AP. For details, see section 9.4.2 "PSPs in Different ESX Versions."
Third-Party Software
PSA is compatible with the following third-party multipathing plugins:
Third-party SATP: Storage vendors can use the VMware API to customize SATPs for their storage features and optimize VMware path selection.
Third-party PSP: Storage vendors or third-party software vendors can use the VMware API to develop more sophisticated I/O load balancing algorithms and achieve larger throughput from multiple paths.
In addition, a Third-Party Multipathing Plug-in (MPP) supports comprehensive fault tolerance and performance processing, and runs on the same layer as the NMP. For some storage systems, a third-party MPP can replace the NMP to implement path failover and load balancing.
9.3 Software Functions and Features
To manage storage multipathing, ESX/ESXi uses a special VMkernel layer, the Pluggable Storage Architecture (PSA). The PSA is an open modular framework that coordinates the simultaneous operation of multiple multipathing plugins (MPPs). The VMkernel multipathing plugin that ESX/ESXi provides by default is VMware Native Multipathing (NMP). NMP is an extensible module that manages subplugins. There are two types of NMP subplugins: Storage Array Type Plugins (SATPs) and Path Selection Plugins (PSPs). Figure 9-2 shows the architecture of the VMkernel.

Figure 9-2 VMkernel architecture
If more multipathing functionality is required, a third party can also provide an MPP to run in addition to, or as a replacement for, the default NMP. When coordinating with the VMware NMP and any installed third-party MPPs, the PSA performs the following tasks:
Loads and unloads multipathing plug-ins.
Hides virtual machine specifics from a particular plug-in.
Routes I/O requests for a specific logical device to the MPP managing that device.
Handles I/O queuing to the logical devices.
Implements logical device bandwidth sharing between virtual machines.
Handles I/O queuing to the physical storage HBAs.
Handles physical path discovery and removal.
Provides logical device and physical path I/O statistics.
9.4 Multipathing Selection Policy
9.4.1 Policies and Differences
VMware supports the following path selection policies, as described in Table 9-1.
Table 9-1 Path selection policies
Policy/Controller | Active/Active | Active/Passive
Most Recently Used | Administrator action is required to fail back after path failure. | Administrator action is required to fail back after path failure.
Fixed | VMkernel resumes using the preferred path when connectivity is restored. | VMkernel attempts to resume using the preferred path. This can cause path thrashing or failure when another SP now owns the LUN.

Round Robin | No failback. | Next path in round robin scheduling is selected.
Fixed AP | For ALUA arrays, VMkernel picks the path set to be the preferred path. | For A/A, A/P, and ALUA arrays, VMkernel resumes using the preferred path, but only if the path-thrashing avoidance algorithm allows the failback.
The following details each policy.
Most Recently Used (VMW_PSP_MRU)
The host selects the path that was used most recently. When that path becomes unavailable, the host selects an alternative path. The host does not revert back to the original path when it becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for active-passive storage devices.
Working principle: uses the most recently used path for I/O transfer. When the path fails, I/O is automatically switched to another available path (if any). When the failed path recovers, I/O is not switched back to that path.
Round Robin (VMW_PSP_RR)
The host uses an automatic path selection algorithm that rotates through all available active paths to enable load balancing across the paths. Load balancing is a process that distributes host I/Os across all available paths. The purpose of load balancing is to achieve optimal throughput performance (IOPS, MB/s, and response time).
Working principle: uses all available paths for I/O transfer.
Fixed (VMW_PSP_FIXED)
The host always uses the preferred path to the disk when that path is available. If the host cannot access the disk through the preferred path, it tries the alternative paths. Fixed is the default policy for active-active storage devices. After the preferred path recovers from a fault, VMkernel resumes using it. On active-passive arrays, this attempt may result in path thrashing or failure because another SP now owns the LUN.
Working principle: uses the fixed (preferred) path for I/O transfer. When the current path fails, I/O is automatically switched to a random path among the multiple available paths (if any). When the original path recovers, I/O is switched back to the original path.
Fixed AP (VMW_PSP_FIXED_AP)
This policy is supported only by ESX/ESXi 4.1 and is incorporated into VMW_PSP_FIXED in later ESX versions. Fixed AP extends the Fixed functionality to active-passive and ALUA mode arrays.
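For a quick look at which SATP and PSP a given LUN is using, and how the PSP can be changed per device, the following sketch can be used. The naa.* identifier is taken from the output examples in section 9.8 and must be replaced with a real device ID, and flag spellings may differ slightly between releases; section 9.6 describes which policy is recommended for Huawei storage.
# ESXi 5.x: show the SATP and PSP currently applied to one device.
esxcli storage nmp device list --device=naa.60022a11000416611b2a929700000005
# ESXi 5.x: change the path selection policy of that device (for example, to VMW_PSP_FIXED).
esxcli storage nmp device set --device=naa.60022a11000416611b2a929700000005 --psp=VMW_PSP_FIXED
# ESX/ESXi 4.x equivalent of the set command.
esxcli nmp device setpolicy --device naa.60022a11000416611b2a929700000005 --psp VMW_PSP_FIXED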

9 Multipathing Management 9.4.2 PSPs in Different ESX Versions ESX/ESXi 4.0 Run the following command to display the PSPs supported by the operating system: [root@e4 ~]# esxcli nmp psp list Name Description VMW_PSP_MRU Most Recently Used Path Selection VMW_PSP_RR Round Robin Path Selection VMW_PSP_FIXED Fixed Path Selection [root@e4 ~]# Versions from VMware ESX 4.0.0 GA to VMware ESX 4.0.0 Update 4 support the same PSPs. ESX/ESXi 4.1 Run the following command to display the PSPs supported by the operating system: [root@esx4 ~]# esxcli nmp psp list Name Description VMW_PSP_FIXED_AP Fixed Path Selection with Array Preference VMW_PSP_MRU Most Recently Used Path Selection VMW_PSP_RR Round Robin Path Selection VMW_PSP_FIXED Fixed Path Selection [root@esx4 ~]# Versions from VMware ESX 4.1.0 GA to VMware ESX 4.1.0 Update 2 support the same PSPs. ESXi 5.0 Run the following command to display the PSPs supported by the operating system: ~ # esxcli storage nmp psp list Name Description ------------- --------------------------------- VMW_PSP_MRU Most Recently Used Path Selection VMW_PSP_RR Round Robin Path Selection VMW_PSP_FIXED Fixed Path Selection ~ # Versions from VMware ESXi 5.0.0 GA to VMware ESXi 5.0.0 Update 3 support the same PSPs. ESXi 5.1 Run the following command to display the PSPs supported by the operating system: ~ # esxcli storage nmp psp list Name Description ------------- --------------------------------- VMW_PSP_MRU Most Recently Used Path Selection VMW_PSP_RR Round Robin Path Selection VMW_PSP_FIXED Fixed Path Selection 74

9 Multipathing Management ~ # Versions from VMware ESXi 5.1.0 GA to VMware ESXi 5.1.0 Update 1 support the same PSPs. ESXi 5.5 Run the following command to display the PSPs supported by the operating system: ~ # esxcli storage nmp psp list Name Description ------------- --------------------------------- VMW_PSP_MRU Most Recently Used Path Selection VMW_PSP_RR Round Robin Path Selection VMW_PSP_FIXED Fixed Path Selection ~ # 9.5 VMware SATPs VMware supports multiple SATPs, which vary with VMware versions. The following details SATPs for different VMware versions. ESX/ESXi 4.0 Run the following command to display the SATPs supported by the operating system: [root@e4 ~]# esxcli nmp satp list Name Default PSP Description VMW_SATP_ALUA_CX VMW_PSP_FIXED Supports EMC CX that use the ALUA protocol VMW_SATP_SVC VMW_PSP_FIXED Supports IBM SVC VMW_SATP_MSA VMW_PSP_MRU Supports HP MSA VMW_SATP_EQL VMW_PSP_FIXED Supports EqualLogic arrays VMW_SATP_INV VMW_PSP_FIXED Supports EMC Invista VMW_SATP_SYMM VMW_PSP_FIXED Supports EMC Symmetrix VMW_SATP_LSI VMW_PSP_MRU Supports LSI and other arrays compatible with the SIS 6.10 in non-avt mode VMW_SATP_EVA VMW_PSP_FIXED Supports HP EVA VMW_SATP_DEFAULT_AP VMW_PSP_MRU Supports non-specific active/passive arrays VMW_SATP_CX VMW_PSP_MRU Supports EMC CX that do not use the ALUA protocol VMW_SATP_ALUA VMW_PSP_MRU Supports non-specific arrays that use the ALUA protocol VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays VMW_SATP_LOCAL VMW_PSP_FIXED Supports direct attached devices [root@e4 ~]# Versions from VMware ESX 4.0.0 GA to VMware ESX 4.0.0 Update 4 support the same SATPs. ESX/ESXi 4.1 Run the following command to display the SATPs supported by the operating system: [root@esx4 ~]# esxcli nmp satp list Name Default PSP Description 75

9 Multipathing Management VMW_SATP_SYMM VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_SVC VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_MSA VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_LSI VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_INV VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EVA VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EQL VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AP VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_ALUA_CX VMW_PSP_FIXED_AP Placeholder (plugin not loaded) VMW_SATP_CX VMW_PSP_MRU Supports EMC CX that do not use the ALUA protocol VMW_SATP_ALUA VMW_PSP_MRU Supports non-specific arrays that use the ALUA protocol VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays VMW_SATP_LOCAL VMW_PSP_FIXED Supports direct attached devices [root@esx4 ~]# Versions from VMware ESX 4.1.0 GA to VMware ESX 4.1.0 Update 2 support the same SATPs. ESXi 5.0 Run the following command to display the SATPs supported by the operating system: ~ # esxcli storage nmp satp list Name Default PSP Description ------------------- ------------- ------------------------------------------ VMW_SATP_MSA VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_ALUA VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AP VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_SVC VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EQL VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_INV VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EVA VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_ALUA_CX VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_SYMM VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_CX VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_LSI VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays VMW_SATP_LOCAL VMW_PSP_FIXED Supports direct attached devices ~ # Versions from VMware ESX 5.0.0 GA to VMware ESX 5.0.0 Update 3 support the same SATPs. ESXi 5.1 Run the following command to display the SATPs supported by the operating system: ~ # esxcli storage nmp satp list Name Default PSP Description ------------------- ------------- ------------------------------------------ VMW_SATP_MSA VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_ALUA VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AP VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_SVC VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EQL VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_INV VMW_PSP_FIXED Placeholder (plugin not loaded) 76

9 Multipathing Management VMW_SATP_EVA VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_ALUA_CX VMW_PSP_RR Placeholder (plugin not loaded) VMW_SATP_SYMM VMW_PSP_RR Placeholder (plugin not loaded) VMW_SATP_CX VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_LSI VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays VMW_SATP_LOCAL VMW_PSP_FIXED Supports direct attached devices ~ # Versions from VMware ESX 5.1.0 GA to VMware ESX 5.1.0 Update 1 support the same SATPs. ESXi 5.5 Run the following command to display the SATPs supported by the operating system: ~ # esxcli storage nmp satp list Name Default PSP Description ------------------- ------------- ------------------------------------------------------- VMW_SATP_ALUA VMW_PSP_MRU Supports non-specific arrays that use the ALUA protocol VMW_SATP_MSA VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AP VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_SVC VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EQL VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_INV VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EVA VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_ALUA_CX VMW_PSP_RR Placeholder (plugin not loaded) VMW_SATP_SYMM VMW_PSP_RR Placeholder (plugin not loaded) VMW_SATP_CX VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_LSI VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays VMW_SATP_LOCAL VMW_PSP_FIXED Supports direct attached devices ~ # The following details SATPs for different arrays: VMW_SATP_LOCAL Applies to local disks. VMW_SATP_DEFAULT_AP, VMW_SATP_DEFAULT_AA, and VMW_SATP_ALUA Applies to external storage arrays that have no customized plugins in ESX/ESXi operating systems. The plugins vary with external storage array types. Other SATPs Customized plugins in ESX/ESXi operating systems for storage arrays 9.6 Policy Configuration Policies supported by operating systems vary with operating system versions. This section describes the default and recommended policies after a VMware host is connected to a Huawei storage system. 77

CAUTION
A recommended policy applies to common scenarios and may not be the optimal one for a specific environment. For example, VMW_PSP_RR allows better performance than VMW_PSP_FIXED but has some usage limitations. If you want to configure an optimal PSP, contact Huawei Customer Service Center.
9.6.1 OceanStor T Series Storage System
ESX/ESXi 4.0
Table 9-2 describes the default policies when a host running the VMware ESX/ESXi 4.0 operating system is connected to a Huawei storage system.
Table 9-2 Default policies for the ESX/ESXi 4.0 operating system
Storage Information | Operating System | Default Policy | Remarks
ALUA enabled | 4.0.0 GA | SATP VMW_SATP_DEFAULT_AA, PSP VMW_PSP_FIXED | See the note.
ALUA enabled | 4.0.0 Update 4 | SATP VMW_SATP_DEFAULT_AA, PSP VMW_PSP_FIXED | See the note.
ALUA disabled | 4.0.0 GA | SATP VMW_SATP_DEFAULT_AA, PSP VMW_PSP_FIXED | See the note.
ALUA disabled | 4.0.0 Update 4 | SATP VMW_SATP_DEFAULT_AA, PSP VMW_PSP_FIXED | See the note.
Note:
1. The preferred path cannot be selected. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed.
2. After a path recovers from a fault, its services can be switched back.
To address limitations of the default policies, you can use the recommended policies as described in Table 9-3.
Table 9-3 Recommended policies for the ESX/ESXi 4.0 operating system
Storage Information | Operating System | Recommended Policy | Remarks
ALUA enabled | 4.0.0 GA to 4.0.0 Update 4 | SATP VMW_SATP_DEFAULT_AA, PSP VMW_PSP_FIXED | See 1, 2, and 3 in the note.

ALUA disabled | 4.0.0 GA to 4.0.0 Update 4 | SATP VMW_SATP_DEFAULT_AA, PSP VMW_PSP_FIXED | See 1, 2, and 4 in the note.
Note:
1. You need to manually specify the preferred path for each LUN on the management software of VMware.
2. After a path recovers from a fault, its services can be switched back.
3. This configuration mode is recommended for storage that supports ALUA.
4. This configuration mode is recommended for storage that does not support ALUA.
There is no need to change the default SATP or PSP. You need only to manually specify a preferred path for each LUN. Select the preferred active path according to the working controller of the LUN. The procedure is as follows:
Step 1 Go to the menu of the storage adapter, as shown in Figure 9-3.
Figure 9-3 Menu of the storage adapter
Step 2 Select the device in the previous figure, and right-click a LUN. The shortcut menu is displayed, as shown in Figure 9-4.

Figure 9-4 Shortcut menu of a LUN
Step 3 In the path management dialog box, select the path where the owning controller resides, and set the path as the preferred path in the shortcut menu, as shown in Figure 9-5.
Figure 9-5 Configuring the management path
Click Close. The preferred path for the LUN is specified.
Step 4 Repeat Steps 1 to 3 to specify preferred paths for the remaining LUNs.
----End
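The same preferred-path assignment can also be made from the ESX/ESXi 4.0 command line instead of the vSphere Client. This is only a hedged sketch: the device ID and path name are the examples used in section 9.8.1, and the exact sub-command names and flags should be verified against your 4.0 build.
# List the paths of the LUN to identify the one that leads to its owning controller.
esxcli nmp path list --device naa.66666661006666650092f53300000045
# Mark that path as the preferred path for the FIXED policy.
esxcli nmp fixed setpreferred --device naa.66666661006666650092f53300000045 --path vmhba33:C0:T1:L0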

9 Multipathing Management ESX/ESXi 4.1 Table 9-4 describes the default policies when a host running VMware ESX/ESXi 4.1 operating system is connected to a Huawei storage system. Table 9-4 Default policies for the ESX/ESXi 4.1 operating system Storage Information Operating System Default Policy Remarks ALUA enabled 4.1.0 GA SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note. 4.1.0 Update 2 SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note. ALUA disabled 4.1.0 GA SATP VMW_SATP_DEFAULT_AA See 3 and PSP VMW_PSP_FIXED 4 in the note. 4.1.0 Update 2 SATP VMW_SATP_DEFAULT_AA See 3 and PSP VMW_PSP_FIXED 4 in the note. 1. The preferred path is selected when LUNs are mapped for the first time. The recently used path is selected for I/O transfer until this path is failed. After a host restarts, it continues to use the path used before the restart. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 2. After a path recovers from a fault, its services can be switched back. 3. The preferred path cannot be selected. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 4. After a path recovers from a fault, its services can be switched back. To address limitations of the default policies, you can use the recommended policies as described in Table 9-5. Table 9-5 Recommended policies for the ESX/ESXi 4.1 operating system Storage Information Operating System Recommended Policy Remarks ALUA enabled ALUA disabled 4.1.0 GA to 4.1.0 Update 2 4.1.0 GA to 4.1.0 Update 2 SATP VMW_SATP_ALUA See 1, 3, PSP VMW_PSP_FIXED_AP and 4 in the note. SATP VMW_SATP_DEFAULT_AA See 2, 3, PSP VMW_PSP_FIXED and 5 in the note. 81

9 Multipathing Management 1. The following command must be run on the VMware CLI to add a rule: esxcli nmp satp addrule -V HUAWEI -M S2600T -s VMW_SATP_ALUA -P VMW_PSP_FIXED_AP -c tpgs_on The part in bold face can be specified based on site requirements. After the command is executed, restart the host for the new rule to take effect. Then the preferred path is selected. 2. You need to manually specify the preferred path for each LUN on the management software of VMware. 3. After a path recovers from a fault, its services can be switched back. 4. This configuration mode is recommended for storage that supports ALUA. 5. This configuration mode is recommended for storage that does not support ALUA. For storage with ALUA enabled, a newly added rule takes effect immediately after the host is restarted. CAUTION If a path policy or preferred path is set on VMware before or after rules are added, this setting prevails. The newly added rule will not be applied to a LUN that has a path policy or preferred path. For storage with ALUA disabled, there is no need to change the default SATP or PSP. You need only to manually specify a preferred path for each LUN. The method of specifying the preferred active path for LUNs is similar to that for ESX/ESXi 4.0. ESXi 5.0 Table 9-6 describes the default policies when a host running VMware ESXi 5.0 operating system is connected to a Huawei storage system. Table 9-6 Default policies for the ESXi 5.0 operating system Storage Information Operating System Default Policy Remarks ALUA enabled 5.0.0 GA SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note. 5.0.0 Update 1 5.0.0 Update 2 SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note. SATP VMW_SATP_ALUA See 2 and PSP VMW_PSP_MRU 4 in the note. 5.0.0 Update SATP VMW_SATP_ALUA See 1 and 82

9 Multipathing Management Storage Information Operating System Default Policy 3 PSP VMW_PSP_MRU Remarks 2 in the note. ALUA disabled 5.0.0 GA SATP VMW_SATP_DEFAULT_AA See 2 and PSP VMW_PSP_FIXED 3 in the note. 5.0.0 Update 1 5.0.0 Update 2 5.0.0 Update 3 SATP VMW_SATP_DEFAULT_AA See 2 and PSP VMW_PSP_FIXED 3 in the note. SATP VMW_SATP_DEFAULT_AA See 2 and PSP VMW_PSP_FIXED 3 in the note. SATP VMW_SATP_DEFAULT_AA See 2 and PSP VMW_PSP_FIXED 3 in the note. 1. The preferred path cannot be selected. After a host restarts, it continues to use the path used before the restart. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 2. After a path recovers from a fault, its services can be switched back. 3. The preferred path cannot be selected. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 4. The preferred path is selected when LUNs are mapped for the first time. The recently used path is selected for I/O transfer until this path is failed. After a host restarts, it continues to use the path used before the restart. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. To address limitations of the default policies, you can use the recommended policies as described in Table 9-7. Table 9-7 Recommended policies for the ESXi 5.0 operating system Storage Information Operating System Recommended Policy Remarks ALUA enabled ALUA disabled 5.0.0 GA to 5.0.0 Update 3 5.0.0 GA to 5.0.0 Update 3 SATP VMW_SATP_ALUA See 1, 3, PSP VMW_PSP_FIXED and 4 in the note. SATP VMW_SATP_DEFAULT_AA See 2, 3, PSP VMW_PSP_FIXED and 5 in the note. 1. The following command must be run on the VMware CLI to add a rule: esxcli storage nmp satp rule add -V HUAWEI -M S2600T -s VMW_SATP_ALUA -P VMW_PSP_FIXED -c tpgs_on 83

9 Multipathing Management The part in bold face can be specified based on site requirements. After the command is executed, restart the host for the new rule to take effect. Then the preferred path is selected. 2. You need to manually specify the preferred path for each LUN on the management software of VMware. For a LUN that already has a preferred path, change the path to a non-preferred one and specify the preferred path for the LUN again. 3. After a path recovers from a fault, its services can be switched back. 4. This configuration mode is recommended for storage that supports ALUA. 5. This configuration mode is recommended for storage that does not support ALUA. For storage with ALUA enabled, a newly added rule takes effect immediately after the host is restarted. CAUTION If a path policy or preferred path is set on VMware before or after rules are added, this setting prevails. The newly added rule will not be applied to a LUN that has a path policy or preferred path. Note that the rule adding command for ESXi 5.0 is different from that for ESXi 4.1. For storage with ALUA disabled, there is no need to change the default SATP or PSP. You need only to manually specify a preferred path for each LUN. The method of specifying the preferred active path for LUNs is similar to that for ESX/ESXi 4.0. CAUTION In default policy configuration, path failover cannot be implemented when storage systems have ALUA disabled. In this case, you need to manually specify a preferred path for each LUN. You also need to specify a preferred path for a LUN that already has a default preferred path. Change the default path to a non-preferred one and specify the preferred path for the LUN again. Only after this configuration can services be switched back to the path after a fault recovery. ESXi 5.1 Table 9-8 describes the default policies when a host running VMware ESXi 5.1 operating system is connected to a Huawei storage system. 84

9 Multipathing Management Table 9-8 Default policies for the ESXi 5.1 operating system Storage Information Operating System Default Policy Remarks ALUA enabled 5.1.0 GA SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note. 5.1.0 Update 1 SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note. ALUA disabled 5.1.0 GA SATP VMW_SATP_DEFAULT_AA See 2 and PSP VMW_PSP_FIXED 3 in the note. 5.1.0 Update 1 SATP VMW_SATP_DEFAULT_AA See 2 and PSP VMW_PSP_FIXED 3 in the note. 1. The preferred path is selected when LUNs are mapped for the first time. The recently used path is selected for I/O transfer until this path is failed. After a host restarts, it continues to use the path used before the restart. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 2. After a path recovers from a fault, its services can be switched back. 3. The preferred path cannot be selected. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. To address limitations of the default policies, you can use the recommended policies as described in Table 9-9. Table 9-9 Recommended policies for the ESXi 5.1 operating system Storage Information Operating System Recommended Policy Remarks ALUA enabled ALUA disabled 5.1.0 GA to 5.1.0 Update 1 5.1.0 GA to 5.1.0 Update 1 SATP VMW_SATP_ALUA See 1, 3, PSP VMW_PSP_FIXED and 4 in the note. SATP VMW_SATP_DEFAULT_AA See 2, 3, PSP VMW_PSP_FIXED and 5 in the note. 1. The following command must be run on the VMware CLI to add a rule: esxcli storage nmp satp rule add -V HUAWEI -M S2600T -s VMW_SATP_ALUA -P VMW_PSP_FIXED -c tpgs_on The part in bold face can be specified based on site requirements. After the command is executed, restart the host for the new rule to take effect. Then the preferred path is selected. 85

9 Multipathing Management 2. You need to manually specify the preferred path for each LUN on the management software of VMware. For a LUN that already has a preferred path, change the path to a non-preferred one and specify the preferred path for the LUN again. 3. After a path recovers from a fault, its services can be switched back. 4. This configuration mode is recommended for storage that supports ALUA. 5. This configuration mode is recommended for storage that does not support ALUA. For storage with ALUA enabled, a newly added rule takes effect immediately after the host is restarted. CAUTION If a path policy or preferred path is set on VMware before or after rules are added, this setting prevails. The newly added rule will not be applied to a LUN that has a path policy or preferred path. Note that the rule adding command for ESXi 5.1 is different from that for ESXi 4.1. For storage with ALUA disabled, there is no need to change the default SATP or PSP. You need only to manually specify a preferred path for each LUN. The method of specifying the preferred active path for LUNs is similar to that for ESX/ESXi 4.0. CAUTION In default policy configuration, path failover cannot be implemented when storage systems have ALUA disabled. In this case, you need to manually specify a preferred path for each LUN. You also need to specify a preferred path for a LUN that already has a default preferred path. Change the default path to a non-preferred one and specify the preferred path for the LUN again. Only after this configuration can services be switched back to the path after a fault recovery. ESXi 5.5 Table 9-10 describes the default policies when a host running VMware ESXi 5.5 operating system is connected to a Huawei storage system. Table 9-10 Default policies for the ESXi 5.5 operating system Storage Information Operating System Recommended Policy Remarks ALUA enabled 5.5.0 GA SATP VMW_SATP_ALUA See 1 and PSP VMW_PSP_MRU 2 in the note. ALUA disabled 5.5.0 GA SATP VMW_SATP_DEFAULT_AA See 2 and 86

9 Multipathing Management Storage Information Operating System Recommended Policy PSP VMW_PSP_FIXED Remarks 3 in the note. 1. The preferred path cannot be selected. The recently used path is selected for I/O transfer until this path is failed. After a host restarts, it continues to use the path used before the restart. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. 2. After a path recovers from a fault, its services can be switched back. 3. The preferred path cannot be selected. As a result, I/Os of all LUNs are transferred to one controller and the workload is unevenly distributed. To address limitations of the default policies, you can use the recommended policies as described in Table 9-11. Table 9-11 Recommended policies for the ESXi 5.5 operating system Storage Information Operatin g System Recommended Policy Remarks ALUA enabled 5.5.0 GA SATP VMW_SATP_ALUA See 1, 3, PSP VMW_PSP_FIXED and 4 in the note. ALUA disabled 5.5.0 GA SATP VMW_SATP_DEFAULT_AA See 2, 3, PSP VMW_PSP_FIXED and 5 in the note. 1. The following command must be run on the VMware CLI to add a rule: esxcli storage nmp satp rule add -V HUAWEI -M S2600T -s VMW_SATP_ALUA -P VMW_PSP_FIXED -c tpgs_on The part in bold face can be specified based on site requirements. After the command is executed, restart the host for the new rule to take effect. Then the preferred path is selected. 2. You need to manually specify the preferred path for each LUN on the management software of VMware. For a LUN that already has a preferred path, change the path to a non-preferred one and specify the preferred path for the LUN again. 3. After a path recovers from a fault, its services can be switched back. 4. This configuration mode is recommended for storage that supports ALUA. 5. This configuration mode is recommended for storage that does not support ALUA. For storage with ALUA enabled, a newly added rule takes effect immediately after the host is restarted. CAUTION If a path policy or preferred path is set on VMware before or after rules are added, this setting prevails. 87

9 Multipathing Management The newly added rule will not be applied to a LUN that has a path policy or preferred path. Note that the rule adding command for ESXi 5.5 is different from that for ESXi 4.1. For storage with ALUA disabled, there is no need to change the default SATP or PSP. You need only to manually specify a preferred path for each LUN. The method of specifying the preferred active path for LUNs is similar to that for ESX/ESXi 4.0. CAUTION In default policy configuration, path failover cannot be implemented when storage systems have ALUA disabled. In this case, you need to manually specify a preferred path for each LUN. You also need to specify a preferred path for a LUN that already has a default preferred path. Change the default path to a non-preferred one and specify the preferred path for the LUN again. Only after this configuration can services be switched back to the path after a fault recovery. 9.6.2 OceanStor 18000 Series Enterprise Storage System OceanStor 18000 series enterprise storage system supports multiple controllers (number of controllers >=2). When the storage system has two controllers, it supports ALUA and A/A. When the storage system has more than two controllers, it supports only A/A but not ALUA (by the release of this document). To facilitate future capacity expansion, you are advised to disable ALUA on the OceanStor 18000 series enterprise storage system and its host. Table 9-12 describes the policy configuration. Table 9-12 Recommended policies Storage Information Recommended Policy Remarks ALUA disabled SATP VMW_SATP_DEFAULT_AA See 1 and 2 in PSP VMW_PSP_FIXED the note. 1. You need to manually specify the preferred path for each LUN on the management software of VMware. For a LUN that already has a preferred path, change the path to a non-preferred one and specify the preferred path for the LUN again. 2. After a path recovers from a fault, its services can be switched back. 88

9.7 LUN Failure Policy

A LUN becomes inaccessible after all of its paths fail. However, the VMware host still accepts I/Os destined for this LUN for a certain period of time. If any path to the LUN recovers during this period, I/Os continue to be sent over that path. If no path recovers during this period, each I/O is returned with an error flag.

9.8 Path Policy Query and Modification

This section describes how to use the CLI to query and modify path policies. The commands for querying and modifying path policies vary with the host operating system version. The following sections detail the commands for different host operating systems.

9.8.1 ESX/ESXi 4.0

The following is an example command for querying path policies:

[root@e4 ~]# esxcli nmp device list -d naa.66666661006666650092f53300000045
naa.66666661006666650092f53300000045
   Device Display Name: HUASY iSCSI Disk (naa.66666661006666650092f53300000045)
   Storage Array Type: VMW_SATP_DEFAULT_AA
   Storage Array Type Device Config:
   Path Selection Policy: VMW_PSP_FIXED
   Path Selection Policy Device Config: {preferred=vmhba33:c0:t1:l0;current=vmhba33:c0:t0:l0}
   Working Paths: vmhba33:c0:t0:l0
[root@e4 ~]#

9.8.2 ESX/ESXi 4.1

Query

The following is an example command for querying path policies:

[root@tongrenyuan ~]# esxcli corestorage device list
naa.60022a11000416611b2a9d180000000a
   Display Name: HUASY Fibre Channel Disk (naa.60022a11000416611b2a9d180000000a)
   Size: 56320
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.60022a11000416611b2a9d180000000a
   Vendor: HUASY
   Model: S5600T
   Revision: 2105
   SCSI Level: 4
   Is Pseudo: false
   Status: on
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Attached Filters:

   VAAI Status: unknown
   Other UIDs: vml.020001000060022a11000416611b2a9d180000000a533536303054
naa.60022a11000416611b2a929700000005
   Display Name: HUASY Fibre Channel Disk (naa.60022a11000416611b2a929700000005)
   Size: 51200
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.60022a11000416611b2a929700000005
   Vendor: HUASY
   Model: S5600T
   Revision: 2105
   SCSI Level: 4
   Is Pseudo: false
   Status: on
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Attached Filters:
   VAAI Status: unknown
   Other UIDs: vml.020000000060022a11000416611b2a929700000005533536303054
naa.600508e000000000b573f30ad3068305
   Display Name: LSILOGIC Serial Attached SCSI Disk (naa.600508e000000000b573f30ad3068305)
   Size: 139236
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.600508e000000000b573f30ad3068305
   Vendor: LSILOGIC
   Model: Logical Volume
   Revision: 3000
   SCSI Level: 2
   Is Pseudo: false
   Status: on
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Attached Filters:
   VAAI Status: unknown
   Other UIDs: vml.0200000000600508e000000000b573f30ad30683054c6f67696361
[root@tongrenyuan ~]#
[root@tongrenyuan ~]# esxcli nmp device list
naa.60022a11000416611b2a9d180000000a
   Device Display Name: HUASY Fibre Channel Disk (naa.60022a11000416611b2a9d180000000a)
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on;explicit_support=on;explicit_allow=on;alua_followover=on;{tpg_id=2,tpg_state=ao}}
   Path Selection Policy: VMW_PSP_MRU
   Path Selection Policy Device Config: Current Path=vmhba1:C0:T0:L1
   Working Paths: vmhba1:c0:t0:l1
naa.60022a11000416611b2a929700000005

   Device Display Name: HUASY Fibre Channel Disk (naa.60022a11000416611b2a929700000005)
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on;explicit_support=on;explicit_allow=on;alua_followover=on;{tpg_id=2,tpg_state=ao}}
   Path Selection Policy: VMW_PSP_MRU
   Path Selection Policy Device Config: Current Path=vmhba1:C0:T0:L0
   Working Paths: vmhba1:c0:t0:l0
naa.600508e000000000b573f30ad3068305
   Device Display Name: LSILOGIC Serial Attached SCSI Disk (naa.600508e000000000b573f30ad3068305)
   Storage Array Type: VMW_SATP_LOCAL
   Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
   Path Selection Policy: VMW_PSP_FIXED
   Path Selection Policy Device Config: {preferred=vmhba0:c1:t0:l0;current=vmhba0:c1:t0:l0}
   Working Paths: vmhba0:c1:t0:l0
[root@tongrenyuan ~]#

esxcli corestorage device list is used to display existing disks. esxcli nmp device list is used to display disk paths.

Configuration

You can run the following command to query the path policy parameters that can be modified:

[root@tongrenyuan ~]# esxcli nmp psp getconfig --device naa.600508e000000000b573f30ad3068305
{preferred=vmhba0:c1:t0:l0;current=vmhba0:c1:t0:l0}

The preceding output shows that the preferred and current parameters of device naa.600508e000000000b573f30ad3068305 can be modified. Run the following command to modify path policy parameters:

esxcli nmp psp setconfig --preferred new_value --device naa.600508e000000000b573f30ad3068305

9.8.3 ESXi 5.0

The following is an example command for querying path policies:

~ # esxcli storage nmp device list
naa.666666610066666502b85d9200000014
   Device Display Name: HUASY iSCSI Disk (naa.666666610066666502b85d9200000014)
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on;explicit_support=on;explicit_allow=on;alua_followover=on;{tpg_id=1,tpg_state=ao}{tpg_id=2,tpg_state=ano}}
   Path Selection Policy: VMW_PSP_MRU
   Path Selection Policy Device Config: Current Path=vmhba39:C0:T0:L2
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba39:c0:t0:l2
naa.6666666100666665016b2d290000001c
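On ESXi 5.0 and later, the esxcli nmp psp getconfig and setconfig syntax shown in the ESX/ESXi 4.1 Configuration part above is replaced by the esxcli storage nmp namespace. The following is a minimal sketch of the equivalent modification commands; it reuses the device ID and path from the example output above purely for illustration and assumes that the LUN is to be switched to the fixed path selection policy.

~ # esxcli storage nmp device set --device naa.666666610066666502b85d9200000014 --psp VMW_PSP_FIXED
(Changes the path selection policy of the LUN to VMW_PSP_FIXED.)
~ # esxcli storage nmp psp fixed deviceconfig set --device naa.666666610066666502b85d9200000014 --path vmhba39:C0:T0:L2
(Specifies the preferred path for the LUN.)
~ # esxcli storage nmp psp fixed deviceconfig get --device naa.666666610066666502b85d9200000014
(Displays the preferred and current path settings, similar to the getconfig output above; valid only for LUNs that use VMW_PSP_FIXED.)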

9.8.4 ESXi 5.1

The following is an example command for querying path policies:

~ # esxcli storage nmp device list
naa.60026b904a3e070013165e5207145089
   Device Display Name: Local DELL Disk (naa.60026b904a3e070013165e5207145089)
   Storage Array Type: VMW_SATP_LOCAL
   Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
   Path Selection Policy: VMW_PSP_FIXED
   Path Selection Policy Device Config: {preferred=vmhba1:c2:t0:l0;current=vmhba1:c2:t0:l0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba1:c2:t0:l0
   Is Local SAS Device: false
   Is Boot USB Device: false
naa.6666666100666665010739820000000f
   Device Display Name: HUASY iSCSI Disk (naa.6666666100666665010739820000000f)
   Storage Array Type: VMW_SATP_DEFAULT_AA
   Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration.
   Path Selection Policy: VMW_PSP_FIXED
   Path Selection Policy Device Config: {preferred=vmhba35:c0:t1:l0;current=vmhba35:c0:t1:l0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba35:c0:t1:l0
   Is Local SAS Device: false
   Is Boot USB Device: false
mpx.vmhba34:c0:t0:l0
   Device Display Name: Local TEAC CD-ROM (mpx.vmhba34:c0:t0:l0)
   Storage Array Type: VMW_SATP_LOCAL
   Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
   Path Selection Policy: VMW_PSP_FIXED
   Path Selection Policy Device Config: {preferred=vmhba34:c0:t0:l0;current=vmhba34:c0:t0:l0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba34:c0:t0:l0
   Is Local SAS Device: false
   Is Boot USB Device: false
t10.dp BACKPLANE000000
   Device Display Name: Local DP Enclosure Svc Dev (t10.dp BACKPLANE000000)
   Storage Array Type: VMW_SATP_LOCAL
   Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
   Path Selection Policy: VMW_PSP_FIXED
   Path Selection Policy Device Config: {preferred=vmhba1:c0:t32:l0;current=vmhba1:c0:t32:l0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba1:c0:t32:l0
   Is Local SAS Device: false
   Is Boot USB Device: false
~ #
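Besides setting the policy per LUN, the default PSP that a SATP applies to the devices it claims can also be changed. The following is a hedged sketch for ESXi 5.x; VMW_SATP_DEFAULT_AA is used only as an example SATP and should be replaced with the SATP that actually claims the Huawei LUNs in your environment.

~ # esxcli storage nmp satp list
(Lists the available SATPs and their current default PSPs.)
~ # esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_FIXED
(Sets VMW_PSP_FIXED as the default PSP for devices claimed by VMW_SATP_DEFAULT_AA.)

Devices that are already claimed keep their existing policy; they may pick up the new default only after a host reboot or an explicit per-device change, so verify the result afterwards with esxcli storage nmp device list.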

9.8.5 ESXi 5.5

The following is an example command for querying path policies:

~ # esxcli storage nmp device list
naa.63030371003030370068574700000016
   Device Display Name: HUAWEI Fibre Channel Disk (naa.63030371003030370068574700000016)
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on;explicit_support=on;explicit_allow=on;alua_followover=on;{tpg_id=2,tpg_state=ano}{tpg_id=1,tpg_state=ao}}
   Path Selection Policy: VMW_PSP_MRU
   Path Selection Policy Device Config: Current Path=vmhba3:C0:T0:L0
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba3:c0:t0:l0
   Is Local SAS Device: false
   Is Boot USB Device: false
naa.6234567890abcde01a3b00fa0ec82e34
   Device Display Name: Local LSI Disk (naa.6234567890abcde01a3b00fa0ec82e34)
   Storage Array Type: VMW_SATP_LOCAL
   Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
   Path Selection Policy: VMW_PSP_FIXED
   Path Selection Policy Device Config: {preferred=vmhba1:c2:t0:l0;current=vmhba1:c2:t0:l0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba1:c2:t0:l0
   Is Local SAS Device: false
   Is Boot USB Device: false
naa.63030371003030370066bff500000015
   Device Display Name: HUAWEI Fibre Channel Disk (naa.63030371003030370066bff500000015)
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on;explicit_support=on;explicit_allow=on;alua_followover=on;{tpg_id=2,tpg_state=ano}{tpg_id=1,tpg_state=ao}}
   Path Selection Policy: VMW_PSP_MRU
   Path Selection Policy Device Config: Current Path=vmhba3:C0:T0:L1
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba3:c0:t0:l1
   Is Local SAS Device: false
   Is Boot USB Device: false
~ #
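To see the state of each individual path behind a LUN (for example, which paths belong to the ALUA active-optimized target port group), the path-level views can be used in addition to esxcli storage nmp device list. The following is a brief sketch that reuses the first Huawei device ID from the output above purely for illustration.

~ # esxcli storage nmp path list --device naa.63030371003030370068574700000016
(Lists the NMP view of each path, including the ALUA target port group state reported in the SATP path configuration.)
~ # esxcli storage core path list --device naa.63030371003030370068574700000016
(Lists the low-level path information, including the adapter, target, and path state.)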

9.9 Differences Between iSCSI Multi-Path Networks with Single and Multiple HBAs

This section describes the differences between iSCSI multi-path networks that use a single HBA and those that use multiple HBAs.

9.9.1 iSCSI Multi-Path Network with a Single HBA

A blade server usually has only one HBA apart from the one used for management. For example, an IBM HS22 with eight network ports can provide only one HBA during VMkernel creation. In this case, you can bind two VMkernel ports to that HBA. Practical experience has shown this configuration to be workable. Figure 9-6 shows this configuration.

Figure 9-6 iSCSI network with a single HBA

9.9.2 iSCSI Multi-Path Network with Multiple HBAs

If two or more HBAs are available, you can bind VMkernel ports to different HBAs to set up a cross-connection network. Figure 9-7 shows a parallel network where two VMkernel ports are bound to network ports of the HBAs on controller A.

Figure 9-7 iSCSI network A with multiple HBAs

Figure 9-8 shows the port mapping.

Figure 9-8 Port mapping of iSCSI network A with multiple HBAs

Figure 9-9 shows a cross-connection network where two VMkernel ports are bound to two HBAs, one connected to controller A and the other to controller B.

Figure 9-9 iSCSI network B with multiple HBAs

In this configuration, both NIC 1 and NIC 2 (the VMkernel ports) are bound to controller A and controller B, forming a cross-connection network. Services are not affected when any single path fails. Figure 9-10 shows the port mapping.

Figure 9-10 Port mapping of iSCSI network B with multiple HBAs
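The VMkernel-to-HBA bindings described in this section are usually configured in vSphere Client, but on ESXi 5.x they can also be created from the CLI with iSCSI port binding. The following is a minimal sketch; vmhba33, vmk1, and vmk2 are placeholder names for the software iSCSI adapter and the two VMkernel ports, and each VMkernel port is assumed to be on a port group with only one active uplink.

~ # esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
~ # esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
(Binds the two VMkernel ports to the software iSCSI adapter.)
~ # esxcli iscsi networkportal list --adapter vmhba33
(Verifies the bindings.)
~ # esxcli storage core adapter rescan --adapter vmhba33
(Rescans the adapter so that the new paths are discovered.)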

10 Common Commands

This chapter describes common commands for VMware.

Viewing the Version

Run the following commands to view the VMware version:

~ # vmware -l
VMware ESXi 5.1.0 GA
~ # vmware -v
VMware ESXi 5.1.0 build-799733
~ #

Viewing Hardware Information

Run the following commands to view hardware information such as ESX hardware and kernel information:

esxcfg-info -a (Displays all information.)
esxcfg-info -w (Displays ESX hardware information.)

Configuring Firewalls

Run the following commands to configure the firewall:

esxcfg-firewall -e sshClient (Opens the firewall SSH port.)
esxcfg-firewall -d sshClient (Closes the firewall SSH port.)

Obtaining Help Documentation

Command syntax varies with the host system version. Perform the following steps to obtain the help documentation for different host system versions.

Step 1 Log in to the VMware official website.
The VMware official website: http://www.vmware.com/support/developer/vcli/
Step 2 Select a VMware version.

The latest VMware version, 5.1, is used as an example. Click vSphere Command-Line Interface Reference, as shown in Figure 10-1.

Figure 10-1 Selecting a VMware version

You are then taken to the help page of the selected VMware version.

----End
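The esxcfg-firewall command shown above applies to classic ESX hosts. On ESXi 5.x, where that command is not available, the firewall is managed through esxcli instead; the following is a possible equivalent, shown only as a sketch.

~ # esxcli network firewall ruleset list
(Lists the firewall rulesets and whether each one is enabled.)
~ # esxcli network firewall ruleset set --ruleset-id sshClient --enabled true
(Enables the SSH client ruleset; set --enabled false to disable it again.)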

11 Host High-Availability

11.1 Overview

As services grow, key applications must be available at all times, and the system must be fault tolerant. However, fault-tolerant systems are costly, so economical solutions that provide fault tolerance are required. A high availability (HA) solution ensures the availability of applications and data in the event of any system component fault. The solution aims to eliminate single points of failure and minimize the impact of expected or unexpected system downtime.

11.1.1 Working Principle and Functions

Working Principle

VMware HA continuously monitors all virtualized servers in a resource pool and detects physical server and operating system failures. To monitor physical servers, an agent on each server maintains a heartbeat with the other servers in the resource pool, so that a loss of heartbeat automatically initiates the restart of all affected virtual machines on other servers in the resource pool. VMware HA ensures that sufficient resources are available in the resource pool at all times to restart virtual machines on different physical servers in the event of a server failure. Safe restart of virtual machines is made possible by the locking technology in the ESX Server storage stack, which allows multiple ESX Server hosts to have simultaneous access to the same virtual machine files.

Functions

VMware HA initiates failover for all running VMs, within the configured failover capacity, in the event of an ESX Server host failure. VMware HA can automatically detect server faults and restart VMs without human intervention. VMware HA interworks with Distributed Resource Scheduler (DRS) to implement dynamic and intelligent resource allocation and VM optimization after failover. After a host becomes faulty and its VMs are restarted on another host, DRS can provide further migration suggestions or directly migrate VMs to achieve optimal virtual machine placement and balanced resource allocation.

11.1.2 Relationship Among VMware HA, DRS, and vMotion

VMware vMotion dynamically migrates VMs among different physical hosts (ESX hosts). VMware HA employs vMotion to migrate VMs to healthy ESX hosts in real time when VMs fail or an ESX host encounters an error. Incorporating vMotion and HA, VMware DRS can dynamically migrate VMs to ESX hosts carrying a lighter load according to the CPU or memory usage of the ESX hosts. You can use DRS to migrate VMs from one ESX host to different ESX hosts for load balancing.

11.2 Installation and Configuration

For information about how to install and configure VMware HA, visit:
http://www.vmware.com/files/cn/pdf/support/vmware-vsp_41_availability-pg-cn.pdf
Huawei also provides VMware HA configuration guides. You can obtain the guides from the Huawei customer service center.

11.3 Log Collection

Perform the following steps to collect host logs:

Step 1 Use vSphere Client to log in to the ESX server.
Step 2 Choose System Management > System Log.
Step 3 Click Export System Logs, as shown in Figure 11-1.

Figure 11-1 Host logs
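In addition to exporting logs from vSphere Client, a diagnostic log bundle can be collected directly on the host. The following is a minimal sketch; the name and location of the generated bundle vary with the ESX/ESXi version.

~ # vm-support
(Collects host logs and configuration into a compressed support bundle that can be provided for analysis.)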

12 Acronyms and Abbreviations

C
CDFS    CD-ROM File System
CLI     Command Line Interface

D
DRS     Distributed Resource Scheduler

F
FC      Fibre Channel
FT      Fault Tolerance

G
GOS     Guest Operating System

H
HA      High Availability
HBA     Host Bus Adapter

I
IP      Internet Protocol
ISM     Integrated Storage Manager
iSCSI   Internet Small Computer Systems Interface

L
LUN     Logical Unit Number
LV      Logical Volume

N
NIC     Network Interface Card

P
PSP     Path Selection Plug-in

R
RAID    Redundant Array of Independent Disks
RDM     Raw Device Mapping

S
SATP    Storage Array Type Plug-in

V
VM      Virtual Machine
VMFS    Virtual Machine File System

W
WWN     World Wide Name