Configuration Guide for VMware ESX Server Host Attachment




Hitachi Unified Storage VM
Hitachi Virtual Storage Platform
Hitachi Universal Storage Platform V/VM
Hitachi TagmaStore Universal Storage Platform
Hitachi TagmaStore Network Storage Controller

MK-98RD6716-08

© 2008-2013 Hitachi, Ltd. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written permission of Hitachi, Ltd. (hereinafter referred to as "Hitachi") and Hitachi Data Systems Corporation (hereinafter referred to as "Hitachi Data Systems").

Hitachi Data Systems reserves the right to make changes to this document at any time without notice and assumes no responsibility for its use. This document contains the most current information available at the time of publication. When new or revised information becomes available, this entire document will be updated and distributed to all registered users.

All of the features described in this document may not be currently available. Refer to the most recent product announcement or contact your local Hitachi Data Systems sales office for information about feature and product availability.

Notice: Hitachi Data Systems products and services can be ordered only under the terms and conditions of the applicable Hitachi Data Systems agreements. The use of Hitachi Data Systems products is governed by the terms of your agreements with Hitachi Data Systems.

Hitachi is a registered trademark of Hitachi, Ltd. in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd. in the United States and other countries. All other trademarks, service marks, and company names are properties of their respective owners. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.

Contents

Preface
    Intended Audience
    Product Version
    Document Revision Level
    Changes in this Revision
    Referenced Documents
    Document Organization
    Document Conventions
    Convention for Storage Capacity Values
    Accessing Product Documentation
    Getting Help
    Comments

Overview
    About the Hitachi RAID Storage Systems
    About the VMware ESX Server Platform
    Installation and Configuration Roadmap

Preparing for New Device Configuration
    Installation and Configuration Requirements
    Installing the Hitachi RAID Storage System
    Configuring the Hitachi RAID Storage System
        Setting the System Option Modes
        Configuring the Ports
        Setting the Host Modes and Host Mode Options
    Configuring the Host Adapters
        Settings for QLogic Adapters
        Settings for Emulex Adapters
    Connecting the Storage System to the VMware Server

Configuring the New Devices
    Creating the VMFS Datastores
    Adding a Hard Disk to a Virtual Machine

Failover and SNMP
    Host Failover
    Path Failover
    SNMP Remote System Management

Troubleshooting
    General Troubleshooting
    Calling the Hitachi Data Systems Support Center

Specifications for Device Types

Using VMware with Hitachi RAID Storage Systems
    VMware ESX Server and VirtualCenter Compatibility
    Installing and Configuring VMware
    Creating and Managing VMware Infrastructure Components

Acronyms and Abbreviations

Preface

This document describes and provides instructions for installing and configuring the devices on the Hitachi RAID storage systems for operations in a VMware ESX Server environment. The Hitachi RAID storage system models include the following:

- Hitachi Unified Storage VM (HUS VM)
- Hitachi Virtual Storage Platform (VSP)
- Hitachi Universal Storage Platform V and VM (USP V/VM)
- Hitachi TagmaStore Universal Storage Platform and Network Storage Controller (USP/NSC)

Please read this document carefully to understand how to use this product, and maintain a copy for reference purposes.

This preface includes the following information:

- Intended Audience
- Product Version
- Document Revision Level
- Changes in this Revision
- Referenced Documents
- Document Organization
- Document Conventions
- Convention for Storage Capacity Values
- Accessing Product Documentation
- Getting Help
- Comments

Intended Audience

This document is intended for system administrators, Hitachi Data Systems representatives, and authorized service providers who are involved in installing, configuring, and operating the Hitachi RAID storage systems. Readers of this document should be familiar with the following:

- Data processing and RAID storage systems and their basic functions.
- The Hitachi RAID storage systems and the User and Reference Guide for the storage systems.
- The Storage Navigator software for the Hitachi RAID storage systems and the Storage Navigator User's Guide.
- The VMware ESX Server operating system and the hardware hosting the VMware Server system.
- The hardware used to attach the Hitachi RAID storage systems to the VMware host, including fibre-channel cabling, host adapters, and switches.

Product Version

This document revision applies to the following microcode levels:

- Hitachi Unified Storage VM: 73-01-0x
- Hitachi Virtual Storage Platform: 70-03-01 or later
- Hitachi Universal Storage Platform V/VM: 60-08-02 or later
- Hitachi TagmaStore USP/NSC: 50-09-3x or later

Document Revision Level

Revision         Date            Description
MK-98RD6716-P    August 2008     Preliminary release
MK-98RD6716-00   October 2008    Initial release; supersedes and replaces MK-98RD6716-P
MK-98RD6716-01   July 2010       Revision 1; supersedes and replaces MK-98RD6716-00
MK-98RD6716-02   October 2010    Revision 2; supersedes and replaces MK-98RD6716-01
MK-98RD6716-03   April 2011      Revision 3; supersedes and replaces MK-98RD6716-02
MK-98RD6716-04   August 2011     Revision 4; supersedes and replaces MK-98RD6716-03
MK-98RD6716-05   November 2011   Revision 5; supersedes and replaces MK-98RD6716-04
MK-98RD6716-06   December 2011   Revision 6; supersedes and replaces MK-98RD6716-05
MK-98RD6716-07   September 2012  Revision 7; supersedes and replaces MK-98RD6716-06
MK-98RD6716-08   July 2013       Revision 8; supersedes and replaces MK-98RD6716-07

Changes in this Revision

- Updated the URL of the Hitachi Data Systems Portal in the Getting Help section of the preface.
- Updated the description of host mode option (HMO) 71 (applies to HUS VM as well as VSP) (Table 2-6).
- Fixed a typographical error in the description of system option mode 808 (Table 2-3).

Referenced Documents

Hitachi Unified Storage VM documents:
- Block Module Hardware User Guide, MK-92HM7005
- Provisioning Guide, MK-92HM7012
- Storage Navigator User Guide, MK-92HM7016
- Storage Navigator Messages, MK-92HM7017

Hitachi Virtual Storage Platform documents:
- Provisioning Guide for Open Systems, MK-90RD7022
- Storage Navigator User Guide, MK-90RD7027
- Storage Navigator Messages, MK-90RD7028
- User and Reference Guide, MK-90RD7042
- Hitachi Dynamic Link Manager User's Guide for VMware, MK-92DLM130

Hitachi Universal Storage Platform V/VM documents:
- LUN Manager User's Guide, MK-96RD615
- LUN Expansion User's Guide, MK-96RD616
- Storage Navigator User's Guide, MK-96RD621
- Virtual LVI/LUN and Volume Shredder User's Guide, MK-96RD630
- Storage Navigator Messages, MK-96RD633
- User and Reference Guide, MK-96RD635

Hitachi Universal Storage Platform and Network Storage Controller documents:
- Storage Navigator Error Codes, MK-94RD202
- LUN Manager User's Guide, MK-94RD203
- LUN Expansion and Virtual LVI/LUN User's Guide, MK-94RD205
- Storage Navigator User's Guide, MK-94RD206
- User and Reference Guide, MK-94RD231

Document Organization

The following list provides an overview of the contents and organization of this document. The first page of each chapter provides links to the sections in that chapter.

- Chapter 1, Overview: Provides an overview of the Hitachi RAID storage systems, the VMware host, and the host attachment procedure.
- Chapter 2, Preparing for New Device Configuration: Describes and provides the requirements and instructions for preparing the Hitachi RAID storage system, VMware host, and other SAN components for the attachment of new storage.
- Chapter 3, Configuring the New Devices: Describes and provides the requirements for configuring the new Hitachi RAID storage devices on the VMware host.
- Chapter 4, Failover and SNMP: Provides information about configuring the Hitachi RAID storage system for failover and SNMP.
- Chapter 5, Troubleshooting: Provides troubleshooting information for VMware host attachment.
- Appendix A, Specifications for Device Types: Provides specifications for the device emulation types in the Hitachi RAID storage systems.
- Appendix B, Using VMware with Hitachi RAID Storage Systems: Contains reference information to help you implement VMware software with the Hitachi RAID storage systems.

Document Conventions

This document uses the following terminology convention:

- Hitachi RAID storage system, storage system: Refers to all models of the Hitachi RAID storage systems unless otherwise noted.

This document uses the following typographic conventions:

- Bold: Indicates text on a window, other than the window title, including menus, menu options, buttons, fields, and labels. Example: Click OK.
- Italic: Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: copy source-file target-file. Note: Angled brackets (< >) are also used to indicate variables.
- screen/code: Indicates text that is displayed on screen or entered by the user. Example: # pairdisplay -g oradb
- < > angled brackets: Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: # pairdisplay -g <group>. Note: Italic font is also used to indicate variables.
- [ ] square brackets: Indicates optional values. Example: [ a | b ] indicates that you can choose a, b, or nothing.
- { } braces: Indicates required or expected values. Example: { a | b } indicates that you must choose either a or b.
- | vertical bar: Indicates that you have a choice between two or more options or arguments. Examples: [ a | b ] indicates that you can choose a, b, or nothing. { a | b } indicates that you must choose either a or b.

This document uses the following icons to draw attention to information:

- Note: Calls attention to important or additional information.
- Tip: Provides helpful information, guidelines, or suggestions for performing tasks more effectively.
- Caution: Warns the user of adverse conditions or consequences (for example, disruptive operations).
- WARNING: Warns the user of severe conditions or consequences (for example, destructive operations).

Convention for Storage Capacity Values

Physical storage capacity values (for example, disk drive capacity) are calculated based on the following values:

Physical capacity unit   Value
1 KB                     1,000 (10^3) bytes
1 MB                     1,000 KB or 1,000^2 bytes
1 GB                     1,000 MB or 1,000^3 bytes
1 TB                     1,000 GB or 1,000^4 bytes
1 PB                     1,000 TB or 1,000^5 bytes
1 EB                     1,000 PB or 1,000^6 bytes

Logical storage capacity values (for example, logical device capacity) are calculated based on the following values:

Logical capacity unit    Value
1 block                  512 bytes
1 KB                     1,024 (2^10) bytes
1 MB                     1,024 KB or 1,024^2 bytes
1 GB                     1,024 MB or 1,024^3 bytes
1 TB                     1,024 GB or 1,024^4 bytes
1 PB                     1,024 TB or 1,024^5 bytes
1 EB                     1,024 PB or 1,024^6 bytes

Accessing Product Documentation

The user documentation for the Hitachi RAID storage systems is available on the Hitachi Data Systems Portal: https://portal.hds.com. Check this site for the most current documentation, including important updates that may have been made after the release of the product.
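The difference between the two conventions matters when sizing devices; the following Python sketch (not part of the original guide, names are illustrative) shows how the same nominal value diverges under the physical (decimal) and logical (binary) conventions above:

```python
# Capacity unit conventions from this guide:
# physical units are powers of 1,000; logical units are powers of 1,024.
PHYSICAL = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
LOGICAL = {"KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}

def to_bytes(value, unit, logical=False):
    """Convert a capacity value to bytes using the guide's conventions."""
    table = LOGICAL if logical else PHYSICAL
    return value * table[unit]

print(to_bytes(1, "TB"))                # 1000000000000 (physical)
print(to_bytes(1, "TB", logical=True))  # 1099511627776 (logical)
```

A nominal "1 TB" logical value is therefore roughly 10% larger in bytes than a "1 TB" physical value.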

Getting Help

The Hitachi Data Systems customer support staff is available 24 hours a day, seven days a week. If you need technical support, log on to the Hitachi Data Systems Portal for contact information: https://portal.hds.com

Comments

Please send us your comments on this document: doc.comments@hds.com. Include the document title, number, and revision, and refer to specific sections and paragraphs whenever possible. Thank you! (All comments become the property of Hitachi Data Systems.)

1  Overview

This chapter provides an overview of the Hitachi RAID storage systems, the VMware host platform, and the installation and configuration activities:

- About the Hitachi RAID Storage Systems
- About the VMware ESX Server Platform
- Installation and Configuration Roadmap

About the Hitachi RAID Storage Systems

The Hitachi RAID storage systems offer a wide range of storage and data services, including thin provisioning with Hitachi Dynamic Provisioning software, application-centric storage management and logical partitioning, and simplified and unified data replication across heterogeneous storage systems. These storage systems are an integral part of the Services-Oriented Storage Solutions architecture from Hitachi Data Systems, providing the foundation for matching application requirements to different classes of storage and delivering critical services such as:

- Business continuity services
- Content management services (search, indexing)
- Non-disruptive data migration
- Volume management across heterogeneous storage arrays
- Thin provisioning
- Security services (immutability, logging, auditing, data shredding)
- Data de-duplication
- I/O load balancing
- Data classification
- File management services

The Hitachi RAID storage systems include the following:

- Hitachi Unified Storage VM (HUS VM)
- Hitachi Virtual Storage Platform (VSP)
- Hitachi Universal Storage Platform V and VM (USP V/VM)
- Hitachi TagmaStore Universal Storage Platform and Network Storage Controller (USP/NSC)

These storage systems provide heterogeneous connectivity to support multiple concurrent attachment to a variety of host operating systems (OS), including VMware as well as Windows, UNIX, and Linux, enabling massive consolidation and storage aggregation across disparate platforms. The storage systems can operate with multi-host applications and host clusters, and are designed to handle very large databases as well as data warehousing and data mining applications that store and retrieve terabytes of data.

The Hitachi RAID storage systems are configured with OPEN-V logical units (LUs) and are compatible with most fibre-channel (FC) host bus adapters (HBAs) and fibre-channel-over-Ethernet (FCoE) converged network adapters (CNAs) (VSP only). Users can perform additional LU configuration activities using the LUN Manager, Virtual LVI/LUN (VLL), and LUN Expansion (LUSE) features provided by the Hitachi Storage Navigator software for the storage systems.

For more information about storage solutions and the Hitachi RAID storage systems, please contact your Hitachi Data Systems account team.

About the VMware ESX Server Platform

The VMware ESX Server product is a bare-metal hypervisor that partitions physical servers into multiple virtual machines. Each virtual machine represents a complete system, with processors, memory, networking, storage, and BIOS. VMware ESX Server enables multiple virtual machines to share physical resources, run unmodified operating systems and applications, and run the most resource-intensive applications side-by-side on the same server.

For more information about the VMware ESX Server host platform, see the VMware user documentation, or contact VMware technical support.

Installation and Configuration Roadmap

Table 1-1 shows the activities that are performed to connect the Hitachi RAID storage system to the VMware ESX Server host and configure the new logical devices (LDEVs) on the storage system for operations with the VMware host. Some activities are performed by the Hitachi Data Systems representative, while other activities are performed by the user:

- The Hitachi Data Systems representative performs the physical installation of the Hitachi RAID storage system and connection to the host.
- The user prepares for the installation and configures the new devices, with assistance from the Hitachi Data Systems representative.

Table 1-1  Installation and Configuration Roadmap

1. Verify that the installation and configuration requirements have been met.
2. Prepare the Hitachi RAID storage system, VMware host, and host adapters for the installation.
3. Connect the Hitachi RAID storage system to the VMware ESX Server host.
4. Configure the new devices for operation with the VMware host.
5. Create and manage the VMware infrastructure components.

2  Preparing for New Device Configuration

This chapter describes and provides the requirements and instructions for preparing the Hitachi RAID storage system, VMware host, and other SAN components for the attachment of new storage:

- Installation and Configuration Requirements
- Installing the Hitachi RAID Storage System
- Configuring the Hitachi RAID Storage System
    - Setting the System Option Modes
    - Configuring the Ports
    - Setting the Host Modes and Host Mode Options
- Configuring the Host Adapters
- Connecting the Storage System to the VMware Server

Installation and Configuration Requirements

Table 2-1 lists and describes the requirements for installing and configuring the Hitachi RAID storage system on a VMware ESX Server host.

Table 2-1  Installation and Configuration Requirements

Hitachi RAID storage system:
- The Hitachi Storage Navigator software must be installed and operational. For details, see the Storage Navigator User's Guide for the storage system.
- The Hitachi LUN Manager software on Storage Navigator must be installed and operational. You will use LUN Manager to configure the storage system.
- The availability of features and devices depends on the level of microcode installed on the Hitachi RAID storage system.

VMware server hardware:
- Review the hardware requirements for attaching new storage to the VMware ESX Server host. For details, see the VMware user documentation.
- For details about supported VMware server hardware and hardware for host attachment (for example, cables, hubs, switches), see the Hitachi Data Systems interoperability site: http://www.hds.com/products/interoperability

VMware operating system:
- Verify that the OS version, architecture, relevant patches, and maintenance levels are supported by the Hitachi RAID storage system. See the Hitachi Data Systems interoperability site for details about supported versions: http://www.hds.com/products/interoperability
- Verify that the VMware ESX Server host meets the latest system and software requirements for attaching new storage. For details, see the VMware ESX Server user documentation.
- Verify that you have the VMware ESX Server software installation media.

VAAI:
- VMware ESX 4.1 or later is required for VAAI operations.
- The Hitachi RAID storage systems (USP V/VM and later) support the VMware vStorage APIs for Array Integration (VAAI). VAAI enables the offload of specific storage operations from the VMware ESX host to the Hitachi RAID storage system for improved performance and efficiency. These APIs, available in VMware vSphere 4.1, provide integration with the advanced features and capabilities of the Hitachi RAID storage systems such as thin provisioning, dynamic tiering, and storage virtualization.
- For details, see the following sites: www.hds.com/go/vmware/ and www.vmware.com/products/vstorage-apis-for-array-integration/

Host adapters (HBAs and CNAs):
- Verify that the adapters are functioning properly.
- HBAs: For OM3 fiber and a 200-MB/s data transfer rate, the total cable length attached to each HBA must not exceed 500 meters (1,640 feet). Do not connect any OFC-type connectors to the Hitachi RAID storage system.
- CNAs (VSP only): For OM3 fiber and a 10-Gb/s data transfer rate, the total cable length attached to each CNA must not exceed 300 meters (984 feet). The diskless VSP model (no internal disk drives or flash drives) does not support the FCoE option.
- For details about installing the adapter and using the utilities and tools for the adapter, see the user documentation for the adapter.
- The VMware ESX Server drivers are based on the Emulex-provided lpfcdd-2xx 2.01g driver. This driver has been tested with VMware ESX Server in various fabric configurations. FC-AL configurations are not supported. Point-to-point and direct connections are not supported unless otherwise noted.
- For details about supported host adapters, drivers, optical cables, hubs, and switches, see the Hitachi Data Systems interoperability site: http://www.hds.com/products/interoperability

Storage area network (SAN):
- A SAN is required to connect the Hitachi RAID storage system to the VMware ESX Server host. VMware does not support FC-AL and direct-connect connections to storage systems.
- For information about setting up storage arrays for VMware ESX Server, see the VMware user documentation.
- For details about supported fibre-channel switches, topology, and firmware versions for SAN configurations, see the Hitachi Data Systems interoperability site: http://www.hds.com/products/interoperability
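The cable-length limits above lend themselves to a simple pre-installation check. The following Python sketch is illustrative only (the function and table names are not from the guide); it encodes the two OM3 limits stated in Table 2-1:

```python
# OM3 cable-length limits from Table 2-1:
# 500 m per HBA (200-MB/s FC), 300 m per CNA (10-Gb/s FCoE, VSP only).
LIMITS_M = {"HBA": 500, "CNA": 300}

def cable_length_ok(adapter_type, total_length_m):
    """Return True if the total attached cable length is within the limit."""
    return total_length_m <= LIMITS_M[adapter_type]

print(cable_length_ok("HBA", 480))  # True  (within 500 m)
print(cable_length_ok("CNA", 350))  # False (exceeds 300 m)
```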

Installing the Hitachi RAID Storage System

The Hitachi RAID storage systems come with all the hardware and cabling required for installation. The Hitachi Data Systems representative follows the instructions and precautions in the Maintenance Manual for the storage system when installing the product. The installation tasks include:

- Checking all specifications to ensure proper installation and configuration.
- Installing and assembling all hardware and cabling.
- Verifying that the Storage Navigator software has been installed and is ready for use. For details, see the Storage Navigator User's Guide for the storage system.
- Installing and formatting the logical devices (LDEVs). The user provides the LDEV configuration information to the Hitachi Data Systems representative, including the number of OPEN-V devices, LUSE devices, and VLL devices. For details, see the following documents:
    - HUS VM: Provisioning Guide.
    - Hitachi VSP: Provisioning Guide for Open Systems.
    - Hitachi USP V/VM, USP/NSC: LUN Manager User's Guide, LUN Expansion User's Guide, and Virtual LVI/LUN User's Guide.

Table 2-2 describes the types of logical devices that can be configured on the Hitachi RAID storage systems for operation with the VMware host system. For details on the device emulation types, see Appendix A.

Table 2-2  Logical Device Types

- OPEN-V devices: OPEN-V logical units (LUs) are logical devices of variable sizes as defined by the user.
- LUSE devices (OPEN-V*n): LUSE devices are LUs that are created by combining up to 36 LUs. The LUN Expansion (LUSE) software on Storage Navigator enables you to configure these devices. LUSE devices are designated as OPEN-x*n, where x is the LU type (for example, OPEN-V) and n is the number of combined devices. For example, a LUSE device created from 10 OPEN-V LUs is designated as an OPEN-V*10 device. This enables the host to access the data stored on the storage system using fewer LU numbers.
- VLL devices (OPEN-V VLL): VLL devices are customized LUs that are configured using the Virtual LVI/LUN software on Storage Navigator. The VLL devices are configured by slicing a single LU into several smaller LUs that best fit your application needs to improve host access to frequently used files. VLL devices are designated as OPEN-V-CVS devices, where CVS stands for custom volume size.
- VLL LUSE devices (OPEN-V*n VLL): The VLL LUSE feature allows you to combine Virtual LVI/LUN devices (instead of standard OPEN-V LUs) into LUSE devices. For example, a VLL LUSE device created by using LUSE to combine 10 OPEN-V VLL (OPEN-V-CVS) volumes into a single logical device is designated as an OPEN-V*10-CVS device.
- OPEN-x devices: The OPEN-x LUs (for example, OPEN-3, OPEN-9) are logical devices of predefined sizes. The Hitachi RAID storage system supports OPEN-3, OPEN-8, OPEN-9, OPEN-E, and OPEN-L devices. For more information about these device types, contact your Hitachi Data Systems account team.
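The designation rules in Table 2-2 compose predictably: a LUSE count appends "*n" and a VLL (custom volume size) device appends "-CVS". The following Python helper is a hypothetical illustration of those naming rules, not a Hitachi tool:

```python
# Hypothetical helper that builds the device designation strings
# described in Table 2-2 (OPEN-V, OPEN-V*n, OPEN-V-CVS, OPEN-V*n-CVS).

def designation(base="OPEN-V", luse_count=1, vll=False):
    """Return the emulation-type designation for a logical device."""
    name = base
    if luse_count > 1:          # LUSE combines up to 36 LUs
        name += f"*{luse_count}"
    if vll:                     # VLL devices carry the -CVS suffix
        name += "-CVS"
    return name

print(designation())                          # OPEN-V
print(designation(luse_count=10))             # OPEN-V*10
print(designation(vll=True))                  # OPEN-V-CVS
print(designation(luse_count=10, vll=True))   # OPEN-V*10-CVS
```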

Configuring the Hitachi RAID Storage System

Complete the following tasks to configure the Hitachi RAID storage system for attachment to the VMware server:

- Setting the System Option Modes
- Configuring the Ports
- Setting the Host Modes and Host Mode Options

Setting the System Option Modes

To provide greater flexibility, the Hitachi RAID storage systems have additional operational parameters called system option modes (SOMs) that allow you to tailor the storage system to your unique operating requirements. The SOMs are set on the service processor (SVP) by the Hitachi Data Systems representative.

To set and manage the SOMs:

1. Review the complete list of SOMs in the User and Reference Guide for your storage system:
    - HUS VM: Block Module Hardware User Guide, MK-92HM7005
    - Hitachi VSP: User and Reference Guide, MK-90RD7042
    - Hitachi USP V/VM: User and Reference Guide, MK-96RD635
    - Hitachi USP/NSC: User and Reference Guide, MK-94RD231
2. Work with your Hitachi Data Systems team to ensure that the appropriate SOMs for your operational environment are set on your storage system.
3. Check each new revision of the User and Reference Guide for SOM changes that may apply to your operational environment, and contact your Hitachi Data Systems representative as needed.

Table 2-3 lists and describes the SOMs for the VMware environment.

Note: The SOM information may have changed since this document was published. Contact your Hitachi Data Systems representative for the latest SOM information.

Table 2-3 SOMs for VMware Host Attachment

SOM 808 (Category: VMware VAAI)
Storage system: USP V/VM: 60-08-07 or later
Description: Used in combination with host mode option (HMO) 54 to control the ANSI version of Standard Inquiry (2 or 4). Apply this SOM when the USP V/VM is connected with VMware ESXi (5.x or later) and the VAAI function is used.
- SOM 808 ON, HMO 54 ON: 4 is returned as the ANSI version of Standard Inquiry.
- SOM 808 ON, HMO 54 OFF: 2 is returned as the ANSI version of Standard Inquiry.
- SOM 808 OFF (default), HMO 54 ON or OFF: 2 is returned as the ANSI version of Standard Inquiry.
Notes:
1. When HMO 54 is set to OFF, this SOM is disabled, and 2 is returned as the ANSI version regardless of the SOM 808 setting.
2. Because ESX/ESXi 4.1 does not refer to the ANSI version, SOM 808 and HMO 54 can be set without any effect on the ESX/ESXi 4.1 environment.
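The SOM 808 / HMO 54 combinations in Table 2-3 reduce to a simple truth table. The following sketch encodes it for reference; it is illustrative only (these settings are made on the SVP and in Storage Navigator, not through any host-side API, and the function name is hypothetical):

```python
def ansi_inquiry_version(som_808_on: bool, hmo_54_on: bool) -> int:
    """ANSI version reported in Standard Inquiry (Table 2-3).

    Version 4 is reported only when both SOM 808 and HMO 54 are ON;
    with HMO 54 OFF, SOM 808 is disabled and 2 is always returned.
    """
    if som_808_on and hmo_54_on:
        return 4
    return 2
```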

Configuring the Ports

Before connecting the storage system to the VMware host, you need to configure the ports on the Hitachi RAID storage system using the LUN Manager software on Storage Navigator. For instructions on configuring the ports:
- For HUS VM, see the Provisioning Guide.
- For VSP, see the Provisioning Guide for Open Systems.
- For USP V/VM and USP/NSC, see the LUN Manager User's Guide.

Port address. In fabric environments, the port addresses are assigned automatically by fabric switch port number and are not controlled by the storage system port settings. If you plan to connect different types of servers to the storage system via the same fabric switch, use the zoning function of the fabric switch.

Topology. Table 2-4 lists and describes the fibre parameter settings for VMware host attachment. Select the appropriate settings for each port based on the device to which the port is connected. Determine the topology parameters supported by the device and port type, and set your topology accordingly. For the latest information about port topology configurations supported by VMware versions and adapter/switch combinations, see the Hitachi Data Systems interoperability site: http://www.hds.com/products/interoperability

Note: FC topology with FCoE ports cannot be configured.

Table 2-4 Fibre Parameter Settings

Adapter: Fibre Channel
- Fabric enabled, FC-AL: not supported
- Fabric enabled, Point-to-Point: supported
- Fabric disabled, FC-AL: supported
- Fabric disabled, Point-to-Point: not supported

Adapter: FCoE (VSP storage system only)
- Fabric enabled, FC-AL: not supported
- Fabric enabled, Point-to-Point: not supported
- Fabric disabled, FC-AL: not supported
- Fabric disabled, Point-to-Point: supported
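Table 2-4 can be expressed as a small lookup for planning purposes. This is an illustrative sketch only; the actual fabric and topology settings are made per port in Storage Navigator, and the names below are mine, not part of any Hitachi or VMware API:

```python
# (adapter, fabric_enabled, topology) -> supported, per Table 2-4.
FIBRE_SETTINGS = {
    ("FC", True, "FC-AL"): False,
    ("FC", True, "Point-to-Point"): True,
    ("FC", False, "FC-AL"): True,
    ("FC", False, "Point-to-Point"): False,
    ("FCoE", True, "FC-AL"): False,
    ("FCoE", True, "Point-to-Point"): False,
    ("FCoE", False, "FC-AL"): False,
    ("FCoE", False, "Point-to-Point"): True,
}

def is_supported(adapter: str, fabric_enabled: bool, topology: str) -> bool:
    """Return True if the port setting combination is supported."""
    return FIBRE_SETTINGS[(adapter, fabric_enabled, topology)]
```

Note that only one combination is supported per adapter type with fabric enabled: Point-to-Point for Fibre Channel, and (with fabric disabled) Point-to-Point for FCoE.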

Setting the Host Modes and Host Mode Options

When you connect multiple server hosts of different platforms to a single port, you must group the server hosts into host groups that are segregated by platform. For example, if VMware hosts and Windows hosts are connected to a single port, you must create a host group for each platform, register the hosts in the appropriate host group, and then set the host mode and host mode options for each host group.

While host groups can be created with more than one WWN, it is recommended that you create one host group for each host adapter and name the host group the same as the nickname for the adapter. Creating one host group per adapter provides flexibility and is the only supported configuration when booting hosts from a SAN. Host groups are created per VMware cluster or per ESX host on the ports on each storage cluster that the VMware cluster or ESX hosts can access.

Table 2-5 lists the required host modes for VMware host attachment. Table 2-6 lists the HMOs that can be used for VMware host attachment. For instructions on creating host groups and setting host modes and HMOs, see the following manuals:
- For HUS VM, see the Provisioning Guide.
- For VSP, see the Provisioning Guide for Open Systems.
- For USP V/VM and USP/NSC, see the LUN Manager User's Guide.

Note: The host mode and HMO information may have changed since this document was published. Contact your Hitachi Data Systems team for the latest host mode and HMO information.

WARNING: Before setting any HMO, review its functionality carefully to determine whether it can be used for your configuration and environment. If you have any questions or concerns, contact your Hitachi Data Systems representative or the Support Center. Changing HMOs on a Hitachi RAID storage system that is already installed and configured is disruptive and requires the server to be rebooted.

Table 2-5 Host Modes for VMware Operations

Storage systems: HUS VM, VSP, USP V/VM, TagmaStore USP/NSC
Host modes: 01 [VMware], 21 [VMware Extension], 0A [NetWare]
Comments:
- If you use host mode 01, you cannot create a LUSE volume using a volume to which an LU path has already been defined. Before performing a LUSE operation on an LDEV with a path defined from a VMware host, make sure that the host mode is 21 [VMware Extension]. Use host mode 21 if you plan to create LUSE volumes.
- Use host mode 0A [NetWare] for a USP/NSC storage system attached to a VMware host.

Table 2-6 Host Mode Options for VMware Operations

HMO 2 (HUS VM, VSP, USP V/VM, USP/NSC)
Function: Veritas DBE+RAC support.
(1) The response of Test Unit Ready (TUR) for Persistent Reserve is changed.
(2) According to the SCSI-3 specification, Reservation Conflict is returned to a TUR issued via a path without a Reservation Key registered. Do not apply this option to Sun Cluster.
(3) Setting HMO 2 to ON enables a TUR issued via a path without a Reservation Key registered (to which Reservation Conflict used to be returned) to complete normally.
Host mode: Common
Comments: Mandatory. HMO 2 is required when the Veritas DBE for Oracle RAC (I/O Fencing) function is in use.

HMO 7 (HUS VM, VSP, USP V/VM)
Function: Changes the setting of whether to return the Unit Attention (UA) response when adding a LUN.
ON: UA response is returned. Sense code: REPORTED LUNS DATA HAS CHANGED.
OFF (default): UA response is not returned.
Host mode: Common. For VSP and HUS VM, HMO 7 works regardless of the host mode setting.
Firmware: USP V/VM: 60-01-30-00/00 or later
Notes:
1. Set HMO 7 to ON when you expect the REPORTED LUNS DATA HAS CHANGED UA at SCSI path change.
2. If the Unit Attention report occurs frequently and the load on the host side becomes high, the data transfer cannot be started on the host side and a timeout may occur.
3. If both HMO 7 and HMO 69 are set to ON, the UA of HMO 69 is returned to the host.
HMO 13 (HUS VM, VSP, USP V/VM, USP/NSC)
Function: Provides SIM notification when the number of link failures detected between ports exceeds the threshold.
Host mode: Common
Comments: Optional. Configure HMO 13 only when you are requested to do so.

HMO 19 (USP/NSC only)
Function: Reduces the processing time for the reserve command during I/O processing. Select this HMO when registering VMware server hosts in the host group.
Host mode: Common
Comments: This function is enabled by default on the HUS VM, VSP, and USP V/VM.
Firmware: USP/NSC: 50-07-66-00/00 or later

HMO 22 (HUS VM, VSP, USP V/VM, USP/NSC)
Function: When a reserved volume receives a Mode Sense command from a node that is not reserving the volume, the host receives the following response from the storage system:
ON: Normal response.
OFF (default): Reservation Conflict.
Host mode: Common
Firmware: USP V/VM: 60-02-52-00/00 or later (within the 60-02-5x range), 60-03-2x-00/00 or later; USP/NSC: 50-07-66-00/00 or later
Notes:
1. When HMO 22 is applied, the volume status (reserved/not reserved) is checked more frequently (several tens of milliseconds per LU).
2. When HMO 22 is applied, the host OS does not receive warning messages when a Mode Select command is issued to a reserved volume.
3. There is no influence on Veritas Cluster Server software when HMO 22 is set to OFF. Set HMO 22 to ON when there are numerous reservation conflicts.
4. Set HMO 22 to ON when Veritas Cluster Server is connected.

HMO 24 (USP/NSC)
Function: Enables 16-byte commands for accessing an LU larger than 2 TB. Use this option when you want to enable the operation only for a limited extent, for example, for a certain host group.
ON: Supported (normal end).
OFF (default): Not supported (ABEND).
Caution: Host offline required.
Note: When system option mode (SOM) 515 is used in combination with this HMO, SOM 515 is prioritized.
Host mode: Common
Firmware: USP/NSC: 50-08-05-00/00 or later

HMO 39 (HUS VM, VSP, USP V/VM)
Function: Resets jobs and returns UA to all the initiators connected to the host group where a Target Reset has occurred.
ON:
- Job reset range: Reset is performed on the jobs of all the initiators connected to the host group where the Target Reset occurred.
- UA set range: UA is returned to all the initiators connected to the host group where the Target Reset occurred.
OFF (default):
- Job reset range: Reset is performed on the jobs of the initiator that issued the Target Reset.
- UA set range: UA is returned to the initiator that issued the Target Reset.
Note: This option is used in the SVC environment, where the job reset range and UA set range need to be controlled per host group when a Target Reset is received.
Host mode: Common
Firmware: USP V/VM: 60-08-01-00/00 or later; VSP: 70-02-03-00/00 or later

HMO 41 (HUS VM, VSP, USP V/VM)
Function: Gives priority to processing Inquiry/Report LUN commands issued from the host where this option is set.
ON: Inquiry/Report LUN is processed with priority.
OFF (default): The operation is the same as before.
Host mode: Common
Firmware: USP V/VM: 60-03-24-00/00 or later

HMO 48 (USP V/VM)
Function: When this option is set to ON, in normal operation the pair status of the S-VOL is not changed to SSWS even when Read commands exceeding the threshold (1,000 per 6 minutes) are issued while a specific application is used.
ON: The pair status of the S-VOL is not changed to SSWS if Read commands exceeding the threshold are issued.
OFF (default): The pair status of the S-VOL is changed to SSWS if Read commands exceeding the threshold are issued.
Notes:
1. Set this option to ON for the host group if the transition of the pair status to SSWS is not desired when an application that issues Read commands (*1) exceeding the threshold (1,000 per 6 minutes) to the S-VOL is used in an HAM environment.
   *1: Currently, the vxdisksetup command of Solaris VxVM applies.
2. Even when a failure occurs in the P-VOL, if this option is set to ON (so that the pair status of the S-VOL is not changed to SSWS (*2)), the response time of a Read command to the S-VOL whose pair status remains Pair takes several milliseconds. If the option is set to OFF, the response time of a Read command to the S-VOL recovers to be equal to that of the P-VOL, because an error is judged to have occurred in the P-VOL when Read commands exceeding the threshold are issued.
   *2: Until the S-VOL receives a Write command, the pair status of the S-VOL is not changed to SSWS.
Host mode: Common
Firmware: USP V/VM: 60-06-10-00/10 or later (within the 60-06-1x range), 60-06-21-00/00 or later

HMO 49 (HUS VM, VSP, USP V/VM)
Function: Selects the BB_Credit value (HMO 49: low bit).
ON: The subsystem operates with a BB_Credit value of 80 or 255.
OFF (default): The subsystem operates with a BB_Credit value of 40 or 128.
The BB_Credit value is decided by the 2 bits of HMO 50 (high bit) and HMO 49 (low bit):
- 00: Existing mode (BB_Credit = 40)
- 01: BB_Credit = 80
- 10: BB_Credit = 128
- 11: BB_Credit = 255
Note: This option is applied when both of the following conditions are met:
- Data frame transfer in a long-distance connection exceeds the BB_Credit value.
- System option mode 769 is set to OFF (retry operation is enabled at TC/UR path creation).
VSP, HUS VM:
1. When HMO 49 is set to ON, an SSB log of link down is output on the MCU (M-DKC) and RCU (R-DKC).
2. This HMO can work only when the microprogram supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC).
3. The HMO setting is applied only to the Initiator port and RCU Target port. This function is applicable only when the 8UFC or 16UFC PCB is used on the RCU/MCU.
4. If this option is used, the Point-to-Point setting is necessary.
5. When removing the 8UFC or 16UFC PCB, the operation must be executed after setting HMO 49 to OFF.
6. If HMO 49 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching.
7. Make sure to switch HMO 49 from OFF to ON or from ON to OFF after the pair is suspended or when the load is low.
8. The RCU Target that is connected with the MCU where this mode is set to ON cannot be used for UR.
9. This function is intended for long-distance data transfer. If HMO 49 is set to ON with a distance of 0 km, a data transfer error may occur on the RCU side.
USP V/VM:
1. When HMO 49 is set to ON, an SSB log of link down is output on the MCU (M-DKC).
2. This HMO can work only when the microprogram supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC).
3. The HMO setting is applied only to the Initiator port. This function is applicable only when the 8US PCB is used on the RCU/MCU.
4. If this option is used, the Point-to-Point setting is necessary.
5. When removing the 8US PCB, the operation must be executed after setting HMO 49 to OFF.
6. If HMO 49 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching.
7. Make sure to switch HMO 49 from OFF to ON or from ON to OFF after the pair is suspended or when the load is low.
8. The RCU Target that is connected with the MCU where this mode is set to ON cannot be used for UR.
9. This function is intended for long-distance data transfer. If HMO 49 is set to ON with a distance of 0 km, a data transfer error may occur on the RCU side.
Host mode: Common
Firmware: VSP: 70-02-31-00/00 and higher (within the 70-02-3x range), 70-02-54-00/00 or later; USP V/VM: 60-07-51-00/00 or later

HMO 50 (HUS VM, VSP, USP V/VM)
Function: Selects the BB_Credit value (HMO 50: high bit).
ON: The subsystem operates with a BB_Credit value of 128 or 255.
OFF (default): The subsystem operates with a BB_Credit value of 40 or 80.
The BB_Credit value is decided by the 2 bits of HMO 50 (high bit) and HMO 49 (low bit):
- 00: Existing mode (BB_Credit = 40)
- 01: BB_Credit = 80
- 10: BB_Credit = 128
- 11: BB_Credit = 255
Note: This option is applied when both of the following conditions are met:
- Data frame transfer in a long-distance connection exceeds the BB_Credit value.
- System option mode 769 is set to OFF (retry operation is enabled at TC/UR path creation).
VSP, HUS VM:
1. When HMO 50 is set to ON, an SSB log of link down is output on the MCU (M-DKC) and RCU (R-DKC).
2. This HMO can work only when the microprogram supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC).
3. The HMO setting is applied only to the Initiator port and RCU Target port. This function is applicable only when the 8UFC or 16UFC PCB is used on the RCU/MCU.
4. If this option is used, the Point-to-Point setting is necessary.
5. When removing the 8UFC or 16UFC PCB, the operation must be executed after setting HMO 50 to OFF.
6. If HMO 50 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching.
7. Make sure to switch HMO 50 from OFF to ON or from ON to OFF after the pair is suspended or when the load is low.
8. The RCU Target that is connected with the MCU where this mode is set to ON cannot be used for UR.
9. This function is intended for long-distance data transfer. If HMO 50 is set to ON with a distance of 0 km, a data transfer error may occur on the RCU side.
USP V/VM:
1. When HMO 50 is set to ON, an SSB log of link down is output on the MCU (M-DKC).
2. This HMO can work only when the microprogram supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC).
3. The HMO setting is applied only to the Initiator port. This function is applicable only when the 8US PCB is used on the RCU/MCU.
4. If this option is used, the Point-to-Point setting is necessary.
5. When removing the 8US PCB, the operation must be executed after setting HMO 50 to OFF.
6. If HMO 50 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching.
7. Make sure to switch HMO 50 from OFF to ON or from ON to OFF after the pair is suspended or when the load is low.
8. The RCU Target that is connected with the MCU where this mode is set to ON cannot be used for UR.
9. This function is intended for long-distance data transfer. If HMO 50 is set to ON with a distance of 0 km, a data transfer error may occur on the RCU side.
Host mode: Common
Firmware: VSP: 70-02-31-00/00 and higher (within the 70-02-3x range), 70-02-54-00/00 or later; USP V/VM: 60-07-51-00/00 or later
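The 2-bit BB_Credit encoding used by HMO 50 (high bit) and HMO 49 (low bit) can be summarized as a lookup. This sketch is illustrative only; the options themselves are set per host group on the storage system, and the names are mine:

```python
# (HMO 50, HMO 49) -> BB_Credit value, per the 2-bit encoding:
# 00 = 40 (existing mode), 01 = 80, 10 = 128, 11 = 255.
BB_CREDIT = {
    (False, False): 40,
    (False, True): 80,
    (True, False): 128,
    (True, True): 255,
}

def bb_credit(hmo_50_on: bool, hmo_49_on: bool) -> int:
    """Return the BB_Credit value selected by the HMO 50/49 bits."""
    return BB_CREDIT[(hmo_50_on, hmo_49_on)]
```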

HMO 51 (HUS VM, VSP, USP V/VM)
Function: Selects the operation condition of TrueCopy.
ON: TrueCopy operates with the performance-improvement logic. (When a WRITE command is issued, FCP_CMD/FCP_DATA is issued continuously while XFER_RDY issued from the RCU side is suppressed.)
OFF (default): TrueCopy operates with the existing logic.
Note: This option is applied when write I/O of TrueCopy is executed.
VSP, HUS VM:
1. When HMO 51 is set to ON, an SSB log of link down is output on the MCU (M-DKC) and RCU (R-DKC).
2. This HMO can work only when the microprogram supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC).
3. The HMO setting is applied only to the Initiator port and RCU Target port. This function is applicable only when the 8UFC or 16UFC PCB is used on the RCU/MCU.
4. When removing the 8UFC or 16UFC PCB, the operation must be executed after setting HMO 51 to OFF.
5. If HMO 51 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching.
6. Make sure to switch HMO 51 from OFF to ON or from ON to OFF after the pair is suspended or when the load is low.
7. The RCU Target that is connected with the MCU where this mode is set to ON cannot be used for UR.
8. When HMO 51 is set to ON using RAID600 as the MCU and RAID700 as the RCU, the microprogram of RAID600 must be 60-07-63-00/00 or higher (within the 60-07-6x range) or 60-08-06-00/00 or higher.
9. Path attribute change (Initiator port to RCU Target port, RCU Target port to Initiator port) accompanied by Hyperswap is enabled after setting HMO 51 to ON. If HMO 51 is already set to ON on both paths, HMO 51 continues to be applied on the paths even after execution of Hyperswap.
10. In a storage system with the maximum number of MPBs (8 MPBs) mounted, HMO 51 may need to be used with HMO 65. In this case, also see HMO 65.
USP V/VM:
1. When HMO 51 is set to ON, an SSB log of link down is output on the MCU (M-DKC).
2. This HMO can work only when the microprogram supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC).
3. The HMO setting is applied only to the Initiator port. This function is applicable only when the 8US PCB is used on the RCU/MCU.
4. When removing the 8US PCB, the operation must be executed after setting HMO 51 to OFF.
5. If HMO 51 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching.
6. Make sure to switch HMO 51 from OFF to ON or from ON to OFF after the pair is suspended or when the load is low.
7. The RCU Target that is connected with the MCU where this mode is set to ON cannot be used for UR.
8. When HMO 51 is set to ON using RAID600 as the MCU and RAID700 as the RCU, the microprogram of RAID600 must be 60-07-63-00/00 or higher (within the 60-07-6x range) or 60-08-06-00/00 or higher.
9. Path attribute change (Initiator port to RCU Target port, RCU Target port to Initiator port) accompanied by Hyperswap is enabled after setting HMO 51 to ON.
Host mode: Common
Firmware: VSP: 70-02-31-00/00 and higher (within the 70-02-3x range), 70-02-54-00/00 or later; USP V/VM: 60-07-51-00/00 or later

HMO 52 (VSP)
Function: Enables a function using HAM to transfer SCSI-2 reserve information. If you use software for a cluster system that uses SCSI-2 Reservations, set HMO 52 on the host groups where the executing node and standby node reside.
ON: The function to transfer SCSI-2 reserve information is enabled.
OFF (default): The function to transfer SCSI-2 reserve information is not enabled.
Notes:
1. To use HAM to transfer SCSI-2 reserve information, the cluster middleware (alternate path) on the host side must have been evaluated with this function.
2. Set this HMO to ON on both the P-VOL and S-VOL paths to use this function.
Host mode: Common
Firmware: VSP: 70-03-01-00/00 or later

HMO 54 (HUS VM, VSP, USP V/VM)
Function: Enables the EXTENDED COPY (XCOPY) command.
ON: The XCOPY command can be used.
OFF (default): When the XCOPY command is received, Check Condition is returned as an unsupported command (0x05/0x2000).
Also used in combination with system option mode (SOM) 808 to set the ANSI version of Standard Inquiry (2 or 4):
- HMO 54 ON, SOM 808 ON: 4 is returned as the ANSI version of Standard Inquiry.
- HMO 54 ON, SOM 808 OFF: 2 is returned as the ANSI version of Standard Inquiry.
- HMO 54 OFF, SOM 808 ON or OFF: 2 is returned as the ANSI version of Standard Inquiry.
Notes:
1. Set this HMO to ON only when a VMware ESX server is connected and the VAAI function is used.
2. If this HMO is not applied, the VMware Cloning File Blocks function cannot be used.
3. When the Block Zero function is used in the ESXi 5 environment with RAID600 (60-08-07-00/00 and higher), make sure to set HMO 54 and SOM 808 to ON.
Host modes: x01, x21. VMware ESX/ESXi 4.1 or later with the vStorage APIs for Array Integration (VAAI) function.
Supported: ESX 4.1 or later
Firmware: VSP: 70-01-42 or later; USP V/VM: XCOPY: 60-08-01 or later, SOM 808: 60-08-07 or later

HMO 57 (VSP, USP V/VM)
Function: Converts the sense code/key that is returned when an S-VOL is accessed. Apply this HMO when the sense code/key response needs to be converted when an old data volume of an HAM pair is accessed.
ON: Sense code/key 05/2500 (LDEV blockage), converted from 0b/c0000, is returned when SSB=B8A0 is output.
OFF (default): Sense code/key 0b/c0000 is returned when SSB=B8A0 is output.
Host modes: x01, x21, x0c, x2c. ESXi 5 or later.
Firmware: VSP: 70-02-03-00/00 or later; USP V/VM: 60-08-01-00/00 or later

HMO 61 (HUS VM, VSP)
Function: Increases the number of Reservation Keys from 128 to 2,048.
ON: Up to 2,048 Reservation Keys are allowed per port.
OFF (default): 128 Reservation Keys are allowed per port.
Notes:
1. HMO 61 is applied when more than 128 Reservation Keys are required in an environment using the Persistent Reserve command.
2. When the option is set to ON, the performance of the Persistent Reserve command and of read/write commands may degrade.
3. When the option is switched from ON to OFF, the expanded keys used so far become unavailable.
4. The HMO 61 setting can be switched from ON to OFF only when SOM 864 is ON. To switch from ON to OFF, make sure that there is no LU with a PGR/Key registered in the target group.
5. If HMO 61 is ON, the performance of the Persistent Reserve command and of I/O to a Persistent Reserve LU may degrade.
6. If a host group where HMO 61 is ON is deleted and a new host group is created, HMO 61 goes OFF. However, if any LU with a PGR/Key registered exists in the target group, host group deletion ends abnormally. (Current specification.)
7. During a microprogram version downgrade, if an error caused by HMO 61 occurs, HMO 61 needs to be set to OFF.
8. Switching HMO 61 from ON to OFF while the 129th and later Reservation Keys exist may cause the following adverse effects, which may result in server down:
   - Registered Reservation Keys become invalid.
   - The above Reservation Keys suddenly become valid when the option is set to ON again.
9. The 129th and later Reservation Keys can be deleted by the following operations:
   - Microprogram exchange from an unsupported version to a supported version.
   - Forcible reserve cancellation.
10. HMO 61 can be set to OFF using the following procedures:
   Procedure 1: When neither PGR nor KEY is displayed on the LUN Status window:
   (1) Set SOM 864 to ON.
   (2) Set HMO 61 to OFF.
   (3) Set SOM 864 to OFF.
   Procedure 2: When either PGR or KEY is displayed on the LUN Status window:
   (1) Release the PGR or KEY from the host.
   (2) Confirm that neither PGR nor KEY is displayed on the LUN Status window.
   (3) Set SOM 864 to ON.
   (4) Set HMO 61 to OFF.
   (5) Set SOM 864 to OFF.
Host mode: Common
Firmware: VSP: 70-02-03-00/00 or later

HMO 63 (HUS VM, VSP)
Function: Support option for vStorage APIs based on T10 standards.
ON:
- The Standard Inquiry ANSI version is set to 4.
- The XCOPY command with the T10 specification is enabled.
- The RCR command is enabled.
- A Check response is returned at pool depletion (permanent or temporary).
- A Check response is returned when the warning threshold is exceeded.
- The UNMAP command is enabled.
OFF (default):
- The Standard Inquiry ANSI version is set to 2.
- The XCOPY command with the T10 specification is disabled (treated as an unsupported command).
- The RCR command is disabled (treated as an unsupported command).
- A Check response is not returned at pool depletion (permanent or temporary).
- A Check response is not returned when the warning threshold is exceeded.
- The UNMAP command is disabled (treated as an unsupported command).
HMO 54 is used for VAAI with ESX 4.1. If both HMO 54 and HMO 63 are set to ON, the HMO 63 setting is prioritized.
Notes:
1. Apply this HMO when VAAI that complies with SCSI T10 is used in an ESXi 5 connection.
2. When this HMO is set to ON for VMware ESXi 5, set SOM 729 to OFF (to respond with 0x7/0x2707 (SPACE ALLOCATION FAILED WRITE PROTECT) while the pool is full). If SOM 729 is set to ON, ESXi 5 may not recognize pool full and may not output the error message.
3. SCSI commands (WriteSame, ATS, XCOPY, UNMAP) issued with VAAI are available only for OPEN-V.
4. XCOPY across storage systems is not supported. To execute Cloning and Storage vMotion across storage systems when ESXi 5.0 is connected, set HMO 63 to OFF.
Host modes: x01, x21. VMware ESXi 5.0 or later with the VAAI function for T10.
Firmware: VSP: 70-02-54 or later
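Because HMO 63 sets the Standard Inquiry ANSI version to 4 on its own and takes priority over HMO 54, the effective ANSI version across the HMO 54/SOM 808/HMO 63 settings can be consolidated as follows. This is an illustrative sketch of the rules stated in the HMO 54 and HMO 63 entries, not a storage-system API:

```python
def standard_inquiry_ansi_version(hmo_63_on: bool, hmo_54_on: bool,
                                  som_808_on: bool) -> int:
    """Effective ANSI version of Standard Inquiry.

    HMO 63 ON forces version 4 and is prioritized over HMO 54;
    otherwise version 4 requires both HMO 54 and SOM 808 ON.
    """
    if hmo_63_on:
        return 4
    if hmo_54_on and som_808_on:
        return 4
    return 2
```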

HMO 65 (VSP)
Function: Selects the TrueCopy operation mode when the Round Trip function is enabled by setting HMO 51 to ON in a configuration with the maximum number of MPBs.
ON: TrueCopy is performed in the enhanced performance-improvement mode of Round Trip.
OFF (default): TrueCopy is performed in the existing Round Trip mode.
Notes:
1. The option is applied when response performance for an update I/O degrades while the Round Trip function is used in a configuration with the maximum number of MPBs.
2. When using the option, set HMO 51 to ON.
3. The option can work only when HMO 51 is ON. Refer to the documentation for HMO 51.
4. When the option is set to ON, SSB logs of link down are output on the MCU (M-DKC) and RCU (R-DKC).
5. The option can work only when the microprogram supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC).
6. The option setting is applied to the Initiator port and RCU Target port. The function is applicable only when the PCB type of 8UFC or 16UFC is used on the MCU and RCU, and in the configuration with 4 sets of MPBs on the MCU.
7. Setting change of the option from OFF to ON or from ON to OFF must be done after the pair is suspended or when the load is low.
8. Before downgrading the microprogram from a supported version to an unsupported version, set the option to OFF. (Microprogram exchange without setting the option to OFF is guarded; in this case, set the option to OFF and then retry the microprogram exchange.)
Host mode: Common
Firmware: 70-03-32-00/00 or later

HMO 69 (VSP)
Function: Enables/disables the UA response to a host when an LU whose capacity has been expanded receives a command from the host.
ON: When an LU whose capacity has been expanded receives a command from a host, UA is returned to the host. Sense key: 0x06 (Unit Attention). Sense code: 0x2a09 (Capacity Data Has Changed), 0x2a01 (Mode Parameters Changed).
OFF (default): When an LU whose capacity has been expanded receives a command from a host, UA is not returned to the host.
Notes:
1. The option is applied when returning UA to the host after LUSE capacity expansion is required.
2. If both HMO 7 and HMO 69 are set to ON, the UA of HMO 69 is returned to the host.
Host mode: Common
Firmware: 70-03-36-00/00 or later

HMO 71 (HUS VM, VSP)
Function: Switches the sense key/sense code returned as a response to Check Condition when a read/write I/O is received while a DP pool is blocked.
ON: The sense key/sense code returned is 03 (MEDIUM ERROR)/9001 (VENDOR UNIQUE).
OFF (default): The sense key/sense code returned is 0400 (LOGICAL UNIT NOT READY, CAUSE NOT REPORTABLE).
Note: This option is applied when switching the sense key/sense code returned in response to Check Condition while a DP pool is blocked can prevent a device file from being blocked, thereby reducing the extent of the impact on the host side.
Host mode: Common
Firmware: HUS VM: 73-01-31-00/00 or later; VSP: 70-04-01-00/00 or later

Configuring the Host Adapters

Before connecting the Hitachi RAID storage system to the VMware host, you must configure the adapters (HBAs and/or CNAs) that will be connected to the storage system. The adapters have many configuration options. The minimum requirements for configuring host adapters on a VMware host for operation with the Hitachi RAID storage systems are:

Queue depth. You may need to change the queue depth value on the host adapters connected to the Hitachi RAID storage system. If the value is too high, I/O performance can deteriorate, and the Hitachi RAID storage system reports a queue-full status when the queue depth exceeds the allowable limit. The system may not operate correctly when the queue is full and a high value is set. Table 2-7 specifies the queue depth requirements for the Hitachi RAID storage systems. Set an appropriate queue depth value according to your configuration. You can adjust the queue depth later as needed (within the specified range) to optimize I/O performance.

Table 2-7 Queue Depth Requirements

Parameter / Required value:
- Queue depth per LU: 32 per LU
- Queue depth per port: HUS VM, VSP, USP V/VM: 2048 per port; TagmaStore USP/NSC: 1024 per port

BIOS. Table 2-8 specifies the BIOS requirements for the Hitachi RAID storage systems.

Table 2-8 BIOS Requirements

Parameter / Required value:
- Host adapter BIOS: Disabled
- Connection: Fabric point-to-point

In addition to the queue depth and BIOS, other parameters (for example, FC, fabric) may also need to be set. The following sections provide settings for QLogic and Emulex adapters. Refer to the user documentation for the adapter to ensure that all parameters required for your operational environment are set.

Note: Use the same settings and parameters for all host adapters connected to the Hitachi RAID storage systems.
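Because the per-LU and per-port limits in Table 2-7 interact, a quick calculation shows how many LUs can drive a single port at the full per-LU queue depth before the aggregate exceeds the port limit. This is an illustrative planning sketch only (the limits come from Table 2-7; actual tuning is done in the adapter driver settings, and the function name is mine):

```python
def max_luns_at_full_depth(port_queue_depth: int,
                           lu_queue_depth: int = 32) -> int:
    """Number of LUs that can run at the full per-LU queue depth
    without the aggregate exceeding the per-port limit (Table 2-7)."""
    return port_queue_depth // lu_queue_depth

# HUS VM, VSP, and USP V/VM ports allow 2048 outstanding commands;
# TagmaStore USP/NSC ports allow 1024.
print(max_luns_at_full_depth(2048))  # 64
print(max_luns_at_full_depth(1024))  # 32
```

Beyond these LU counts, reduce the per-LU queue depth (within the specified range) so the port limit is not exceeded.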

Settings for QLogic Adapters

Use the setup utility for the QLogic adapter to set the required options. For details and instructions, see the user documentation for the adapter. Table 2-9 lists the recommended QLogic adapter settings for Hitachi RAID storage attached to a VMware host. For the latest information about QLogic adapters and Hitachi RAID storage systems, see the QLogic website for Hitachi Data Systems storage: http://driverdownloads.qlogic.com/qlogicdriverdownloads_ui/hitachidatasystems.aspx?companyid=5

Table 2-9 Settings for QLogic Adapters

Parameter / Setting:
- Host adapter BIOS: Disabled
- Number of LUNs per target: Determined by the number of LUNs in your configuration. Multiple-LUN support is typically for RAID arrays that use LUNs to map drives. The default is 8. If you do not need multiple-LUN support, set the number of LUNs to 0.
- Enable LIP reset: No
- Enable LIP full login: Yes
- Enable target reset: Yes
- Connection option: Point-to-point only

Settings for Emulex Adapters

Use the setup utility for the adapter to set the required options. For details and instructions, see the user documentation for the adapter. Table 2-10 lists the recommended Emulex adapter settings for Hitachi RAID storage attached to a VMware host. For the latest information about Emulex adapters and Hitachi RAID storage systems, see the Emulex website for Hitachi Data Systems storage: http://www.emulex.com/downloads/hitachi-data-systemshitachi.html

Table 2-10 Settings for Emulex Adapters

Parameter / Setting:
- Host adapter BIOS: Disabled
- Topology: Fabric Point-to-Point

Connecting the Storage System to the VMware Server

After the Hitachi RAID storage system and host adapters have been configured, the Hitachi RAID storage system can be connected to the VMware system. Some of these steps are performed by the Hitachi Data Systems representative, while other steps are performed by the user.

Note: The Hitachi Data Systems representative must use the Maintenance Manual for the storage system during all installation activities. Follow all precautions and procedures in the Maintenance Manual, and always check all specifications to ensure proper installation and configuration.

To connect the Hitachi RAID storage system to the VMware Server host:

1. Verify the system installation. The Hitachi Data Systems representative verifies the configuration and operational status of the Hitachi RAID storage system ports, LDEVs, and paths.

2. Shut down and power off the VMware host. The user shuts down and powers off the VMware host. The power must be off when the FC cables are connected.

3. Connect the Hitachi RAID storage system to the VMware system. The Hitachi Data Systems representative connects the cables between the Hitachi RAID storage system and the VMware host or fabric switch, then verifies the ready status of the storage system and peripherals.

4. Power on and boot the VMware system. The user powers on and boots the VMware system after the storage system has been connected:
   a. Power on the VMware system display.
   b. Power on all peripheral devices. The Hitachi RAID storage system must be on, and the ports must be configured. If the ports are configured after the VMware system is powered on, the VMware system may need to be restarted to recognize the new devices.
   c. Confirm the ready status of all peripheral devices, including the Hitachi RAID storage system.
   d. Power on and boot the VMware system.
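Once the host is booted, a rescan from the service console makes the newly cabled LUNs visible without another reboot. The adapter name below is an example; substitute the vmhba numbers on your host.

```shell
# Rescan the FC adapter so the host discovers the new Hitachi LUNs
# (vmhba1 is an example; run "esxcfg-scsidevs -a" to list your adapters).
esxcfg-rescan vmhba1

# Confirm the new devices and their paths are visible:
esxcfg-mpath -l
```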

3  Configuring the New Devices

This chapter provides information about configuring the new storage devices on the Hitachi RAID storage system for operation with the VMware host:

  Creating the VMFS Datastores
  Adding a Hard Disk to a Virtual Machine

Creating the VMFS Datastores

Use the software on the VMware host (for example, vSphere Client) to create the VMFS datastores on the new storage devices in the Hitachi RAID storage system. Make sure to create only one VMFS datastore for each storage device. For details about configuring new storage devices (for example, supported file and block sizes), see the VMware user documentation.

Use the following settings when creating a VMFS datastore on a Hitachi RAID storage device:

LUN properties:
  Path policy: Round robin
  Preference: Preferred. Always route traffic over this port when possible.
  State: Enabled. Make this path available for load balancing and failover.

VMFS properties:
  Storage type: disk/LUN
  Maximum file size: 256 GB, block size 1 MB
  Capacity: Maximum capacity

TIP: You do not need to create the VMFS datastores again on other hosts that may need access to the new storage devices. Use the storage refresh and rescan operations to update the datastore lists and storage information on the other hosts.
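The same settings can be sketched from the service console. The device ID and datastore label below are examples only, and the exact esxcli syntax differs between ESX versions (the ESX 4.x form is shown), so verify against your host's documentation.

```shell
# Set the round-robin path policy on the new Hitachi device
# (naa.60060e80... is an example device ID; list yours with "esxcli nmp device list").
esxcli nmp device setpolicy \
    --device naa.60060e8005000000000000000000001a \
    --psp VMW_PSP_RR

# Create one VMFS datastore on the device's first partition
# (one datastore per storage device, as required above):
vmkfstools -C vmfs3 -S HitachiLUN01 \
    /vmfs/devices/disks/naa.60060e8005000000000000000000001a:1
```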

Adding a Hard Disk to a Virtual Machine

Use the following settings when adding a hard disk to a virtual machine for Hitachi RAID storage devices:

When creating a new virtual disk:
  Disk capacity (can be changed later)
  Location: on the same datastore as the virtual machine files, or specify a datastore

When adding an existing virtual disk:
  Browse for the disk file path.

When adding a mapped SAN LUN:
  Datastore: store the LUN mapping file on the same datastore as the virtual machine files
  Compatibility mode: physical
  Virtual device node: select a node that is local to the virtual machine
  Virtual disk mode options: independent mode (persistent or nonpersistent)
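For the mapped SAN LUN case, the LUN mapping file can also be created from the service console with vmkfstools. The device ID and file paths below are illustrative only.

```shell
# Create a raw device mapping (RDM) file in physical compatibility mode (-z),
# stored on the same datastore as the virtual machine files, as recommended.
vmkfstools -z /vmfs/devices/disks/naa.60060e8005000000000000000000001b \
    /vmfs/volumes/datastore1/vm1/vm1-rdm.vmdk

# Use -r instead of -z if virtual compatibility mode is needed.
```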


4  Failover and SNMP

The Hitachi RAID storage systems support industry-standard products and functions that provide host and/or application failover, I/O path failover, and logical volume management (LVM). The Hitachi RAID storage systems also support the industry-standard simple network management protocol (SNMP) for remote system management from the open-systems host. SNMP is used to transport management information between the storage system and the SNMP manager on the host. The SNMP agent sends status information to the hosts when requested by the host or when a significant event occurs.

This chapter provides an overview of the failover and SNMP operations that are supported on the Hitachi RAID storage system in the VMware environment:

  Host Failover
  Path Failover
  SNMP Remote System Management

Note: The user is responsible for configuring the failover and SNMP management software on the VMware host. For assistance with failover or SNMP configuration on the host, see the user documentation for the product, or contact VMware technical support.

Host Failover

Contact your Hitachi Data Systems representative for the latest information on supported host failover software for the Hitachi RAID storage systems.

Path Failover

The Hitachi RAID storage systems support the Hitachi HiCommand Dynamic Link Manager (HDLM) software for the VMware host platform. At this time, HDLM is supported for fibre-channel connections only, not FCoE. For details, see the Hitachi Dynamic Link Manager User's Guide for VMware, or contact your Hitachi Data Systems representative.

For information about other supported path failover software, contact your Hitachi Data Systems representative.

SNMP Remote System Management

SNMP is a part of the TCP/IP protocol suite that supports maintenance functions for storage and communication devices. The Hitachi RAID storage systems use SNMP to transfer status and management commands to the SNMP manager on the open-systems host (see Figure 4-1). When the SNMP manager requests status information or when a service information message (SIM) occurs, the SNMP agent on the storage system notifies the SNMP manager on the host. Notification of error conditions is made in real time, providing the open-systems user with the same level of monitoring and support available to the mainframe user.

SIM reporting via SNMP enables the user to monitor the Hitachi RAID storage system from the open-systems host. When a SIM occurs, the SNMP agent on the Hitachi RAID storage system initiates trap operations, which alert the SNMP manager of the SIM condition. The SNMP manager receives the SIM traps from the SNMP agent and can request information from the SNMP agent at any time.

Note: The user is responsible for configuring the SNMP manager on the VMware host. For assistance with SNMP manager configuration on the VMware host, refer to the user documentation, or contact the vendor's technical support.

Figure 4-1   SNMP Environment (the service processor in the Hitachi RAID storage system sends SIM and error information over the private and public LANs to the SNMP manager on the host server)
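As one hedged example of the SNMP manager side, a Net-SNMP trap receiver on the management host can log the SIM traps. This assumes Net-SNMP is installed on the host and that the community string ("public" below) matches the one configured on the storage system's service processor.

```shell
# Allow the trap daemon to log traps sent with the "public" community
# (match this to the community configured on the storage system's SVP).
cat >> /etc/snmp/snmptrapd.conf <<'EOF'
authCommunity log public
EOF

# Run the trap daemon in the foreground, logging SIM traps to stdout:
snmptrapd -f -Lo
```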

5  Troubleshooting

This chapter provides troubleshooting information for VMware host attachment and includes instructions for calling technical support:

  General Troubleshooting
  Calling the Hitachi Data Systems Support Center

General Troubleshooting

Table 5-1 lists potential error conditions during device configuration and provides instructions for resolving each condition. If you cannot resolve an error condition, contact your Hitachi Data Systems representative, or call the Hitachi Data Systems Support Center for assistance (see Calling the Hitachi Data Systems Support Center for instructions).

For troubleshooting information for the Hitachi RAID storage system, see the User and Reference Guide for the storage system (for example, Hitachi Virtual Storage Platform User and Reference Guide).

For troubleshooting information for Hitachi Storage Navigator, see the Storage Navigator User's Guide for the storage system (for example, Hitachi Virtual Storage Platform Storage Navigator User's Guide).

For information about error messages displayed by Storage Navigator, see the Storage Navigator Messages document for the storage system (for example, Hitachi Virtual Storage Platform Storage Navigator Messages).

Table 5-1   General Troubleshooting

Error condition: The virtual machine adapter does not see LUN 8 and greater.
Recommended action: Verify cabling, the storage LUN, switch and storage security, and LUN masking. Verify that the Disk.MaxLUN parameter in the Advanced Settings (VMware Management Interface) is set to more than 7.

Error condition: The guest OS virtual machine boots up, but does not install the operating system.
Recommended action: There may be a corrupt vmdk file (usually because of a previous incomplete installation). Delete the vmdk file from the File Manager and remove it from the guest OS. Add a new device for the guest OS and recreate a new vmdk image file.

Error condition: Cannot add the Meta Data File for a raw device.
Recommended action: The Meta Data File for the raw device may already exist. Select the existing Meta Data File, or delete the old Meta Data File and create a new one.

Error condition: Setting a volume label is not successful.
Recommended action: Limit the number of characters to 30.

Error condition: Cannot delete a VMFS file.
Recommended action: There may be an active swap file on the same extended partition. Manually turn off the swap device (using the vmkfstools command) from the service console and try again. Relocate the swap file to another disk.

Error condition: The guest OS cannot communicate with the server or outside network.
Recommended action: Make sure a virtual switch is created and bound to a connected network adapter.

Error condition: vmkfstools -s does not add the LUN online.
Recommended action: Delete the LUN, then select and add another LUN and retry the process. Repeat the command, or perform the Rescan SAN function in the Storage Management section of the VMware Management Interface and display again.

Error condition: The service console discovers an online LUN addition, but the Disks and LUNs view does not.
Recommended action: Rescan the SAN and refresh.

Error condition: VMware ESX Server crashes while booting up.
Recommended action: Check for the error message on the screen. The crash may be caused by mixing different types of adapters in the server.
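For the first condition in Table 5-1, the Disk.MaxLUN change can also be made from the service console. The value and adapter name below are examples.

```shell
# Raise Disk.MaxLUN so LUNs above 7 are scanned (equivalent to the
# Advanced Settings change in the VMware Management Interface):
esxcfg-advcfg -s 255 /Disk/MaxLUN

# Rescan so the newly visible LUNs are discovered (vmhba1 is an example):
esxcfg-rescan vmhba1
```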

Calling the Hitachi Data Systems Support Center

If you need to call the Hitachi Data Systems Support Center, make sure to provide as much information about the problem as possible, including:

  The circumstances surrounding the error or failure.
  The content of any error messages displayed on the host systems.
  The content of any error messages displayed on Storage Navigator.
  The Storage Navigator configuration information (use the FD Dump Tool).
  The service information messages (SIMs), including reference codes and severity levels, displayed by Storage Navigator.

The Hitachi Data Systems customer support staff is available 24 hours a day, seven days a week. If you need technical support, log on to the Hitachi Data Systems Portal for contact information: https://portal.hds.com

A  Specifications for Device Types

Table A-1 provides the specifications for the device emulation types that can be configured on the Hitachi RAID storage systems. Some device types and parameters may not be relevant to your storage system model. For more information about device emulation types and configuring devices other than OPEN-V on your storage system, contact your Hitachi Data Systems account team.

Please note the following:

  The logical devices on the Hitachi RAID storage systems are defined to the host as SCSI disk devices, even though the interface is fibre channel.
  The sector size for the device types is 512 bytes.

Table A-1   Device Specifications

  Device Type   Product Name   No. of Blocks   No. of      No. of   Sectors     Capacity (MB)
  (Note 1)      (Note 2)       (512 B/blk)     Cylinders   Heads    per Track   (Note 3)
  OPEN-3        OPEN-3         4806720         3338        15       96          2347
  OPEN-8        OPEN-8         14351040        9966        15       96          7007
  OPEN-9        OPEN-9         14423040        10016       15       96          7042
  OPEN-E        OPEN-E         28452960        19759       15       96          13893
  OPEN-L        OPEN-L         71192160        49439       15       96          34761
  OPEN-V        OPEN-V         125827200 max   Note 5      15       128         Note 6
                               (Note 4)
  OPEN-3*n      OPEN-3*n       4806720*n       3338*n      15       96          2347*n
  OPEN-8*n      OPEN-8*n       14351040*n      9966*n      15       96          7007*n
  OPEN-9*n      OPEN-9*n       14423040*n      10016*n     15       96          7042*n
  OPEN-E*n      OPEN-E*n       28452960*n      19759*n     15       96          13893*n
  OPEN-L*n      OPEN-L*n       71192160*n      49439*n     15       96          34761*n
  OPEN-3 VLL    OPEN-3-CVS     Note 4          Note 5      15       96          Note 6
  OPEN-8 VLL    OPEN-8-CVS     Note 4          Note 5      15       96          Note 6

Table A-1   Device Specifications (continued)

  Device Type    Product Name    No. of Blocks   No. of      No. of   Sectors     Capacity (MB)
  (Note 1)       (Note 2)        (512 B/blk)     Cylinders   Heads    per Track   (Note 3)
  OPEN-9 VLL     OPEN-9-CVS      Note 4          Note 5      15       96          Note 6
  OPEN-E VLL     OPEN-E-CVS      Note 4          Note 5      15       96          Note 6
  OPEN-V VLL     OPEN-V          Note 4          Note 5      15       128         Note 6
  OPEN-3*n VLL   OPEN-3*n-CVS    Note 4          Note 5      15       96          Note 6
  OPEN-8*n VLL   OPEN-8*n-CVS    Note 4          Note 5      15       96          Note 6
  OPEN-9*n VLL   OPEN-9*n-CVS    Note 4          Note 5      15       96          Note 6
  OPEN-E*n VLL   OPEN-E*n-CVS    Note 4          Note 5      15       96          Note 6
  OPEN-V*n VLL   OPEN-V*n        Note 4          Note 5      15       128         Note 6

Note 1: The availability of specific device types depends on the storage system model and the level of microcode installed on the storage system.

Note 2: The command device (used for Command Control Interface operations) is distinguished by -CM on the product name (for example, OPEN-3-CM, OPEN-3-CVS-CM). The product name for VLL devices is OPEN-x-CVS, where CVS stands for custom volume size.

Note 3: This capacity is the maximum size that can be configured on the host. The device capacity can sometimes be changed by the BIOS or host bus adapter. Also, different capacities may be due to variations such as 1 MB = 1000² bytes or 1024² bytes.

Note 4: The number of blocks for a VLL volume is calculated as follows:

  # of blocks = (# of data cylinders) × (# of heads) × (# of sectors per track)

The number of sectors per track is 128 for OPEN-V and 96 for the other emulation types.

Example: For an OPEN-3 VLL volume with capacity = 37 MB:

  # of blocks = 53 cylinders (see Note 5) × 15 heads × 96 sectors per track = 76320

Note 5: The number of data cylinders for a VLL volume is calculated as follows (↑…↑ means that the value should be rounded up to the next integer):

Number of data cylinders for an OPEN-x VLL volume (except for OPEN-V):

  # of cylinders = ↑ capacity (MB) × 1024 / 720 ↑

Example: For an OPEN-3 VLL volume with capacity = 37 MB:

  # of cylinders = ↑ 37 × 1024 / 720 ↑ = ↑ 52.62 ↑ = 53 cylinders

Number of data cylinders for an OPEN-V VLL volume:

  # of cylinders = ↑ (capacity (MB) specified by user) × 16 / 15 ↑

Example: For an OPEN-V VLL volume with capacity = 50 MB:

  # of cylinders = ↑ 50 × 16 / 15 ↑ = ↑ 53.33 ↑ = 54 cylinders

Number of data cylinders for a VLL LUSE volume (except for OPEN-V):

  # of cylinders = ↑ capacity (MB) × 1024 / 720 ↑ × n

Example: For an OPEN-3 VLL LUSE volume with capacity = 37 MB and n = 4:

  # of cylinders = ↑ 37 × 1024 / 720 ↑ × 4 = ↑ 52.62 ↑ × 4 = 53 × 4 = 212

Number of data cylinders for an OPEN-V VLL LUSE volume:

  # of cylinders = ↑ (capacity (MB) specified by user) × 16 / 15 ↑ × n

Example: For an OPEN-V VLL LUSE volume with capacity = 50 MB and n = 4:

  # of cylinders = ↑ 50 × 16 / 15 ↑ × 4 = ↑ 53.33 ↑ × 4 = 54 × 4 = 216

Note 6: The size of an OPEN-x VLL volume is specified by capacity in MB, not by number of cylinders. The size of an OPEN-V VLL volume can be specified by capacity in MB or by number of cylinders. The user specifies the volume size using the Virtual LVI/LUN software.
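The sizing rules in Notes 4 and 5 can be checked with a quick shell calculation, using the worked examples above (a 37 MB OPEN-3 VLL volume and a 50 MB OPEN-V VLL volume):

```shell
# ceil(37 * 1024 / 720) data cylinders for a 37 MB OPEN-3 VLL volume,
# then 15 heads x 96 sectors per track gives the block count (Note 4).
cyl=$(awk 'BEGIN { c = 37 * 1024 / 720; if (c > int(c)) c = int(c) + 1; print c }')
blocks=$((cyl * 15 * 96))
echo "cylinders=$cyl blocks=$blocks"

# For OPEN-V the multiplier is 16/15 (a 50 MB volume here):
vcyl=$(awk 'BEGIN { c = 50 * 16 / 15; if (c > int(c)) c = int(c) + 1; print c }')
echo "open_v_cylinders=$vcyl"
```

This reproduces the figures in the notes: 53 cylinders and 76320 blocks for the OPEN-3 example, and 54 cylinders for the OPEN-V example.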


B  Using VMware with Hitachi RAID Storage Systems

This appendix provides reference information to help you implement VMware software with the Hitachi RAID storage systems:

  VMware ESX Server and VirtualCenter Compatibility
  Installing and Configuring VMware
  Creating and Managing VMware Infrastructure Components

VMware ESX Server and VirtualCenter Compatibility

VMware recommends that you install VirtualCenter with the ESX Server software. VirtualCenter lets you provision virtual machines, monitor the performance and utilization of physical servers and the virtual machines they are running, and export VirtualCenter data to HTML and Excel formats for integration with other reporting tools.

Make sure that your VMware ESX Server and VirtualCenter versions are compatible. For details, refer to your VMware Release Notes and the VMware website at www.vmware.com.

Installing and Configuring VMware

You must verify that your server, I/O, storage, guest operating system, management agent, and backup software are all compatible before you install and configure VMware. Consult the following documents for information about VMware ESX Server installation, configuration, and compatibility:

  Installing and Configuring VMware ESX Server: Refer to the VMware documentation when installing and configuring VMware ESX Server. Follow the configuration steps for licensing, networking, and security.

  Upgrading an ESX Server and VirtualCenter Environment: Refer to the VMware documentation when upgrading an ESX Server and VirtualCenter environment.

Creating and Managing VMware Infrastructure Components

After VMware ESX Server installation has been completed, including all major components of the VMware Infrastructure, you can perform the following tasks to manage your VMware infrastructure components:

  Use the VI Client to manage your ESX Server hosts, either as a group through VirtualCenter or individually by connecting directly to the host.

  Set up a datacenter to bring one or more ESX Server hosts under VirtualCenter management, create virtual machines, and determine how you want to organize virtual machines and manage resources.

  Create a virtual machine manually, from templates, or by cloning existing virtual machines.

  Configure permissions and roles for users to allocate access to VirtualCenter, its administrative functions, and its resources.

  Use resource pools to partition available CPU and memory resources hierarchically.

  Configure network connections to ensure that virtual machine traffic does not share a network adapter with the service console, for security purposes.

  Install a guest operating system in a virtual machine.

  Manage virtual machines to learn how to power them on and off.

  Monitor the status of your virtual infrastructure using tasks and events.

  Schedule automated tasks to perform actions at designated times.

  Configure alarm notification messages to be sent when selected events occur to or on hosts or virtual machines.

Acronyms and Abbreviations

  AL        arbitrated loop
  AL-PA     arbitrated loop physical address
  BIOS      basic input/output system
  blk       block
  CNA       converged network adapter
  CU        control unit
  CVS       custom volume size
  FC        fibre channel
  FC-AL     fibre-channel arbitrated loop
  FCoE      fibre-channel over Ethernet
  FCP       fibre-channel protocol
  GB        gigabyte
  HBA       host bus adapter
  HMO       host mode option
  HUS VM    Hitachi Unified Storage VM
  I/O       input/output
  ISO       International Organization for Standardization
  LDEV      logical device
  LU        logical unit
  LUN       logical unit, logical unit number
  LUSE      LU Size Expansion
  MB        megabyte
  NSC       Hitachi TagmaStore Network Storage Controller
  OFC       open fibre control
  OM3       optical multimode 3 (per ISO 11801)
  PA        physical address
  PB        petabyte
  PC        personal computer system
  RAID      redundant array of independent disks

  SAN       storage-area network
  SCSI      small computer system interface
  SIM       service information message
  SOM       system option mode
  TB        terabyte
  USP       Hitachi TagmaStore Universal Storage Platform
  USP V     Hitachi Universal Storage Platform V
  USP VM    Hitachi Universal Storage Platform VM
  VAAI      vStorage API for Array Integration
  VLL       Hitachi Virtual LVI/LUN
  VSP       Hitachi Virtual Storage Platform

Hitachi Data Systems
Corporate Headquarters
2845 Lafayette Street
Santa Clara, California 95050-2639
U.S.A.
www.hds.com

Regional Contact Information

Americas: +1 408 970 1000, info@hds.com
Europe, Middle East, and Africa: +44 (0) 1753 618000, info.emea@hds.com
Asia Pacific: +852 3189 7900, hds.marketing.apac@hds.com

MK-98RD6716-08