Configuration Guide for Red Hat Linux Host Attachment

Hitachi Unified Storage VM
Hitachi Virtual Storage Platform
Hitachi Universal Storage Platform V/VM

FASTFIND LINKS: Contents | Product Version | Getting Help

MK-96RD640-05
Hitachi, Ltd. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written permission of Hitachi, Ltd. (hereinafter referred to as "Hitachi") and Hitachi Data Systems Corporation (hereinafter referred to as "Hitachi Data Systems").

Hitachi and Hitachi Data Systems reserve the right to make changes to this document at any time without notice and assume no responsibility for its use. This document contains the most current information available at the time of publication. When new or revised information becomes available, this entire document will be updated and distributed to all registered users. Some of the features described in this document may not be currently available. Refer to the most recent product announcement or contact your local Hitachi Data Systems sales office for information about feature and product availability.

Notice: Hitachi Data Systems products and services can be ordered only under the terms and conditions of the applicable Hitachi Data Systems agreements. The use of Hitachi Data Systems products is governed by the terms of your agreements with Hitachi Data Systems.

Hitachi is a registered trademark of Hitachi, Ltd. in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd. in the United States and other countries. All other trademarks, service marks, and company names are properties of their respective owners. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.
Contents

Preface
    Intended Audience
    Product Version
    Document Revision Level
    Changes in this Revision
    Referenced Documents
    Document Conventions
    Convention for Storage Capacity Values
    Accessing Product Documentation
    Getting Help
    Comments

1 Introduction
    About the Hitachi RAID Storage Systems
    Device Types
    Installation and Configuration Roadmap

2 Installing the Storage System
    Requirements
    Preparing for the Storage System Installation
        Hardware Installation Considerations
        LUN Manager Software Installation
        Setting the Host Mode
        Setting the Host Mode Options
    Configuring the Fibre-Channel Ports
        Port Address Considerations for Fabric Environments
        Loop ID Conflicts
    Connecting the Storage System to the Red Hat Linux Host
    Configuring the Host Fibre-Channel HBAs
    Verifying New Device Recognition

Configuring the New Disk Devices
    Setting the Number of Logical Units
    Partitioning the Devices
    Creating, Mounting, and Verifying the File Systems
        Creating the File Systems
        Creating the Mount Directories
        Mounting the New File Systems
        Verifying the File Systems
    Setting the Auto-Mount Parameters

Failover and SNMP Operation
    Host Failover
        Path Failover
        Device Mapper Multipath
    SNMP Remote System Management

Troubleshooting
    General Troubleshooting
    Calling the Hitachi Data Systems Support Center

Note on Using Veritas Cluster Server (Appendix A)

Acronyms and Abbreviations
Preface

This document describes and provides instructions for installing and configuring the devices on the Hitachi RAID storage systems for operations in a Red Hat Linux environment. The Hitachi RAID storage system models include the Hitachi Unified Storage VM, Hitachi Virtual Storage Platform (VSP), and the Hitachi Universal Storage Platform V and Hitachi Universal Storage Platform VM (USP V/VM).

Please read this document carefully to understand how to use this product, and maintain a copy for reference purposes.

This preface includes the following information:
- Intended Audience
- Product Version
- Document Revision Level
- Changes in this Revision
- Referenced Documents
- Document Conventions
- Convention for Storage Capacity Values
- Accessing Product Documentation
- Getting Help
- Comments
Intended Audience

This document is intended for system administrators, Hitachi Data Systems representatives, and authorized service providers who are involved in installing, configuring, and operating the Hitachi RAID storage systems. Readers of this document should meet the following requirements:
- You should have a background in data processing and understand RAID storage systems and their basic functions.
- You should be familiar with the Hitachi RAID storage systems, and you should have read the User and Reference Guide for the storage system.
- You should be familiar with the Storage Navigator software for the Hitachi RAID storage systems, and you should have read the Storage Navigator User's Guide.
- You should be familiar with the Red Hat Linux operating system and the hardware hosting the Red Hat Linux system.
- You should be familiar with the hardware used to attach the Hitachi RAID storage system to the Red Hat Linux host, including fibre-channel cabling, host bus adapters (HBAs), switches, and hubs.

Product Version

This document revision applies to the following microcode levels:
- Hitachi Unified Storage VM microcode x or later
- Hitachi Virtual Storage Platform microcode x or later
- Hitachi Universal Storage Platform V/VM microcode x or later

Document Revision Level

Revision        Date            Description
MK-96RD640-P    February 2007   Initial Release
MK-96RD640      May 2007        Initial Release, supersedes and replaces MK-96RD640-P
MK-96RD640-01   September 2007  Revision 01, supersedes and replaces MK-96RD640
MK-96RD640-02   February 2010   Revision 02, supersedes and replaces MK-96RD640-01
MK-96RD640-03   October 2010    Revision 03, supersedes and replaces MK-96RD640-02
MK-96RD640-04   June 2012       Revision 04, supersedes and replaces MK-96RD640-03
MK-96RD640-05   September 2012  Revision 05, supersedes and replaces MK-96RD640-04
Changes in this Revision

- Added the Hitachi Unified Storage VM storage system.
- Updated the host mode option information (Table 2-2): added information about host mode options 7, 22, 39, 41, 48, 49, 50, 51, 52, 65, 68, 69, and 71.

Referenced Documents

Hitachi Unified Storage VM documents:
- User and Reference Guide, MK-92HM7005
- Provisioning Guide, MK-92HM7012
- Storage Navigator User Guide, MK-92HM7016
- Storage Navigator Messages, MK-92HM7017

Hitachi Virtual Storage Platform documents:
- Provisioning Guide for Open Systems, MK-90RD7022
- Storage Navigator User Guide, MK-90RD7027
- Storage Navigator Messages, MK-90RD7028
- User and Reference Guide, MK-90RD7042

Hitachi Universal Storage Platform V/VM documents:
- Storage Navigator Messages, MK-96RD613
- LUN Manager User's Guide, MK-96RD615
- LUN Expansion (LUSE) User's Guide, MK-96RD616
- Storage Navigator User's Guide, MK-96RD621
- Virtual LVI/LUN and Volume Shredder User's Guide, MK-96RD630
- User and Reference Guide, MK-96RD635
- Cross-OS File Exchange User's Guide, MK-96RD647

Hitachi Dynamic Link Manager for Red Hat Linux User's Guide, MK-92DLM113
Document Conventions

This document uses the following terminology conventions:
- Hitachi RAID storage system, storage system: Refers to all models of the Hitachi RAID storage systems unless otherwise noted.

This document uses the following typographic conventions:
- Bold: Indicates the following:
  - Text in a window or dialog box, such as menus, menu options, buttons, and labels. Example: On the Add Pair dialog box, click OK.
  - Text appearing on screen or entered by the user. Example: The -split option.
  - The name of a directory, folder, or file. Example: The horcm.conf file.
- Italic: Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: copy source-file target-file. Note: Angle brackets (< >) are also used to indicate variables.
- monospace: Indicates text that is displayed on screen or entered by the user. Example: # pairdisplay -g oradb
- < > angle brackets: Indicate a variable, which is a placeholder for actual text provided by the user or system. Example: # pairdisplay -g <group>. Note: Italic is also used to indicate variables.
- [ ] square brackets: Indicate optional values. Example: [ a | b ] indicates that you can choose a, b, or nothing.
- { } braces: Indicate required or expected values. Example: { a | b } indicates that you must choose either a or b.
- | vertical bar: Indicates that you have a choice between two or more options or arguments. Examples: [ a | b ] indicates that you can choose a, b, or nothing. { a | b } indicates that you must choose either a or b.

This document uses the following icons to draw attention to information:
- Note: Calls attention to important or additional information.
- Tip: Provides helpful information, guidelines, or suggestions for performing tasks more effectively.
- Caution: Warns of adverse conditions or consequences (for example, disruptive operations).
- WARNING: Warns of severe conditions or consequences (for example, destructive operations).
Convention for Storage Capacity Values

Physical storage capacity values (for example, disk drive capacity) are calculated based on the following values:
- 1 KB = 1,000 (10^3) bytes
- 1 MB = 1,000 KB or 1,000^2 bytes
- 1 GB = 1,000 MB or 1,000^3 bytes
- 1 TB = 1,000 GB or 1,000^4 bytes
- 1 PB = 1,000 TB or 1,000^5 bytes
- 1 EB = 1,000 PB or 1,000^6 bytes

Logical storage capacity values (for example, logical device capacity) are calculated based on the following values:
- 1 block = 512 bytes
- 1 KB = 1,024 (2^10) bytes
- 1 MB = 1,024 KB or 1,024^2 bytes
- 1 GB = 1,024 MB or 1,024^3 bytes
- 1 TB = 1,024 GB or 1,024^4 bytes
- 1 PB = 1,024 TB or 1,024^5 bytes
- 1 EB = 1,024 PB or 1,024^6 bytes

Accessing Product Documentation

The user documentation for the Hitachi RAID storage systems is available on the Hitachi Data Systems Portal. Check this site for the most current documentation, including important updates that may have been made after the release of the product.
Getting Help

The Hitachi Data Systems customer support staff is available 24 hours a day, seven days a week. If you need technical support, log on to the Hitachi Data Systems Portal for contact information.

Comments

Please send us your comments on this document: [email protected]. Include the document title and number, including the revision level (for example, -07), and refer to specific sections and paragraphs whenever possible. All comments become the property of Hitachi Data Systems. Thank you!
1 Introduction

This chapter provides an overview of the Hitachi RAID storage systems and host attachment:
- About the Hitachi RAID Storage Systems
- Device Types
- Installation and Configuration Roadmap
About the Hitachi RAID Storage Systems

The Hitachi RAID storage systems offer a wide range of storage and data services, including thin provisioning with Hitachi Dynamic Provisioning software, application-centric storage management and logical partitioning, and simplified and unified data replication across heterogeneous storage systems. These storage systems are an integral part of the Services Oriented Storage Solutions architecture from Hitachi Data Systems, providing the foundation for matching application requirements to different classes of storage and delivering critical services such as:
- Business continuity services
- Content management services (search, indexing)
- Non-disruptive data migration
- Volume management across heterogeneous storage arrays
- Thin provisioning
- Security services (immutability, logging, auditing, data shredding)
- Data de-duplication
- I/O load balancing
- Data classification
- File management services

The Hitachi RAID storage systems provide heterogeneous connectivity to support multiple concurrent attachment to a variety of host operating systems, including Red Hat Linux, UNIX platforms, Windows, VMware, and mainframe servers, enabling massive consolidation and storage aggregation across disparate platforms. The storage systems can operate with multi-host applications and host clusters and are designed to handle very large databases as well as data warehousing and data mining applications that store and retrieve petabytes of data.

The Hitachi RAID storage systems are configured with OPEN-V logical units (LUs) and are compatible with most fibre-channel (FC) host bus adapters (HBAs). Users can perform additional LU configuration activities using the LUN Manager, Virtual LVI/LUN (VLL), and LUN Expansion (LUSE) features provided by the Storage Navigator software, which is the primary user interface for the storage systems.

For further information on storage solutions and the Hitachi RAID storage systems, contact your Hitachi Data Systems account team.
Device Types

Table 1-1 describes the types of logical devices (volumes) that can be installed and configured for operation with the Hitachi RAID storage systems on a Red Hat Linux operating system. Table 1-2 lists the specifications for devices supported by the Hitachi RAID storage systems. Logical devices are defined to the host as SCSI disk devices, even though the interface is fibre channel. The sector size for the devices is 512 bytes. For information about configuring devices other than OPEN-V, contact your Hitachi Data Systems representative.

Table 1-1 Logical Devices Supported by the Hitachi RAID Storage Systems

OPEN-V Devices: OPEN-V logical units (LUs) are disk devices (VLL-based volumes) that do not have a predefined size.

OPEN-x Devices: OPEN-x logical units (LUs) (for example, OPEN-3, OPEN-9) are disk devices of predefined sizes. The Hitachi RAID storage systems support OPEN-3, OPEN-8, OPEN-9, OPEN-E, and OPEN-L devices. For the latest information on usage of these device types, contact your Hitachi Data Systems account team.

LUSE Devices (OPEN-x*n): LUSE devices are combined LUs that can be from 2 to 36 times larger than standard OPEN-x LUs. Using LUN Expansion (LUSE) remote console software, you can configure these custom-size devices. LUSE devices are designated as OPEN-x*n, where x is the LU type (for example, OPEN-9*n) and 2 <= n <= 36. For example, a LUSE device created from 10 OPEN-3 LUs is designated as an OPEN-3*10 disk device. This lets the host combine logical devices and access the data stored on the Hitachi RAID storage system using fewer LU numbers.

VLL Devices (OPEN-x VLL): VLL devices are custom-size LUs that are smaller than standard OPEN-x LUs. Using Virtual LVI/LUN remote console software, you can configure VLL devices by slicing a single LU into several smaller LUs that best fit your application needs to improve host access to frequently used files. The product name for the OPEN-x VLL devices is OPEN-x-CVS (CVS stands for custom volume size). The OPEN-L LU type does not support Virtual LVI/LUN.

VLL LUSE Devices (OPEN-x*n VLL): VLL LUSE devices combine Virtual LVI/LUN devices (instead of standard OPEN-x LUs) into LUSE devices. Use the Virtual LVI/LUN feature to create custom-size devices, then use the LUSE feature to combine the VLL devices. You can combine from 2 to 36 VLL devices into one VLL LUSE device. For example, an OPEN-3 LUSE volume created from 10 OPEN-3 VLL volumes is designated as an OPEN-3*10 VLL device (product name OPEN-3*10-CVS).

FX Devices (3390-3A/B/C, OPEN-x-FXoto): The Hitachi Cross-OS File Exchange (FX) software allows you to share data across mainframe, UNIX, and PC server platforms using special multiplatform volumes. The VLL feature can be applied to FX devices for maximum flexibility in volume size. For more information about FX, see the Cross-OS File Exchange User's Guide, or contact your Hitachi Data Systems account team. FX devices are not SCSI disk devices and must be installed and accessed as raw devices. UNIX/PC server hosts must use FX to access the FX devices as raw devices (no file system, no mount operation). The 3390-3B devices are write-protected from UNIX/PC server access; the Hitachi RAID storage system rejects all UNIX/PC server write operations (including fibre-channel adapters) for 3390-3B devices. The 3390-3A and 3390-3C multiplatform devices are not write-protected for UNIX/PC server access. Do not execute any write operation by the fibre-channel adapters on these devices, and do not create a partition or file system on these devices. This will overwrite any data on the FX device and prevent the FX software from accessing the device.
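On the Linux host, the device types in Table 1-1 appear in the SCSI inquiry data: the model field carries the product name (OPEN-V, OPEN-3-CVS, and so on). A minimal sketch for extracting those model strings from a /proc/scsi/scsi-style listing; the sample inquiry line is illustrative, not taken from this guide:

```shell
# Extract the Model field from /proc/scsi/scsi-style inquiry output so you
# can confirm which Table 1-1 device types (OPEN-V, OPEN-3-CVS, ...) are seen.
list_models() {
  grep -o 'Model: [^ ]*' | awk '{ print $2 }' | sort -u
}

# On a live host you would run:  list_models < /proc/scsi/scsi
# Demonstration with a sample inquiry line:
printf 'Vendor: HITACHI  Model: OPEN-V     Rev: 7300\n' | list_models   # OPEN-V
```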
Table 1-2 Device Specifications

Device Type    Category    Product Name   # of Blocks   # of       # of   # of Sectors  Capacity (MB)
               (Note 1)    (Note 2)       (512 B/blk)   Cylinders  Heads  per Track     (Note 3)
OPEN-3         SCSI disk   OPEN-3         4,806,720     3,338      15     96            2,347
OPEN-8         SCSI disk   OPEN-8         14,351,040    9,966      15     96            7,007
OPEN-9         SCSI disk   OPEN-9         14,423,040    10,016     15     96            7,042
OPEN-E         SCSI disk   OPEN-E         28,452,960    19,759     15     96            13,893
OPEN-L         SCSI disk   OPEN-L         71,192,160    49,439     15     96            34,761
OPEN-V         SCSI disk   OPEN-V         Note 4        Note 5     15     128           Note 6
OPEN-3*n       SCSI disk   OPEN-3*n       4,806,720*n   3,338*n    15     96            2,347*n
OPEN-8*n       SCSI disk   OPEN-8*n       14,351,040*n  9,966*n    15     96            7,007*n
OPEN-9*n       SCSI disk   OPEN-9*n       14,423,040*n  10,016*n   15     96            7,042*n
OPEN-E*n       SCSI disk   OPEN-E*n       28,452,960*n  19,759*n   15     96            13,893*n
OPEN-L*n       SCSI disk   OPEN-L*n       71,192,160*n  49,439*n   15     96            34,761*n
OPEN-V*n       SCSI disk   OPEN-V*n       Note 4        Note 5     15     128           Note 6
OPEN-3 VLL     SCSI disk   OPEN-3-CVS     Note 4        Note 5     15     96            Note 6
OPEN-8 VLL     SCSI disk   OPEN-8-CVS     Note 4        Note 5     15     96            Note 6
OPEN-9 VLL     SCSI disk   OPEN-9-CVS     Note 4        Note 5     15     96            Note 6
OPEN-E VLL     SCSI disk   OPEN-E-CVS     Note 4        Note 5     15     96            Note 6
OPEN-V VLL     SCSI disk   OPEN-V         Note 4        Note 5     15     128           Note 6
OPEN-3*n VLL   SCSI disk   OPEN-3*n-CVS   Note 4        Note 5     15     96            Note 6
OPEN-8*n VLL   SCSI disk   OPEN-8*n-CVS   Note 4        Note 5     15     96            Note 6
OPEN-9*n VLL   SCSI disk   OPEN-9*n-CVS   Note 4        Note 5     15     96            Note 6
OPEN-E*n VLL   SCSI disk   OPEN-E*n-CVS   Note 4        Note 5     15     96            Note 6
OPEN-V*n VLL   SCSI disk   OPEN-V*n       Note 4        Note 5     15     128           Note 6
3390-3A        FX otm/mto  3390-3A        -             -          -      -             -
3390-3B        FXmto       3390-3B        -             -          -      -             -
3390-3C        FXotm       OP-C           -             -          -      -             -
FX OPEN-3      FXoto       OPEN-3         4,806,720     3,338      15     96            2,347
3390-3A VLL    FX otm/mto  3390-3A-CVS    Note 4        Note 5     -      -             Note 6
3390-3B VLL    FXmto       3390-3B-CVS    Note 4        Note 5     -      -             Note 6
3390-3C VLL    FXotm       OP-C-CVS       Note 4        Note 5     -      -             Note 6
FX OPEN-3 VLL  FXoto       OPEN-3-CVS     Note 4        Note 5     15     96            Note 6
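For the fixed-size emulations, the geometry columns multiply out to the block count (cylinders x heads x sectors per track), so a device's reported size can be sanity-checked from the host. A sketch, with /dev/sdb as a placeholder device name:

```shell
# Table 1-2 geometry check: cylinders x heads x sectors/track = blocks.
# An OPEN-3 should report 4,806,720 512-byte sectors.
open3_blocks=$((3338 * 15 * 96))
echo "$open3_blocks"    # 4806720

# On a live host, compare against what the device actually reports
# (requires the device to exist; /dev/sdb is a placeholder):
# blockdev --getsz /dev/sdb
```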
Note 1: The category of a device (SCSI disk or FX) determines its volume usage. Table 1-3 shows the volume usage for SCSI disk devices and FX devices. The SCSI disk devices (OPEN-x, VLL, LUSE, and VLL LUSE) are usually formatted with file systems for Red Hat Linux operations. The FX devices (3390-3A/B/C and OPEN-x-FXoto) must be installed as raw devices and can only be accessed using the FX software. Do not partition or create a file system on any device used for FX operations.

Table 1-3 Volume Usage for Device Categories

Category   Device Type                                                           Volume Usage
SCSI disk  OPEN-x, OPEN-x VLL, OPEN-x*n LUSE, OPEN-x*n VLL LUSE                  File system or raw device (for example, some applications use raw devices)
FX         3390-3A/B/C, 3390-3A/B/C VLL, OPEN-x for FXoto, OPEN-x VLL for FXoto  Raw device

Note 2: The command device (used for Command Control Interface (CCI) operations) is distinguished by -CM on the product name (for example, OPEN-3-CM, OPEN-3-CVS-CM). The product name for VLL devices is OPEN-x-CVS, where CVS = custom volume size.

Note 3: This capacity is the maximum size which can be entered using the lvcreate command. The device capacity can sometimes be changed by the BIOS or host bus adapter. Also, different capacities may be due to variations such as 1 MB = 1,000,000 or 1,048,576 bytes.

Note 4: The number of blocks for a VLL volume is calculated as follows:

    # of blocks = (# of data cylinders) x (# of heads) x (# of sectors per track)

The number of sectors per track is 128 for OPEN-V and 96 for the other emulation types.

    Example: For an OPEN-3 VLL volume with capacity = 37 MB:
    # of blocks = 53 cylinders (see Note 5) x 15 heads x 96 sectors per track = 76,320

Note 5: The number of data cylinders for a Virtual LVI/LUN volume is calculated as follows (where "ceil" means that the value is rounded up to the next integer):

Number of data cylinders for an OPEN-x VLL volume (except OPEN-V):
    # of cylinders = ceil(capacity (MB) x 1024/720)
    Example: For an OPEN-3 VLL volume with capacity = 37 MB:
    # of cylinders = ceil(37 x 1024/720) = ceil(52.62) = 53 cylinders

Number of data cylinders for an OPEN-V VLL volume:
    # of cylinders = ceil((capacity (MB) specified by user) x 16/15)
    Example: For an OPEN-V VLL volume with capacity = 50 MB:
    # of cylinders = ceil(50 x 16/15) = ceil(53.33) = 54 cylinders

Number of data cylinders for a VLL LUSE volume (except OPEN-V):
    # of cylinders = ceil(capacity (MB) x 1024/720) x n
    Example: For an OPEN-3 VLL LUSE volume with capacity = 37 MB and n = 4:
    # of cylinders = ceil(37 x 1024/720) x 4 = 53 x 4 = 212

Number of data cylinders for an OPEN-V VLL LUSE volume:
    # of cylinders = ceil((capacity (MB) specified by user) x 16/15) x n
    Example: For an OPEN-V VLL LUSE volume with capacity = 50 MB and n = 4:
    # of cylinders = ceil(50 x 16/15) x 4 = 54 x 4 = 216

Number of data cylinders for a 3390-3A/C volume:
    # of cylinders = (number of cylinders) + 9

Number of data cylinders for a 3390-3B VLL volume:
    # of cylinders = (number of cylinders) + 7

S1 = maximum lvcreate size value for VLL, LUSE, and VLL LUSE devices. Calculate the maximum size value (in MB) as follows: S1 = (PE Size) x (Free PE). Note: Do not exceed the maximum lvcreate size value of 128 GB.

Note 6: The size of an OPEN-x VLL volume is specified by capacity in MB, not number of cylinders. The size of an OPEN-V VLL volume can be specified by capacity in MB or number of cylinders. The user specifies the volume size using the Virtual LVI/LUN software.
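The calculations in Notes 4 and 5 can be scripted. A sketch of the non-OPEN-V formulas, using integer ceiling division:

```shell
# cylinders = ceil(capacity_MB * 1024 / 720) for OPEN-x VLL (except OPEN-V);
# blocks    = cylinders * 15 heads * 96 sectors/track (Note 4).
vll_cylinders() { echo $(( ($1 * 1024 + 719) / 720 )); }   # ceiling division
vll_blocks()    { echo $(( $(vll_cylinders "$1") * 15 * 96 )); }

vll_cylinders 37    # 53     (matches the Note 5 example)
vll_blocks 37       # 76320  (matches the Note 4 example)
```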
Installation and Configuration Roadmap

The steps in Table 1-4 outline the general process you follow to install and configure the Hitachi RAID storage system on a Red Hat Linux operating system.

Table 1-4 Installation and Configuration Roadmap

Task
1. Verify that the system on which you are installing the Hitachi RAID storage system meets the minimum requirements for this release.
2. Prepare the Hitachi RAID storage system for the installation.
3. Connect the Hitachi RAID storage system to a Red Hat Linux host.
4. Configure the fibre-channel HBAs for the installation.
5. Verify recognition of the new devices.
6. Set the number of logical units.
7. Partition the disk devices.
8. Create file systems and mount directories, mount and verify the file systems, and set and verify auto-mount parameters.
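As a rough sketch, steps 5 through 8 of the roadmap map to host-side commands like the following. The device name, mount point, and file-system type (/dev/sdb1, /mnt/lu00, ext3) are placeholder examples, not values from this guide, and the destructive commands are shown commented out:

```shell
# Step 5: verify device recognition (detailed in chapter 2)
# cat /proc/scsi/scsi

# Step 7: partition the device (interactive; run only on the intended device)
# fdisk /dev/sdb

# Step 8: create, mount, and auto-mount the file system
# mkfs -t ext3 /dev/sdb1
# mkdir -p /mnt/lu00
# mount /dev/sdb1 /mnt/lu00

# Auto-mount entry to append to /etc/fstab (device/mount/fs-type/options/dump/pass):
fstab_entry='/dev/sdb1 /mnt/lu00 ext3 defaults 0 2'
echo "$fstab_entry"
```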
2 Installing the Storage System

This chapter describes how to install the Hitachi RAID storage system on a Red Hat Linux operating system:
- Requirements
- Preparing for the Storage System Installation
- Configuring the Fibre-Channel Ports
- Connecting the Storage System to the Red Hat Linux Host
- Configuring the Host Fibre-Channel HBAs
- Verifying New Device Recognition
Requirements

Table 2-1 lists and describes the requirements for installing the Hitachi RAID storage system on the Red Hat Linux operating system.

Table 2-1 Requirements

Hitachi RAID storage system:
- Hitachi Unified Storage VM
- Hitachi Virtual Storage Platform (VSP)
- Hitachi Universal Storage Platform V/VM (USP V/VM)
The availability of features and devices depends on the level of microcode installed on the Hitachi RAID storage system. Use the LUN Manager software on Storage Navigator to configure the fibre-channel ports.

Red Hat Linux AS/ES operating system: Refer to the Hitachi Data Systems interoperability site for specific support information for the Red Hat Linux operating system. DM Multipath: Red Hat Enterprise Linux (RHEL) version 5.4 or later (X64 or X32) is required for Device Mapper (DM) Multipath operations. Root (superuser) login access to the host system is required.

Red Hat Linux server: Refer to the Red Hat Linux user documentation for server hardware and configuration requirements.

Fibre-channel HBAs: The Hitachi RAID storage systems support fibre-channel HBAs equipped as follows:
- 8-Gbps fibre-channel interface, including shortwave non-OFC (open fibre control) optical interface and multimode optical cables with LC connectors.
- 4-Gbps fibre-channel interface, including shortwave non-OFC optical interface and multimode optical cables with LC connectors.
- 2-Gbps fibre-channel interface, including shortwave non-OFC optical interface and multimode optical cables with LC connectors.
- 1-Gbps fibre-channel interface, including shortwave non-OFC optical interface and multimode optical cables with SC connectors.
If a switch or HBA with a 1-Gbps transfer rate is used, configure the device to use a fixed 1-Gbps setting instead of Auto Negotiation; otherwise, a connection may not be established. However, the transfer speed of a CHF port cannot be set to 1 Gbps when the CHF is 8US/8UFC/16UFC, so a 1-Gbps HBA or switch cannot be connected to those CHFs. Do not connect OFC-type fibre-channel interfaces to the Hitachi RAID storage system. For information about supported fibre-channel HBAs, optical cables, hubs, and fabric switches, contact your Hitachi Data Systems account team.

Fibre-channel utilities and tools: For information about supported HBAs, drivers, hubs, and switches, see the Hitachi Data Systems interoperability site. Refer to the documentation for your HBA for information about installing the utilities and tools for your adapter.

Fibre-channel drivers: Do not install/load the drivers yet. When instructed in this guide to install the drivers for your fibre-channel HBA, refer to the documentation for your adapter.
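Once the HBA driver is loaded, the negotiated link speed can be checked from the standard Linux fc_host sysfs class, which is useful for confirming that a fixed 1-Gbps setting actually took effect. A sketch:

```shell
# Report each FC HBA's negotiated speed and port state from sysfs.
# /sys/class/fc_host is populated by the fibre-channel transport class.
fc_host_report() {
  found=0
  for h in /sys/class/fc_host/host*; do
    [ -d "$h" ] || continue
    found=1
    echo "$(basename "$h"): speed=$(cat "$h/speed") state=$(cat "$h/port_state")"
  done
  [ "$found" -eq 1 ] || echo "no fc_host entries found"
}

fc_host_report
```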
Preparing for the Storage System Installation

The following sections describe preinstallation considerations to follow before installing the Hitachi RAID storage system.

Hardware Installation Considerations

The Hitachi Data Systems representative performs the hardware installation by following the precautions and procedures in the Maintenance Manual. Hardware installation activities include:
- Assembling all hardware and cabling.
- Installing and formatting the logical devices (LDEVs). Be sure to obtain the desired LDEV configuration information from the user, including the desired number of OPEN-x, LUSE, VLL, VLL LUSE, and multiplatform (FX) devices.
- Installing the fibre-channel HBAs and cabling. The total fibre cable length attached to each fibre-channel adapter must not exceed 500 meters (1,640 feet). Do not connect any OFC-type connectors to the storage system. Do not connect/disconnect fibre-channel cabling that is being actively used for I/O; this can cause the Red Hat Linux system to hang. Always confirm that the devices on the fibre cable are offline before connecting/disconnecting the fibre cable.
- Configuring the fibre port topology. The fibre topology parameters for each fibre-channel port depend on the type of device to which the port is connected and the type of port. Determine the topology parameters supported by the device, and set your topology accordingly (see Configuring the Fibre-Channel Ports).

Before starting the installation, check all specifications to ensure proper installation and configuration.
LUN Manager Software Installation

The LUN Manager software on Storage Navigator is used to configure the fibre-channel ports. For instructions on installing LUN Manager, see the Storage Navigator User Guide for the storage system.

Setting the Host Mode

The Hitachi RAID storage system has host modes that the storage administrator must set for all new installations (newly connected ports) to Red Hat Linux hosts. The required host mode for Red Hat Linux is 00. Do not select a host mode other than 00 for Red Hat Linux. Use the LUN Manager software to set the host mode. For instructions, see the Provisioning Guide or LUN Manager User Guide for the storage system.

Caution: Changing host modes on a Hitachi RAID storage system that is already installed and configured is disruptive and requires the server to be rebooted.
Setting the Host Mode Options

When each new host group is added, the storage administrator must be sure that the host mode options (HMOs) are set for all host groups connected to Red Hat Linux hosts. Table 2-2 lists and describes the HMOs that can be used for Red Hat Linux operations. Use the LUN Manager software on Storage Navigator to set the HMOs.

WARNING: Before setting any HMO, review its functionality carefully to determine whether it can be used for your configuration and environment. If you have any questions or concerns, contact your Hitachi Data Systems representative or the Support Center. Changing HMOs on a Hitachi RAID storage system that is already installed and configured is disruptive and requires the server to be rebooted.

Table 2-2 Host Mode Options for Red Hat Linux Operations

HMO 2 (HUS VM, VSP, USP V/VM). Host mode: Common. Mandatory.
Veritas DBE+RAC. (1) Changes the response of Test Unit Ready (TUR) for Persistent Reserve. (2) According to the SCSI-3 specification, Reservation Conflict is returned for a TUR issued via a path without a registered Reservation Key. Do not apply this option to Sun Cluster. (3) Setting HMO 2 to ON enables the TUR to complete normally when issued via a path without a registered Reservation Key that previously received Reservation Conflict. Note: HMO 2 is required when the Veritas DBE for Oracle RAC (I/O Fencing) function is in use.

HMO 7 (HUS VM, VSP, USP V/VM). Host mode: Common.
Changes the setting of whether to return the Unit Attention (UA) response when adding a LUN. ON: Unit Attention response is returned (sense code: REPORTED LUNS DATA HAS CHANGED). OFF (default): Unit Attention response is not returned. Notes: 1. Set HMO 7 to ON when you expect the REPORTED LUNS DATA HAS CHANGED UA at SCSI path change. 2. If the Unit Attention report occurs frequently and the load on the host side becomes high, data transfer may not start on the host side and a timeout may occur. 3. If both HMO 7 and HMO 69 are set to ON, the UA of HMO 69 is returned to the host. Comments: For VSP and HUS VM, HMO 7 works regardless of the host mode setting. For USP V/VM, the host mode must be 0x00 or 0x09 for /99 or earlier.

HMO 13 (HUS VM, VSP, USP V/VM). Host mode: Common. Optional.
Provides SIM notification when the number of link failures detected between ports exceeds the threshold. Configure HMO 13 only when you are requested to do so.
HMO 22 (HUS VM, VSP, USP V/VM). Host mode: Common. USP V/VM: /00 or later (within x range); x-00/00 or later.
When a reserved volume receives a Mode Sense command from a node that is not reserving the volume, the host receives the following response from the storage system: ON: Normal Response. OFF (default): Reservation Conflict. Notes: 1. With HMO 22 ON, the volume status (reserved/non-reserved) is checked more frequently (several tens of milliseconds per LU). 2. With HMO 22 ON, the host OS does not receive warning messages when a Mode Select command is issued to a reserved volume. 3. There is no influence on Veritas Cluster Server software when HMO 22 is set OFF. Set HMO 22 ON when there are numerous reservation conflicts. 4. Set HMO 22 ON when Veritas Cluster Server is connected.

HMO 39 (HUS VM, VSP, USP V/VM). Host mode: Common. USP V/VM: /00 or later; VSP: /00 or later.
Resets jobs and returns UA to all initiators connected to the host group where a Target Reset has occurred. ON: Reset is performed on the jobs of all initiators connected to the host group where the Target Reset occurred, and UA is returned to all of those initiators. OFF (default): Reset is performed only on the jobs of the initiator that issued the Target Reset, and UA is returned only to that initiator. Note: This option is used in the SVC environment, where the job reset range and UA set range need to be controlled per host group when a Target Reset is received.

HMO 41 (HUS VM, VSP, USP V/VM). Host mode: Common. USP V/VM: /00 or later.
Gives priority to starting Inquiry/Report LUN commands issued from the host where this option is set. ON: Inquiry/Report LUN is started with priority. OFF (default): The operation is the same as before.
HMO 48 (USP V/VM): By setting this option to ON, in normal operation the pair status of the S-VOL is not changed to SSWS even when Read commands exceeding the threshold (1,000/6 min) are issued while a specific application is used. ON: The pair status of the S-VOL is not changed to SSWS if Read commands exceeding the threshold are issued. OFF (default): The pair status of the S-VOL is changed to SSWS if Read commands exceeding the threshold are issued. Host mode: Common. USP V/VM: /10 or later (within x range); /00 or later. Notes: 1. Set this option to ON for the host group if the transition of the pair status to SSWS is not desired when an application that issues Read commands (*1) exceeding the threshold (1,000/6 min) to the S-VOL is used in an HAM environment. *1: Currently, this applies to the vxdisksetup command of Solaris VxVM. 2. Even when a failure occurs in the P-VOL, if this option is set to ON (meaning the pair status of the S-VOL is not changed to SSWS (*2)), the response time of a Read command to the S-VOL whose pair status remains Pair takes several milliseconds. If the option is set to OFF, the response time of a Read command to the S-VOL recovers to equal that of the P-VOL, because an error is judged to have occurred in the P-VOL when Read commands exceeding the threshold are issued. *2: Until the S-VOL receives a Write command, the pair status of the S-VOL is not changed to SSWS.
HMO 49 (HUS VM, VSP, USP V/VM): Selects the BB_Credit value (HMO 49: low bit). ON: The subsystem operates with a BB_Credit value of 80 or 255. OFF (default): The subsystem operates with a BB_Credit value of 40 or 128. The BB_Credit value is decided by the 2 bits of HMO 50/HMO 49: 00: existing mode (BB_Credit value = 40); 01: BB_Credit value = 80; 10: BB_Credit value = 128; 11: BB_Credit value = 255. Note: This option is applied when the two conditions below are met: data frame transfer in a long-distance connection exceeds the BB_Credit value, and system option mode 769 is set to OFF (retry operation is enabled at TC/UR path creation). VSP, HUS VM: 1. When HMO 49 is set to ON, an SSB log of link down is output on the MCU (M-DKC) and RCU (R-DKC). 2. This HMO can work only when the micro-program supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC). 3. The HMO setting is only applied to the Initiator-Port and RCU Target-Port. This function is only applicable when the 8UFC or 16UFC PCB is used on the RCU/MCU. 4. If this option is used, the Point to Point setting is necessary. 5. When removing the 8UFC or 16UFC PCB, the operation must be executed after setting HMO 49 to OFF. 6. If HMO 49 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching. 7. Make sure to set HMO 49 from OFF to ON or from ON to OFF after the pair is suspended or when the load is low. 8. The RCU Target, which is connected with the MCU where this mode is set to ON, cannot be used for UR. 9. This function is prepared for long-distance data transfer. Therefore, if HMO 49 is set to ON with a distance of 0 km, a data transfer error may occur on the RCU side. USP V/VM: 1. When HMO 49 is set to ON, an SSB log of link down is output on the MCU (M-DKC). 2. This HMO can work only when the micro-program supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC). 3. The HMO setting is only applied to the Initiator-Port.
This function is only applicable when the 8US PCB is used on the RCU/MCU. 4. If this option is used, the Point to Point setting is necessary. 5. When removing the 8US PCB, the operation must be executed after setting HMO 49 to OFF. 6. If HMO 49 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching. 7. Make sure to set HMO 49 from OFF to ON or from ON to OFF after the pair is suspended or when the load is low. 8. The RCU Target, which is connected with the MCU where this mode is set to ON, cannot be used for UR. 9. This function is prepared for long-distance data transfer. Therefore, if HMO 49 is set to ON with a distance of 0 km, a data transfer error may occur on the RCU side. Host mode: Common. VSP: /00 and higher (within x range); /00 or later. USP V/VM: /00 or later.
HMO 50 (HUS VM, VSP, USP V/VM): Selects the BB_Credit value (HMO 50: high bit). ON: The subsystem operates with a BB_Credit value of 128 or 255. OFF (default): The subsystem operates with a BB_Credit value of 40 or 80. The BB_Credit value is decided by the 2 bits of HMO 50/HMO 49: 00: existing mode (BB_Credit value = 40); 01: BB_Credit value = 80; 10: BB_Credit value = 128; 11: BB_Credit value = 255. Note: This option is applied when the two conditions below are met: data frame transfer in a long-distance connection exceeds the BB_Credit value, and system option mode 769 is set to OFF (retry operation is enabled at TC/UR path creation). VSP, HUS VM: 1. When HMO 50 is set to ON, an SSB log of link down is output on the MCU (M-DKC) and RCU (R-DKC). 2. This HMO can work only when the micro-program supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC). 3. The HMO setting is only applied to the Initiator-Port and RCU Target-Port. This function is only applicable when the 8UFC or 16UFC PCB is used on the RCU/MCU. 4. If this option is used, the Point to Point setting is necessary. 5. When removing the 8UFC or 16UFC PCB, the operation must be executed after setting HMO 50 to OFF. 6. If HMO 50 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching. 7. Make sure to set HMO 50 from OFF to ON or from ON to OFF after the pair is suspended or when the load is low. 8. The RCU Target, which is connected with the MCU where this mode is set to ON, cannot be used for UR. 9. This function is prepared for long-distance data transfer. Therefore, if HMO 50 is set to ON with a distance of 0 km, a data transfer error may occur on the RCU side. USP V/VM: 1. When HMO 50 is set to ON, an SSB log of link down is output on the MCU (M-DKC). 2. This HMO can work only when the micro-program supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC). 3. The HMO setting is only applied to the Initiator-Port.
This function is only applicable when the 8US PCB is used on the RCU/MCU. 4. If this option is used, the Point to Point setting is necessary. 5. When removing the 8US PCB, the operation must be executed after setting HMO 50 to OFF. 6. If HMO 50 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching. 7. Make sure to set HMO 50 from OFF to ON or from ON to OFF after the pair is suspended or when the load is low. 8. The RCU Target, which is connected with the MCU where this mode is set to ON, cannot be used for UR. 9. This function is prepared for long-distance data transfer. Therefore, if HMO 50 is set to ON with a distance of 0 km, a data transfer error may occur on the RCU side. Host mode: Common. VSP: /00 and higher (within x range); /00 or later. USP V/VM: /00 or later.
HMO 51 (HUS VM, VSP, USP V/VM): Selects the operation condition of TrueCopy. ON: TrueCopy operates in the performance-improvement logic (when a WRITE command is issued, FCP_CMD/FCP_DATA is issued continuously while the XFER_RDY issued from the RCU side is prevented). OFF (default): TrueCopy operates in the existing logic. Note: This option is applied when write I/O of TrueCopy is executed. VSP, HUS VM: 1. When HMO 51 is set to ON, an SSB log of link down is output on the MCU (M-DKC) and RCU (R-DKC). 2. This HMO can work only when the micro-program supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC). 3. The HMO setting is only applied to the Initiator-Port and RCU Target-Port. This function is only applicable when the 8UFC or 16UFC PCB is used on the RCU/MCU. 4. When removing the 8UFC or 16UFC PCB, the operation must be executed after setting HMO 51 to OFF. 5. If HMO 51 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching. 6. Make sure to set HMO 51 from OFF to ON or from ON to OFF after the pair is suspended or when the load is low. 7. The RCU Target, which is connected with the MCU where this mode is set to ON, cannot be used for UR. 8. When HMO 51 is set to ON using RAID600 as MCU and RAID700 as RCU, the micro-program of RAID600 must be /00 or higher (within x range) or /00 or higher. 9. Path attribute change (Initiator Port - RCU-Target Port, RCU-Target Port - Initiator Port) accompanied with Hyperswap is enabled after setting HMO 51 to ON. If HMO 51 is already set to ON on both paths, HMO 51 continues to be applied on the paths even after execution of Hyperswap. 10. In a storage system with the maximum number of MPBs (8 MPBs) mounted, HMO 51 may need to be used with HMO 65. In this case, also see HMO 65. USP V/VM: 1. When HMO 51 is set to ON, an SSB log of link down is output on the MCU (M-DKC). 2.
This HMO can work only when the micro-program supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC). 3. The HMO setting is only applied to the Initiator-Port. This function is only applicable when the 8US PCB is used on the RCU/MCU. 4. When removing the 8US PCB, the operation must be executed after setting HMO 51 to OFF. 5. If HMO 51 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching. 6. Make sure to set HMO 51 from OFF to ON or from ON to OFF after the pair is suspended or when the load is low. 7. The RCU Target, which is connected with the MCU where this mode is set to ON, cannot be used for UR. 8. When HMO 51 is set to ON using RAID600 as MCU and RAID700 as RCU, the micro-program of RAID600 must be /00 or higher (within x range) or /00 or higher. 9. Path attribute change (Initiator Port - RCU-Target Port, RCU-Target Port - Initiator Port) accompanied with Hyperswap is enabled after setting HMO 51 to ON. Host mode: Common. VSP: /00 and higher (within x range); /00 or later. USP V/VM: /00 or later.
HMO 52 (VSP): Enables a function using HAM to transfer SCSI-2 reserve information. If using software for a cluster system that uses a SCSI-2 Reservation, set host mode option 52 on the host groups where the executing node and standby node reside. ON: The function to transfer SCSI-2 reserve information is enabled. OFF (default): The function to transfer SCSI-2 reserve information is not enabled. Host mode: Common. VSP: /00 or later. Notes: 1. To use HAM to transfer SCSI-2 reserve information, the cluster middleware (alternate path) on the host side must have been evaluated with the function. 2. Set this HMO to ON on both paths of the P-VOL and S-VOL to use this function.

HMO 65 (VSP): Selects the TrueCopy operation mode when the Round Trip function is enabled by setting HMO 51 to ON in a configuration with the maximum number of MPBs. ON: TrueCopy is performed in the enhanced performance-improvement mode of Round Trip. OFF (default): TrueCopy is performed in the existing Round Trip mode. Notes: 1. The option is applied when response performance for an update I/O degrades while the Round Trip function is used in a configuration with the maximum number of MPBs. 2. When using the option, set HMO 51 to ON. 3. The option can work only when HMO 51 is ON; refer to the description of HMO 51. 4. When the option is set to ON, SSB logs of link down are output on the MCU (M-DKC) and RCU (R-DKC). 5. The option can work only when the micro-program supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC). 6. The option setting is applied to the Initiator-Port and RCU Target-Port. The function is applicable only when the PCB type of 8UFC or 16UFC is used on the MCU and RCU, and in the configuration with 4 sets of MPBs on the MCU. 7. A setting change of the option from OFF to ON or from ON to OFF must be done after the pair is suspended or when the load is low. 8. Before downgrading the micro-program from a supported version to an unsupported version, set the option to OFF.
(Micro-program exchange without setting the option to OFF is guarded. In this case, set the option to OFF and then retry the micro-program exchange.) Host mode: Common. /00 or later.
HMO 68 (VSP): By setting the option, the Linux OS can judge whether the conditions are met to issue WriteSame(16) to the storage system. ON: 05h is set in the Version field of Standard Inquiry; the LBPRZ and LBPME bits of Read Capacity(16) are set; the Block Limits VPD Page and LBP VPD Page are returned. OFF (default): 02h is set in the Version field of Standard Inquiry; the LBPRZ and LBPME bits of Read Capacity(16) are not supported; the Block Limits VPD Page and LBP VPD Page are not supported. Notes: 1. This HMO is applied when Dynamic Provisioning is used by Linux or higher. 2. HMO 68 should be used separately from HMO 63, which also changes the setting values of the Standard Inquiry Page and Read Capacity(16) and switches the support for the Block Limits VPD Page.

HMO 69 (VSP): Enables/disables the UA response to a host when an LU whose capacity has been expanded receives a command from the host. ON: When an LU whose capacity has been expanded receives a command from a host, UA is returned to the host. Sense key: 0x06 (Unit Attention). Sense code: 0x2a09 (Capacity Data Has Changed), 0x2a01 (Mode Parameters Changed). OFF (default): When an LU whose capacity has been expanded receives a command from a host, UA is not returned to the host. Notes: 1. The option is applied when returning UA to the host after LUSE capacity expansion is required. 2. If both HMO 07 and HMO 69 are set to ON, the UA of HMO 69 is returned to the host.

HMO 71 (VSP): Switches the sense key/sense code returned in response to Check Condition when a read/write I/O is received while a DP pool is blocked. ON: The sense key/sense code returned in response to Check Condition when a read/write I/O is received while a DP pool is blocked is 03 (MEDIUM ERROR)/9001 (VENDOR UNIQUE). OFF (default): The sense key/sense code returned in response to Check Condition when a read/write I/O is received while a DP pool is blocked is 0400 (LOGICAL UNIT NOT READY/CAUSE NOT REPORTABLE).
Note: This option is applied if switching the sense key/sense code returned in response to Check Condition when a read/write I/O is received while a DP pool is blocked can prevent a device file from being blocked, so that the extent of the impact can be reduced on the host side. Host mode (HMO 68/69/71): Common. Micro-program: /00 or later.
Configuring the Fibre-Channel Ports

Use LUN Manager to configure the fibre-channel ports with the appropriate fibre parameters. Select the appropriate settings for each port based on the device to which the port is connected. Determine the topology parameters supported by the device, and set your topology accordingly. The Hitachi RAID storage system supports up to 2048 logical units per fibre-channel port (512 per host group). Check your fibre-channel adapter documentation and your Linux system documentation to determine the total number of devices that can be supported. Table 2-3 explains the settings for defining port parameters. For instructions, see the Provisioning Guide or LUN Manager User Guide for the storage system.

Table 2-3 Fibre Parameter Settings

  Fabric: Enable,  Connection: FC-AL          -> Provides: FL-port (fabric port)
  Fabric: Enable,  Connection: Point-to-Point -> Provides: F-port (fabric port)
  Fabric: Disable, Connection: FC-AL          -> Provides: NL-port (private arbitrated loop)
  Fabric: Disable, Connection: Point-to-Point -> Not supported

Note: If you plan to connect different types of servers to the Hitachi RAID storage system via the same fabric switch, use the zoning function of the fabric switch. Contact Hitachi Data Systems for information about port topology configurations supported by HBA/switch combinations. Not all switches support F-port connection.
Port Address Considerations for Fabric Environments

In fabric environments, port addresses are assigned automatically by fabric switch port number and are not controlled by the port settings. In arbitrated loop environments, the port addresses are set by entering an AL-PA (arbitrated-loop physical address, or loop ID). Table 2-4 shows the available AL-PA values, which range from 01 to EF. Fibre-channel protocol uses the AL-PAs to communicate on the fibre-channel link, but the software driver of the platform host adapter translates the AL-PA value assigned to the port to a SCSI TID.

Table 2-4 Available AL-PA Values (the table of valid loop IDs from EF down to 01 did not survive reproduction legibly in this copy)

Loop ID Conflicts

The Red Hat Linux operating system assigns port addresses from lowest (01) to highest (EF). To avoid loop ID conflicts, assign the port addresses from highest to lowest (that is, starting at EF). The AL-PAs should be unique for each device on the loop to avoid conflicts. Do not use more than one port address with the same TID in the same loop (for example, addresses EF and CD both have TID 0; see Table 2-4).
Connecting the Storage System to the Red Hat Linux Host

After you prepare the hardware, software, and fibre-channel HBAs, connect the Hitachi RAID storage system to the Red Hat Linux system. Table 2-5 summarizes the steps for connecting the Hitachi RAID storage system to the Red Hat Linux system host. Some steps are performed by the Hitachi Data Systems representative, while others are performed by the user.

Table 2-5 Connecting the Storage System to the Red Hat Linux Host

1. Verify storage system installation (Hitachi Data Systems representative): Confirm that the status of the fibre-channel HBAs and LDEVs is NORMAL.
2. Shut down the Red Hat Linux system (user): Power off the Red Hat Linux system before connecting the Hitachi RAID storage system. Shut down the Red Hat Linux system. When shutdown is complete, power off the Red Hat Linux display. Power off all peripheral devices except for the Hitachi RAID storage system. Power off the host system. You are now ready to connect the Hitachi RAID storage system.
3. Connect the Hitachi RAID storage system (Hitachi Data Systems representative): Install fibre-channel cables between the storage system and the Red Hat Linux system. Follow all precautions and procedures in the Maintenance Manual. Check all specifications to ensure proper installation and configuration.
4. Power on the Red Hat Linux system (user): Power on the Red Hat Linux system after connecting the Hitachi storage system: power on the Red Hat Linux system display, then power on all peripheral devices.
5. Boot the Red Hat Linux system (user): The Hitachi RAID storage system should be on, and the fibre-channel ports should be configured. If the fibre ports are configured after the Linux system is powered on, restart the system to have the new devices recognized. Confirm the ready status of all peripheral devices, including the Hitachi RAID storage system. Power on the Red Hat Linux system.
Configuring the Host Fibre-Channel HBAs

You need to configure the fibre-channel HBAs connected to the Hitachi RAID storage system. The HBAs have many configuration options. This section provides the minimum requirements for configuring host fibre-channel adapters for operation with the Hitachi RAID storage system. Use the same settings and device parameters for all devices on the Hitachi RAID storage system. The queue depth requirements for the devices on the Hitachi RAID storage system are specified in Table 2-6. You can adjust the queue depth for the devices later as needed (within the specified range) to optimize the I/O performance of the devices. For QLogic adapters, enable the BIOS. For other adapters, you might need to disable the BIOS to prevent the system from trying to boot from the Hitachi RAID storage system. Refer to the documentation for the adapter. Several other parameters (for example, FC, fabric, multipathing) may also need to be set. See the user documentation for the HBA to determine whether other options or settings are required to meet your operational requirements.

Note: If you plan to use Device Mapper (DM) Multipath operations, contact the Hitachi Data Systems Support Center for important HBA settings, such as disabling the HBA failover function and editing the /etc/modprobe.conf file.

Table 2-6 Queue Depth Requirements

  IOCB allocation (queue depth) per LU: 32
  IOCB allocation (queue depth) per port (MAXTAGS): 2048
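The per-LU queue depth in Table 2-6 is typically pinned through an HBA driver module option. The sketch below only writes the option lines to a local file for review; the parameter names shown (ql2xmaxqdepth for QLogic qla2xxx, lpfc_lun_queue_depth for Emulex lpfc) are common driver parameters but vary by driver version, so verify them with modinfo before copying anything into /etc/modprobe.conf or /etc/modprobe.d/.

```shell
#!/bin/sh
# Sketch: queue-depth module options for common HBA drivers.
# ASSUMPTION: the parameter names below match your installed driver
# version; check with `modinfo qla2xxx` / `modinfo lpfc` before use.
# Written to a local file here; merge into /etc/modprobe.conf (or a file
# under /etc/modprobe.d/) on the real host, then rebuild the initrd.
CONF=./hba-queue-depth.conf

cat > "$CONF" <<'EOF'
options qla2xxx ql2xmaxqdepth=32
options lpfc lpfc_lun_queue_depth=32
EOF

cat "$CONF"
```

The value 32 matches the per-LU requirement in Table 2-6; the per-port limit (2048) is enforced by the driver/firmware rather than by this option.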
Verifying New Device Recognition

The final step before configuring the new disk devices is to verify that the host system recognizes the new devices. The host system automatically creates a device file for each new device recognized. To verify new device recognition:

1. Use the dmesg command to display the devices (see Figure 2-1).
2. Record the device file name for each new device. You will need this information when you partition the devices (see Partitioning the Devices). See Table 2-7 for a sample SCSI path worksheet.
3. The device files are created under the /dev directory. Verify that a device file was created for each new disk device (see Figure 2-2).

# dmesg | more
  :
scsi0 : Qlogic QLA2200 PCI to Fibre Channel Host Adapter: 0 device 14 irq 11
        Firmware version: , Driver version 2.11 Beta
scsi : 1 host.
  Vendor: HITACHI   Model: OPEN-3   Rev: 0111
  Type: Direct-Access   ANSI SCSI revision: 02
Detected scsi disk sda at scsi0, channel 0, id 0, lun 0
        (device file name of this disk = /dev/sda; "lun 0" is the logical unit number)
  Vendor: HITACHI   Model: OPEN-9   Rev: 0111
  Type: Direct-Access   ANSI SCSI revision: 02
Detected scsi disk sdb at scsi0, channel 0, id 0, lun 1
  :

In this example, the HITACHI OPEN-3 device (TID 0, LUN 0) and the HITACHI OPEN-9 device (TID 0, LUN 1) are recognized by the Red Hat Linux server.

Figure 2-1 Example of Verifying New Device Recognition

# ls -l /dev | more
  :
brw-rw---- 1 root disk 8, 0 May ... sda    (device file = sda)

Figure 2-2 Example of Verifying Device Files
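Steps 1 and 2 above can be partly automated by filtering the dmesg output for the detection lines. A minimal sketch, run here against a captured sample so the pipeline is reproducible end to end; on a live host, feed it the output of `dmesg` instead. The sample file name is illustrative.

```shell
#!/bin/sh
# Sketch: extract "<device> <id> <lun>" from the dmesg detection lines.
# A captured sample stands in for live `dmesg` output here.
cat > ./dmesg-sample.txt <<'EOF'
  Vendor: HITACHI   Model: OPEN-3   Rev: 0111
Detected scsi disk sda at scsi0, channel 0, id 0, lun 0
  Vendor: HITACHI   Model: OPEN-9   Rev: 0111
Detected scsi disk sdb at scsi0, channel 0, id 0, lun 1
EOF

# Field positions follow the "Detected scsi disk ..." line format above.
awk '/Detected scsi disk/ { gsub(",", ""); print $4, $10, $12 }' ./dmesg-sample.txt
```

For the sample above this prints `sda 0 0` and `sdb 0 1`, which maps directly onto the worksheet columns in Table 2-7.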
Table 2-7 Sample SCSI Path Worksheet

Columns: LDEV (CU:LDEV) / Device Type / LUSE (*n) / VLL (MB) / Device File Name / Path (TID:) / Alternate Path (TID:)

The worksheet provides one blank row for each LDEV from 0:00 through 0:0F; record the device type, device file name, and the TID of the path and alternate path for each device.
3  Configuring the New Disk Devices

This chapter describes how to configure the new disk devices on the Red Hat Linux system host:
  Setting the Number of Logical Units
  Partitioning the Devices
  Creating, Mounting, and Verifying the File Systems
Setting the Number of Logical Units

To set the number of LUs:

1. Edit the /etc/modules.conf file to add the following line:
   options scsi_mod max_scsi_luns=xx
   where xx is the maximum number of LUs supported by your Linux operating system. Check your fibre-channel adapter documentation and Linux system documentation to ascertain the total number of devices that can be supported.
2. To set the Emulex driver, as shown in Figure 3-3, add the following line to the /etc/modules.conf file:
   alias scsi_hostadapter lpfcdd
3. To activate the above modification, make an image file for booting. Example:
   # mkinitrd /boot/initrd-2.4.x.scsiluns.img `uname -r`
4. Use one of the following methods to change the Bootloader setting:
   a. LILO is used as the Bootloader. Edit the lilo.conf file as shown in Figure 3-1, then issue the lilo command to activate the lilo.conf setting, selecting the label. Example:
      # lilo
   b. Grand Unified Bootloader (GRUB) is used as the Bootloader. Edit the /boot/grub/grub.conf file as shown in Figure 3-2.
5. Reboot the system.

image=/boot/vmlinuz-qla2x00
label=linux-qla2x00
append="max_scsi_luns=16"
# initrd=/boot/initrd-2.4.x.img           <- comment out this line
initrd=/boot/initrd-2.4.x.scsiluns.img    <- add this line
root=/dev/sda7
read-only

Figure 3-1 Example of Setting the Number of LUs (LILO)

kernel /boot/vmlinuz-2.4.x ro root=/dev/hda1
# initrd /boot/initrd-2.4.x.img           <- this line is commented out
initrd /boot/initrd-2.4.x.scsiluns.img    <- add this line

Figure 3-2 Example of Setting the Number of LUs (GRUB)

alias scsi_hostadapter lpfcdd             <- add this to /etc/modules.conf

Figure 3-3 Example of Setting the Emulex Driver
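Steps 1 and 2 above lend themselves to a small idempotent script. The sketch below edits a local working copy so it can be run safely anywhere; point MODCONF at /etc/modules.conf on a real 2.4-kernel host, and remember that the mkinitrd and bootloader updates of steps 3-4 are still required afterward. The LU count of 16 is an illustrative assumption.

```shell
#!/bin/sh
# Sketch: add the scsi_mod and Emulex lines to modules.conf only if they
# are absent, keeping a backup first. Edits a local working copy
# (ASSUMPTION: set MODCONF=/etc/modules.conf on the real host).
MODCONF=./modules.conf
MAXLUNS=16   # ASSUMPTION: replace with the LU count your kernel supports

touch "$MODCONF"
cp -p "$MODCONF" "$MODCONF.bak"

grep -q 'max_scsi_luns' "$MODCONF" || \
    echo "options scsi_mod max_scsi_luns=$MAXLUNS" >> "$MODCONF"
grep -q 'scsi_hostadapter lpfcdd' "$MODCONF" || \
    echo "alias scsi_hostadapter lpfcdd" >> "$MODCONF"

cat "$MODCONF"
```

Because both additions are guarded by grep, running the script twice does not duplicate the lines.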
Partitioning the Devices

After setting the number of logical units, you need to create the partitions on the new disk devices.

Note: For important information about creating partitions with DM Multipath, contact the Hitachi Data Systems Support Center.

To create the partitions on the new disk devices:

1. Enter fdisk /dev/<device_name>. Example: fdisk /dev/sda, where /dev/sda is the device file name.
2. Select p to display the present partitions.
3. Select n to make a new partition. You can make up to four primary partitions (1-4) or one extended partition. The extended partition can be organized into 11 logical partitions, which can be assigned partition numbers from 5 to 15.
4. Select w to write the partition information to disk and complete the fdisk command.
   Tip: Other useful commands include d to remove partitions and q to quit without saving changes.
5. Repeat steps 1 through 4 for each new disk device.
Creating, Mounting, and Verifying the File Systems

Creating the File Systems

After you partition the devices, create the file systems. Be sure the file systems are appropriate for the primary and/or extended partition for each logical unit. To create a file system, issue the mkfs command:

# mkfs /dev/sda1

where /dev/sda1 is the device file of primary partition number 1.

Creating the Mount Directories

To create the mount directories, issue the mkdir command:

# mkdir /USP-LU00

Mounting the New File Systems

Use the mount command to mount each new file system (see the example in Figure 3-4). The first parameter of the mount command is the device file name (/dev/sda1), and the second parameter is the mount directory, as shown in Figure 3-4.

# mount /dev/sda1 /USP-LU00
  (first parameter: device file name; second parameter: mount directory name)

Figure 3-4 Example of Mounting the New Devices

Verifying the File Systems

After mounting the file systems, verify the file systems (see the example in Figure 3-5).

# df -h
Filesystem  Size  Used  Avail  Used%  Mounted on
/dev/sda1   1.8G  890M  866M   51%    /
/dev/sdb1   1.9G  1.0G  803M   57%    /usr
/dev/sdc1   2.2G  13k   2.1G   0%     /USP-LU00

Figure 3-5 Example of Verifying the File System
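For several new LUs, the mkfs/mkdir/mount sequence above is easy to generate mechanically and review before running. A dry-run sketch: the device list and the /USP-LUnn mount-point naming are taken from the examples above and are assumptions to adjust for your configuration; nothing is executed until the generated file is deliberately run.

```shell
#!/bin/sh
# Sketch: generate (not execute) the mkfs/mkdir/mount commands for a list
# of new partitions. ASSUMPTION: the device names and /USP-LUnn naming
# below are illustrative; edit them for your configuration, review the
# output file, then run it explicitly.
i=0
for part in sda1 sdb1 sdc1; do
    mp=$(printf '/USP-LU%02d' "$i")
    echo "mkfs /dev/$part"
    echo "mkdir $mp"
    echo "mount /dev/$part $mp"
    i=$((i + 1))
done > ./setup-commands.sh

cat ./setup-commands.sh
```

Reviewing the generated file before running it guards against formatting the wrong device, which mkfs does without further confirmation.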
Setting the Auto-Mount Parameters

To set the auto-mount parameters, edit the /etc/fstab file (see the example in Figure 3-6).

# cp -ip /etc/fstab /etc/fstab.standard      <- make a backup of /etc/fstab
# vi /etc/fstab                              <- edit /etc/fstab
  :
/dev/sda1 /USP-LU00 ext2 defaults 0 2        <- add the new device

Figure 3-6 Example of Setting Auto-Mount Parameters
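The fstab edit in Figure 3-6 can be scripted with a backup and a simple field-count check, since a malformed line can stall the boot-time mount pass. A sketch against a local working copy; set FSTAB=/etc/fstab on the real host. The entry shown repeats the Figure 3-6 example.

```shell
#!/bin/sh
# Sketch: append an auto-mount entry (as in Figure 3-6) and verify that
# every non-comment line has the six fields mount expects.
# Edits a local working copy (ASSUMPTION: use FSTAB=/etc/fstab for real).
FSTAB=./fstab
: > "$FSTAB"                       # stand-in for the existing file
cp -p "$FSTAB" "$FSTAB.standard"   # backup, as in Figure 3-6

echo '/dev/sda1 /USP-LU00 ext2 defaults 0 2' >> "$FSTAB"

awk '!/^#/ && NF && NF != 6 { bad = 1; print "bad line: " $0 }
     END { exit bad }' "$FSTAB" && echo "fstab OK"
```

The six fields are device, mount point, file system type, options, dump flag, and fsck pass number, matching the entry in Figure 3-6.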
4  Failover and SNMP Operation

The Hitachi RAID storage systems support industry-standard products and functions that provide host and/or application failover, I/O path failover, and logical volume management (LVM). The Hitachi RAID storage systems also support the industry-standard simple network management protocol (SNMP) for remote storage system management from the UNIX/PC server host. SNMP is used to transport management information between the storage system and the SNMP manager on the host. The SNMP agent sends status information to the host when requested by the host or when a significant event occurs.

This chapter describes how failover and SNMP operations are supported on the Hitachi RAID storage system:
  Host Failover
  Path Failover
  Device Mapper Multipath
  SNMP Remote System Management

Note: The user is responsible for configuring the failover and SNMP management software on the UNIX/PC server host. For assistance with failover and/or SNMP configuration on the host, refer to the user documentation, or contact the vendor's technical support.
Host Failover

The Hitachi RAID storage systems support the Veritas Cluster Server and host failover products for the Red Hat Linux operating system. The user must be sure to configure the host failover software and any other high-availability (HA) software as needed to recognize and operate with the newly attached devices. For assistance with Veritas Cluster Server operations, refer to the Veritas user documentation, see Note on Using Veritas Cluster Server, or contact Symantec technical support. For assistance with specific configuration issues related to the Hitachi RAID storage system, contact your Hitachi Data Systems representative.

Path Failover

The Hitachi RAID storage systems support the Hitachi HiCommand Dynamic Link Manager (HDLM) and Veritas Volume Manager for the Red Hat Linux operating system. For further information, see the Hitachi Dynamic Link Manager for Red Hat Linux User's Guide. For assistance with Veritas Volume Manager operations, refer to the Veritas user documentation or contact Symantec technical support.

Device Mapper Multipath

The Hitachi Virtual Storage Platform and Hitachi Universal Storage Platform V/VM support DM Multipath operations for Red Hat Enterprise Linux (RHEL) version 5.4 X64 or X32 or later.

Note: Contact the Hitachi Data Systems Support Center for important information about required settings and parameters for DM Multipath operations, including but not limited to:
  Disabling the HBA failover function
  Editing the /etc/modprobe.conf file
  Editing the /etc/multipath.conf file
  Configuring LVM
  Configuring raw devices
  Creating partitions with DM Multipath
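While the supported values must come from the Hitachi Data Systems Support Center, the general shape of a DM Multipath configuration can still be illustrated. The sketch below writes a skeleton to a local sample file; the vendor/product strings and the multibus policy are illustrative assumptions only, not vendor-approved settings.

```shell
#!/bin/sh
# Illustration only: skeleton of an /etc/multipath.conf device stanza for
# a Hitachi array. ASSUMPTION: every attribute value here is a placeholder;
# obtain the supported settings from the HDS Support Center before use.
cat > ./multipath.conf.sample <<'EOF'
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor  "HITACHI"
        product "OPEN-.*"
        path_grouping_policy multibus
        # path_checker, failback, etc.: use vendor-recommended values
    }
}
EOF

cat ./multipath.conf.sample
```

After a reviewed file is placed at /etc/multipath.conf, the multipathd service reads it; the stanza structure (defaults and per-device blocks) is the part this sketch is meant to show.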
SNMP Remote System Management

SNMP is a part of the TCP/IP protocol suite that supports maintenance functions for storage and communication devices. The Hitachi RAID storage systems use SNMP to transfer status and management commands to the SNMP manager on the Red Hat Linux server host via a notebook PC (see Figure 4-1). When the SNMP manager requests status information or when a service information message (SIM) occurs, the SNMP agent on the storage system notifies the SNMP manager on the Red Hat Linux server. Notification of error conditions is made in real time, providing the Red Hat Linux server user with the same level of monitoring and support available to the mainframe user. The SIM reporting via SNMP enables the user to monitor the Hitachi RAID storage system from the Red Hat Linux server host. When a SIM occurs, the SNMP agent initiates trap operations, which alert the SNMP manager of the SIM condition. The SNMP manager receives the SIM traps from the SNMP agent, and can request information from the SNMP agent at any time.

Note: The user is responsible for configuring the SNMP manager on the Red Hat Linux server host. For assistance with SNMP manager configuration on the Red Hat Linux server host, refer to the user documentation, or contact the vendor's technical support.

Figure 4-1 SNMP Environment (diagram: SIMs and error information flow from the Hitachi RAID storage system through the service processor and private LAN to the SNMP manager on the UNIX/PC server via the public LAN)
5  Troubleshooting

This chapter provides troubleshooting information for Red Hat Linux host attachment and instructions for calling technical support:
  General Troubleshooting
  Calling the Hitachi Data Systems Support Center
General Troubleshooting

Table 5-1 lists potential error conditions that may occur during storage system installation and provides instructions for resolving each condition. If you cannot resolve an error condition, contact your Hitachi Data Systems representative for help, or call the Hitachi Data Systems Support Center for assistance.

For troubleshooting information on the Hitachi RAID storage system, see the User and Reference Guide for the storage system (for example, Hitachi Virtual Storage Platform User and Reference Guide). For troubleshooting information on Hitachi Storage Navigator, see the Storage Navigator User's Guide for the storage system (for example, Hitachi Virtual Storage Platform Storage Navigator User Guide). For information on error messages displayed by Storage Navigator, see the Storage Navigator Messages document for the storage system (for example, Hitachi Virtual Storage Platform Storage Navigator Messages).

Table 5-1 Troubleshooting

Error condition: The logical devices are not recognized by the system.
Recommended action: Be sure that the READY indicator lights on the Hitachi RAID storage system are ON. Be sure that the LUNs are properly configured. The LUNs for each target ID must start at 0 and continue sequentially without skipping any numbers.

Error condition: The file system cannot be created.
Recommended action: Be sure that the device name is entered correctly with mkfs. Be sure that the LU is properly connected and partitioned.

Error condition: The file system is not mounted after rebooting.
Recommended action: Be sure that the system was restarted properly. Be sure that the auto-mount information in the /etc/fstab file is correct.
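The third condition in Table 5-1 (file system not mounted after rebooting) can be pre-checked by confirming that every device named in fstab actually has a device node. A sketch run here against a sample file so the failure case is visible; use /etc/fstab on the real host. The sample device names are illustrative.

```shell
#!/bin/sh
# Sketch: flag fstab entries whose device node is missing -- one common
# cause of "the file system is not mounted after rebooting".
# ASSUMPTION: a local sample stands in for /etc/fstab.
cat > ./fstab.sample <<'EOF'
/dev/sda1 /USP-LU00 ext2 defaults 0 2
/dev/nosuchdev /USP-LU01 ext2 defaults 0 2
EOF

awk '!/^#/ && NF { print $1 }' ./fstab.sample | while read -r dev; do
    if [ -e "$dev" ]; then
        echo "OK      $dev"
    else
        echo "MISSING $dev"
    fi
done
```

Any MISSING line points back to the first two conditions in Table 5-1: either the device was never recognized or its partitioning/device file is wrong.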
Calling the Hitachi Data Systems Support Center

If you need to call the Hitachi Data Systems Support Center, provide as much information about the problem as possible, including:
- The circumstances surrounding the error or failure.
- The exact content of any error messages displayed on the host systems.
- The exact content of any error messages displayed by Storage Navigator.
- The Storage Navigator configuration information (use the FD Dump Tool).
- The service information messages (SIMs), including reference codes and severity levels, displayed by Storage Navigator.

The Hitachi Data Systems customer support staff is available 24 hours a day, seven days a week. If you need technical support, log on to the Hitachi Data Systems Portal for contact information:
A Note on Using Veritas Cluster Server

By issuing SCSI-3 Persistent Reserve commands to a Hitachi RAID storage system, Veritas Cluster Server (VCS) provides an I/O fencing function that can prevent data corruption if cluster communication stops. Each VCS node registers reserve keys with the storage system, which enables the nodes to share a disk to which the reserve key is registered.

Each VCS node registers its reserve key when importing a disk group. A node registers the identical reserve key for all paths of all disks (LUs) in the disk group. The reserve key contains a value that is unique to each disk group and a value that distinguishes nodes.

Key format: <node # + disk group-unique information>
Example: APGR0000, APGR0001, BPGR0000, and so on

When the Hitachi RAID storage system receives a request to register a reserve key, the reserve key and the port WWN of the node are recorded in the key registration table of each storage system port that receives the registration request. Up to 128 reserve keys can be registered per port. The storage system checks for duplicate registrations using the combination of node port WWN and reserve key, so accepting a request to register a duplicate reserve key does not increase the number of entries in the registration table.

Calculation formula for the number of used entries in the key registration table:

[number of nodes] x [number of port WWNs per node] x [number of disk groups]

When the number of registered reserve keys exceeds the upper limit of 128, key registration fails, as do operations such as adding an LU to the disk group. To avoid reserve key registration failures, keep the number of reserve keys below 128, for example by limiting the number of nodes, limiting the number of server ports with the LUN security function, or keeping the number of disk groups appropriately small.
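The entry-count formula above can be expressed directly. A small sketch for capacity planning; the function names are ours, while the 128-entry per-port limit comes from this appendix:

```python
MAX_ENTRIES_PER_PORT = 128  # per-port reserve key limit stated in this appendix

def used_entries(nodes: int, wwns_per_node: int, disk_groups: int) -> int:
    """Entries consumed in a port's key registration table:
    [number of nodes] x [port WWNs per node] x [number of disk groups]."""
    return nodes * wwns_per_node * disk_groups

def registration_fits(nodes: int, wwns_per_node: int, disk_groups: int) -> bool:
    """True if the configuration stays within the 128-entry limit."""
    return used_entries(nodes, wwns_per_node, disk_groups) <= MAX_ENTRIES_PER_PORT
```

For example, 2 nodes with 1 port WWN each and 3 disk groups use 6 entries (matching the tables in Figure A-1), while 4 nodes with 2 port WWNs each and 17 disk groups would need 136 entries and exceed the limit.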
Example: When adding an LU to increase disk capacity, do not add a new disk group; add the LU to an existing disk group.

Figure A-1 Adding Reserve Keys for LUs to Increase Disk Capacity (diagram: Node A with WWNa0 and WWNa1, and Node B with WWNb0 and WWNb1, connect through an FC switch; port 1A, with security list WWNa0 and WWNb0, presents LU0-LU2 in disk group 1; port 2A, with security list WWNa1 and WWNb1, presents LU4-LU6 in disk group 2 and LU4-LU5 in disk group 3)

Key registration table for Port-1A:

Entry  Reserve Key  WWN
0      APGR0001     WWNa0
1      APGR0002     WWNa0
2      APGR0003     WWNa0
3      BPGR0001     WWNb0
4      BPGR0002     WWNb0
5      BPGR0003     WWNb0

Key registration table for Port-2A:

Entry  Reserve Key  WWN
0      APGR0001     WWNa1
1      APGR0002     WWNa1
2      APGR0003     WWNa1
3      BPGR0001     WWNb1
4      BPGR0002     WWNb1
5      BPGR0003     WWNb1
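The duplicate-suppression behavior behind the tables above (an entry is identified by the reserve key plus port WWN, and re-registering the same pair consumes no new entry) can be modeled in a few lines. This is a sketch under our own naming, not the storage system's actual interface:

```python
MAX_ENTRIES = 128  # per-port key registration table limit

def register_key(table: set, reserve_key: str, port_wwn: str) -> bool:
    """Record (reserve_key, port_wwn) in one port's registration table.

    Returns True if a new entry was consumed, False for a duplicate
    (the storage system accepts the request but the table is unchanged).
    """
    entry = (reserve_key, port_wwn)
    if entry in table:
        return False  # duplicate registration: no entry consumed
    if len(table) >= MAX_ENTRIES:
        raise RuntimeError("key registration table full (128 entries)")
    table.add(entry)
    return True
```

Replaying Figure A-1's Port-1A registrations through this model uses exactly six entries, and repeating any of them leaves the count unchanged, which is why adding an LU to an existing disk group is preferable to adding a disk group.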
Acronyms and Abbreviations

AL        arbitrated loop
AL-PA     arbitrated loop physical address
blk       block
CVS       custom volume size
DM        Device Mapper
FC        fibre-channel
FCP       fibre-channel protocol
FX        Hitachi Cross-OS File Exchange
GB        gigabytes
Gbps      gigabits per second
GRUB      Grand Unified Bootloader
HBA       host bus adapter
HDLM      Hitachi Dynamic Link Manager
HUS VM    Hitachi Unified Storage VM
I/O       input/output
LU        logical unit
LUN       logical unit, logical unit number
LUSE      LUN Expansion
LVI       logical volume image
LVM       Logical Volume Manager
MB        megabytes
MPE       maximum number of physical extents
OFC       open fibre control
PA        physical address
PC        personal computer
PP        physical partition
RAID      redundant array of independent disks
RHEL      Red Hat Enterprise Linux
SCSI      small computer system interface
SIM       service information message
SNMP      simple network management protocol
TCO       total cost of ownership
TID       target ID
USP V/VM  Hitachi Universal Storage Platform V/VM
VLL       Virtual LVI/LUN
VSP       Hitachi Virtual Storage Platform
WWN       worldwide name
Hitachi Data Systems
Corporate Headquarters
750 Central Expressway
Santa Clara, California U.S.A.

Regional Contact Information
Americas
Europe, Middle East, and Africa +44 (0)
Asia Pacific

MK-96RD640-05