FUJITSU Storage ETERNUS DX Configuration Guide -Server Connection- (iSCSI) for HP-UX


Preface

This manual briefly explains the operations that need to be performed by the user in order to connect an ETERNUS DX to a server running HP-UX via an iSCSI interface. This manual should be used in conjunction with any other applicable user manuals, such as those for the ETERNUS DX, server, OS, LAN cards, and drivers.

Refer to "FUJITSU Storage ETERNUS DX Configuration Guide -Server Connection- Notations" for the notations used in this manual, such as product trademarks and product names. For the storage systems that are supported by the OS, refer to the Server Support Matrix of the ETERNUS DX.

15th Edition, June 2015

The Contents and Structure of this Manual

This manual is composed of the following eight chapters.

"Chapter 1 Workflow" (page 6)
This chapter describes how to connect the ETERNUS DX to a server.

"Chapter 2 Checking the Server Environment" (page 7)
This chapter describes which servers can be connected to ETERNUS DX storage systems.

"Chapter 3 Notes" (page 8)
This chapter describes issues that should be noted when connecting the ETERNUS DX storage systems and the server.

"Chapter 4 Checking the Server Information" (page 12)
This chapter describes how to check the server information registered in the ETERNUS DX.

"Chapter 5 Setting Up the ETERNUS DX" (page 13)
This chapter describes how to use ETERNUS Web GUI or ETERNUSmgr to set up the ETERNUS DX storage systems.

"Chapter 6 Installing the iSCSI Software Initiator and Setting Up the Server" (page 14)
This chapter describes how to install the iSCSI Software Initiator and set up the server.

"Chapter 7 Checking Connections" (page 21)
This chapter describes how to check the connection status between the server and the ETERNUS DX.

"Chapter 8 Setting the Multipaths" (page 23)
This chapter describes the PV-Links settings.

Table of Contents

Chapter 1 Workflow 6

Chapter 2 Checking the Server Environment 7
2.1 Hardware ... 7
2.2 OS (Operating System) ... 7
2.3 LAN Cards ... 7
2.4 Multipath Configuration ... 7

Chapter 3 Notes 8
3.1 ETERNUS DX Setup Notes ... 8
3.2 Notes Regarding the Number of LUNs ... 8
3.3 LAN Environment Notes ... 8
3.4 LVM Notes ... 11
3.5 Server Startup and Power Supply Control Notes ... 11
3.6 Jumbo Frame Setting Notes ... 11

Chapter 4 Checking the Server Information 12

Chapter 5 Setting Up the ETERNUS DX 13

Chapter 6 Installing the iSCSI Software Initiator and Setting Up the Server 14
6.1 Downloading and Installing the iSCSI Software Initiator ... 14
6.2 Setting Up the Server ... 15
6.2.1 Setting Up the Command Queue Depth ... 15
6.2.2 Maximum Number of Volume Group Settings ... 16
6.2.3 iSCSI Basic Settings ... 16
6.2.4 Server Settings when CHAP Authentication is Used ... 18
6.2.5 Server Settings when Bidirectional CHAP Authentication is Used ... 19

Chapter 7 Checking Connections 21
7.1 Turning on the Devices ... 21
7.2 Setting Up the Server to Recognize the Logical Units ... 21

Chapter 8 Setting the Multipaths 23
8.1 Multipath Configuration using PV-Links ... 23
8.2 Multipath Configuration using Native Multipath ... 24
8.2.1 Setting the I/O Load Balance Policy (load_bal_policy) ... 24
8.2.2 Enabling ALUA (alua_enabled) ... 25

Chapter 1 Workflow

This chapter describes how to connect the ETERNUS DX storage systems to a server. The workflow is shown below.

Workflow

Installing the iSCSI Software Initiator and Setting Up the Server
Set up the OS to connect to the ETERNUS DX.
- "Chapter 6 Installing the iSCSI Software Initiator and Setting Up the Server" (page 14)

Setting Up the Server to Recognize the Logical Units
Set up the server so that it can recognize the LUNs (logical unit numbers) of the ETERNUS DX.
- "7.2 Setting Up the Server to Recognize the Logical Units" (page 21)

Setting Up the Server as Required by the System Configuration
- Set PV-Links: "Chapter 8 Setting the Multipaths" (page 23)
- Set Logical Volume Manager (LVM): "3.4 LVM Notes" (page 11)

Chapter 2 Checking the Server Environment

Connection to servers is possible in the following environments. Check the "Server Support Matrix" for server environment conditions.

2.1 Hardware

Refer to the "Server Support Matrix".

2.2 OS (Operating System)

Refer to the "Server Support Matrix".

2.3 LAN Cards

Refer to the "Server Support Matrix".

2.4 Multipath Configuration

Refer to the "Server Support Matrix".

Chapter 3 Notes

Note the following issues when connecting the ETERNUS DX to a server.

3.1 ETERNUS DX Setup Notes

Be sure to set the subsystem parameters.

3.2 Notes Regarding the Number of LUNs

When using PV-Links, no more than 128 LUNs should be visible on any given path. If more than 128 LUNs are found on one path, unique Target IDs and Disk IDs cannot be assigned, and the environment will not behave as expected. This is because the PV-Links function assumes that devices on different paths with the same Target ID and Disk ID connect to the same LUN, and the maximum number of unique Target ID and Disk ID combinations for the device is 128.

3.3 LAN Environment Notes

The ETERNUS DX can be connected to a server via an existing LAN. However, existing users of that LAN may then see a decrease in LAN performance. Since an iSCSI LAN carries large amounts of data (traffic volumes), like an FC-SAN, the iSCSI LAN must have its own switch and be a dedicated LAN that is separate from the business LANs. iSCSI LAN redundancy is achieved by the use of multipaths. For IP network security, separate the iSCSI LAN from the management LAN (used for administration). The iSCSI LAN must be configured as a dedicated LAN for each path from a server to the ETERNUS DX.
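The 128-LUN-per-path limit described in 3.2 can be checked against a saved copy of the "ioscan" output. The following is a rough sketch only; the sample lines, the grep pattern, and the temporary file name are illustrative assumptions, not taken from a real system.

```shell
# Sketch: count the ETERNUS LUNs visible on one path in saved ioscan
# output and compare against the 128-LUN PV-Links limit.
# The sample lines below are illustrative, not real ioscan output.
cat > /tmp/ioscan_path.out <<'EOF'
255/0/0.0.0.0 disk FUJITSU ETERNUS_DXL
255/0/0.0.0.1 disk FUJITSU ETERNUS_DXL
255/0/0.0.0.2 disk FUJITSU ETERNUS_DXL
EOF

lun_count=$(grep -c 'ETERNUS' /tmp/ioscan_path.out)
if [ "$lun_count" -le 128 ]; then
    echo "OK: $lun_count LUNs on this path"
else
    echo "WARNING: $lun_count LUNs exceed the PV-Links limit of 128" >&2
fi
```

On a live system, the input would instead come from "ioscan" filtered to the path in question.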

Example of a LAN switch connection configuration

[Figure: three business servers (A, B, and C), each with LAN Card#1 on the business LAN and LAN Card#2 and LAN Card#3 on two dedicated iSCSI LAN switches (#1 and #2) that connect to the ETERNUS DX (LUN1, LUN2, LUN3). As with an FC-SAN, a dedicated LAN is used for iSCSI, not the business LAN. The two iSCSI LAN switches should not be inter-connected (*1). The management LAN uses a separate switch connected to the ETERNUS DX management LAN port. A separate LAN segment is used for each server/storage grouping (*2); here, LAN ports 1, 2, and 10 comprise VLAN1, while LAN ports 3 and 11 comprise VLAN2. The iSCSI LAN and management LAN segments are kept separate in the iSCSI LAN switches, helping maintain the security of each.]

*1: In this system configuration, multipaths provide redundant connections between the servers and the storage system. LAN switches #1 and #2 provide physical separation of the network paths.

*2: A separate LAN segment is provided in the LAN switch (using the switch's VLAN function) for each grouping of business servers and storage systems (equivalent to FC zones).

Example of a network address configuration

The following example shows a configuration in which multiple servers are connected to multiple CAs.

[Figure: Server A uses 192.168.10.1/24 and 192.168.20.1/24; Server B uses 192.168.30.1/24 and 192.168.40.1/24. The servers connect through LAN switch #1 and LAN switch #2 (VLANs are recommended) to the ETERNUS DX iSCSI ports P0/P1 on CM0 and CM1, which use 192.168.10.10/24, 192.168.20.10/24, 192.168.30.10/24, and 192.168.40.10/24. Each server-to-CA path is on its own /24 subnet.]

The following example shows a configuration in which a single server is connected to multiple CAs.

[Figure: the server uses 192.168.10.1/24 and 192.168.20.1/24 and connects through LAN switch #1 and LAN switch #2 to the ETERNUS DX iSCSI ports P0/P1 on CM0 and CM1, which use 192.168.10.10/24, 192.168.20.10/24, 192.168.10.11/24, and 192.168.20.11/24.]
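As a quick sanity check of the addressing scheme above (one dedicated subnet per path), the network portions of a server NIC address and the CA port it should reach can be compared. This is an illustrative sketch that assumes /24 masks, as in the examples; the two addresses are taken from the first figure.

```shell
# Sketch: confirm a server NIC and its ETERNUS iSCSI port share a /24
# subnet, matching the addressing examples above (assumes /24 masks).
nic_ip=192.168.10.1
ca_ip=192.168.10.10

net24() { echo "${1%.*}"; }   # drop the host octet of a dotted quad

if [ "$(net24 "$nic_ip")" = "$(net24 "$ca_ip")" ]; then
    echo "same /24 subnet: $(net24 "$nic_ip").0/24"
else
    echo "addresses are in different subnets" >&2
fi
```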

3.4 LVM Notes

A mirrored LVM configuration using ETERNUS DX storage systems' LUNs is not recommended.

The [Bad Block Relocation] attribute must be disabled on any LVM logical volume that resides on ETERNUS DX storage systems.

Example:
# lvchange -r N /dev/vg01/lvol_name

Execute this command for all logical volumes created on ETERNUS DX storage systems' LUNs to disable [Bad Block Relocation]. If [Bad Block Relocation] is not disabled, the file system may be damaged.

Raw device access to an LVM logical volume must use an I/O block size that is a multiple of 1024 bytes.

3.5 Server Startup and Power Supply Control Notes

Before turning the server on, check that the ETERNUS DX storage systems and LAN switches are all "Ready". If they are not "Ready" when the server is turned on, the server will not be able to recognize the ETERNUS DX storage systems. Also, when the ETERNUS DX power supply is controlled by a connected server, make sure that the ETERNUS DX does not shut down before the connected servers. Similarly, the LAN switches must be turned off only after the connected servers have been shut down. If they are turned off too early, data writes from the running server cannot be saved to the ETERNUS DX storage systems, and already saved data may also be affected.

3.6 Jumbo Frame Setting Notes

To enable Jumbo Frames, all the connected devices must support Jumbo Frames. Set the appropriate values for the various parameters (such as the MTU size) on each connected device. For details about how to set Jumbo Frames for a LAN card or LAN switch, refer to the OS and each device's manuals. Rebooting the server may be required to apply the new settings. The MTU size supported by the ETERNUS DX is 9,000 bytes.

Chapter 4 Checking the Server Information

Check the server information registered in the ETERNUS DX. The following server information is checked using the commands shown below.

iSCSI initiator name
Execute the following command:
# iscsiutil -l
The iSCSI initiator name is displayed in the "Initiator Name" column.

Server-side LAN card IP address for connecting to the ETERNUS DX
Execute the following command:
# ifconfig
Set this IP address if it is not already set.

Initiator CHAP Name
The Initiator CHAP Name is the user name for unidirectional CHAP authentication. This is not required if CHAP authentication is not used. Execute the following command:
# iscsiutil -pd
The Initiator CHAP Name is displayed in the "Initiator CHAP Name" column of the applicable target information.

CHAP Secret
The CHAP Secret is the password for unidirectional CHAP authentication. This is not required if CHAP authentication is not used. Execute the following command:
# iscsiutil -pd
The CHAP Secret is displayed in the "CHAP Secret" column of the applicable target information.

Chapter 5 Setting Up the ETERNUS DX

Set up the ETERNUS DX storage systems using ETERNUS Web GUI or ETERNUSmgr. The ETERNUS DX setup can be performed independently of the server setup. For details on how to perform these settings, refer to the following manuals:

- "FUJITSU Storage ETERNUS DX Configuration Guide -Server Connection- Disk Storage System Settings" that corresponds to the ETERNUS DX to be connected
- "ETERNUS Web GUI User's Guide" or "ETERNUSmgr User Guide"

Chapter 6 Installing the iSCSI Software Initiator and Setting Up the Server

6.1 Downloading and Installing the iSCSI Software Initiator

To download and install the iSCSI Software Initiator, use the following procedure:

1 Use a Web browser to access the web site at the following URL:
http://www.software.hp.com

2 Use the web site's Search function to search for "iSCSI Software Initiator". The search results table is displayed.

3 In the search results table, click the [Receive for Free >>] button.

4 Enter the required information on the screen and click the [Next >>] button.

5 In the "Download Software" field, click the [Download Directly >>] button. The "iSCSI Software Initiator" package is downloaded.

6 In the "Document" field, click the "Installation Instructions" link. Install the iSCSI Software Initiator according to the displayed installation procedure.

6.2 Setting Up the Server

The iSCSI connection settings depend on the CHAP authentication method. Perform the settings that match the CHAP authentication method to be used.

CHAP authentication method: None
Required settings:
- "6.2.1 Setting Up the Command Queue Depth" (page 15)
- "6.2.2 Maximum Number of Volume Group Settings" (page 16)
- "6.2.3 iSCSI Basic Settings" (page 16)

CHAP authentication method: Unidirectional CHAP authentication
Required settings:
- "6.2.1 Setting Up the Command Queue Depth" (page 15)
- "6.2.2 Maximum Number of Volume Group Settings" (page 16)
- "6.2.3 iSCSI Basic Settings" (page 16)
- "6.2.4 Server Settings when CHAP Authentication is Used" (page 18)

CHAP authentication method: Bidirectional CHAP authentication
Required settings:
- "6.2.1 Setting Up the Command Queue Depth" (page 15)
- "6.2.2 Maximum Number of Volume Group Settings" (page 16)
- "6.2.3 iSCSI Basic Settings" (page 16)
- "6.2.4 Server Settings when CHAP Authentication is Used" (page 18)
- "6.2.5 Server Settings when Bidirectional CHAP Authentication is Used" (page 19)

6.2.1 Setting Up the Command Queue Depth

Set the number of commands allowed to be queued (the command queue depth) on each LUN of the ETERNUS DX storage systems from the LAN card. This setting optimizes the connection between the ETERNUS DX storage systems and the server.

Model: ETERNUS DX
Command queue depth setting value: Arbitrary (*1) (up to 512 for each iSCSI port of the ETERNUS DX)

*1: Recommended value = 512 ÷ (number of iSCSI ports that are connected to a single CA port × number of LUNs), with the result rounded down. Use the value "8" if the actual result is lower.

For HP-UX11iv3

Set the command queue depth using the disk device attribute "max_q_depth". The "max_q_depth" attribute is set using the "scsimgr" command.

Example: Setting the command queue depth of disk20 to 8
# scsimgr save_attr -D /dev/rdisk/disk20 -a max_q_depth=8

Refer to the HP-UX manuals for details of the "scsimgr" command.
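The recommended-value formula in (*1) can be sketched as a small calculation. The port and LUN counts below are example placeholders, not values from the manual.

```shell
# Sketch: recommended max_q_depth per the formula above:
# 512 / (iSCSI ports connected to one CA port * number of LUNs),
# rounded down, with a floor of 8. The counts are example placeholders.
ports_per_ca=2
lun_count=16

depth=$((512 / (ports_per_ca * lun_count)))   # integer division rounds down
if [ "$depth" -lt 8 ]; then
    depth=8                                   # enforce the minimum of 8
fi
echo "recommended max_q_depth: $depth"
```

With 2 ports and 16 LUNs this yields 16; with more LUNs the result shrinks until the floor of 8 applies.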

For HP-UX11iv2

Set the command queue depth using the kernel parameter "scsi_max_qdepth", which is set using the System Administration Manager (SAM). When the command queue depth is set using SAM, all SCSI devices are affected. To change the command queue depth of a particular SCSI device, use the "scsictl" command. However, because settings made with the "scsictl" command become invalid when the server is shut down, the setting must be repeated every time the server is started. For details about the "scsictl" command, refer to the HP-UX manuals.

6.2.2 Maximum Number of Volume Group Settings

For HP-UX11iv3, this setting is not required.

Set the kernel parameter "maxvgs", which limits the number of volume groups. The default value is 10. If 11 or more volume groups are to be created, "maxvgs" needs to be changed. The kernel parameter "maxvgs" is set using the System Administration Manager (SAM). One volume group is created for the OS installation area, so if "maxvgs" is left at the default of 10, up to 9 new volume groups can be created.

6.2.3 iSCSI Basic Settings

The following items are required for the iSCSI connection and must be set:

- iSCSI Initiator name
- iSCSI connection target information
- Authentication method

1 Set the shell environment variable PATH.
Add the location of the iSCSI Software Initiator executable files to the PATH environment variable.
# PATH=$PATH:/opt/iscsi/bin

2 Set the iSCSI Initiator name.
This setting is not required if the default iSCSI Initiator name is to be used. Either the iSCSI Qualified Name (iqn) or IEEE EUI-64 (eui) format may be used. To set the iSCSI Initiator name, execute the "iscsiutil" command as follows:
# iscsiutil [iscsi-device-file] -i -N <initiator-name>

3 Confirm the iSCSI Initiator name.
Use the "iscsiutil" command to check the iSCSI Initiator name. The iSCSI Initiator name shown here is used for the ETERNUS DX settings.
# iscsiutil -l

4 Save the statically-recognized iSCSI target information in the kernel registry.
Use the "iscsiutil" command to save the iSCSI target information.
# iscsiutil [iscsi-device-file] -a -I <ip-address> [-P <tcp-port>] [-m <portal-grp-tag>]
Example: For an ETERNUS DX with a connection target iSCSI port IP address of 192.1.1.120:
# iscsiutil -a -I 192.1.1.120

5 Confirm that the iSCSI target information is now in the kernel registry.
# iscsiutil -p -D

6 Set whether CHAP authentication is to be used.
If CHAP authentication is to be performed, set "CHAP". If CHAP authentication is not required, set "None".
Example: When CHAP authentication is not used:
# iscsiutil -t authmethod None

Note:
# iscsiutil -t authmethod CHAP None
The above command sets CHAP authentication to operate according to the ETERNUS DX setting: the CHAP authentication method is selected if the ETERNUS DX uses CHAP for its responses; otherwise CHAP authentication is not performed.

6.2.4 Server Settings when CHAP Authentication is Used

To use CHAP authentication, the following items must be set for the iSCSI ports of the ETERNUS DX and for the iSCSI Initiators. When setting the iSCSI port parameters for the ETERNUS DX, specify the "-I" option and enter the IP address. When setting the iSCSI Initiator parameters, do not specify the "-I" option.

- CHAP Method
- Initiator CHAP Name
- CHAP Secret

1 Set the CHAP Method.
The CHAP authentication method (unidirectional or bidirectional) must be set. Set the CHAP Method value indicated below:
- Unidirectional CHAP authentication: CHAP_UNI
- Bidirectional CHAP authentication: CHAP_BI
Use the "iscsiutil" command to set the CHAP Method.
# iscsiutil [iscsi-device-file] -u -H <chap-authentication-type> [-T <target-name> [-I <ip-address>] [-P <tcp-port>]]
Example: To set unidirectional CHAP authentication for the ETERNUS DX whose iSCSI port IP address is 192.1.1.120:
# iscsiutil -u -H CHAP_UNI -I 192.1.1.120

2 Set the Initiator CHAP Name.
An Initiator CHAP Name must be set for each target set in step 4 of "6.2.3 iSCSI Basic Settings" (page 16). The Initiator CHAP Name set here must match the Initiator CHAP Name that is to be set on the target. Use the "iscsiutil" command to set the Initiator CHAP Name.
# iscsiutil [iscsi-device-file] -u -N <chap-initiator-name> [-T <target-name> [-I <ip-address>] [-P <tcp-port>] [-M <portal-grp-tag>]]
Example: To set "mychapname" as the CHAP name for a specific target (192.1.1.120):
# iscsiutil -u -N mychapname -I 192.1.1.120

3 Set the CHAP Secret.
A CHAP Secret must be set for each target set in step 4 of "6.2.3 iSCSI Basic Settings" (page 16). The CHAP Secret set here must match the CHAP Secret that is to be set on the target. Use the "iscsiutil" command to set the CHAP Secret.
# iscsiutil [iscsi-device-file] -u -W <chap-initiator-secret> [-T <target-name> [-I <ip-address>] [-P <tcp-port>] [-M <portal-grp-tag>]]
Example: To set "mychapsecret" as the CHAP Secret for a specific target (192.1.1.120):
# iscsiutil -u -W mychapsecret -I 192.1.1.120

6.2.5 Server Settings when Bidirectional CHAP Authentication is Used

The following items are required for bidirectional CHAP authentication and must be set:

- NAS Hostname
- NAS Secret
- RADIUS Server Hostname
- iradd activation

1 Set the parameters relating to the NAS (Network Access Server) and the RADIUS server.
Use the "iscsiutil" command to set the NAS Hostname, NAS Secret, and RADIUS Server Hostname:
# iscsiutil [iscsi-device-file] -u -R <nas-hostname> <nas-secret> <radius-server-hostname>

Description of the NAS and RADIUS parameters:
<nas-hostname>: Specify the IP address or hostname of the Network Access Server (NAS). This is the server that is to connect to the ETERNUS DX via the iSCSI interface. The specified IP address or hostname must be for a LAN card that is able to communicate with the RADIUS server.
<nas-secret>: Set the NAS secret already set in the RADIUS server.
<radius-server-hostname>: Specify the IP address or hostname of the RADIUS server.

2 Activate the daemon (iradd) that communicates with the RADIUS server.
# iradd

The iradd daemon will automatically restart every time the system is started.

Chapter 7 Checking Connections

7.1 Turning on the Devices

To turn on the connected devices, use the following procedure:

1 Turn on any devices (such as LAN switches) on the LAN path between the ETERNUS DX and the server.

2 After checking that the Ready (or equivalent) LED of each LAN switch is lit, turn on the ETERNUS DX storage systems.

3 After checking that the ETERNUS DX's Ready LED is lit, turn on the server.

7.2 Setting Up the Server to Recognize the Logical Units

Make the server recognize the logical units (LUNs) set in the ETERNUS DX storage systems.

1 Use the "ioscan" command to check which devices are connected.
Enter the following for LegacyView (old format) results:
# ioscan -fn
Enter the following for AgileView (HP-UX11iv3 and newer format) results:
# ioscan -fnN

Check that all connected devices are listed in the ioscan output and that "CLAIMED" is displayed in "S/W State". The following example shows the output of ioscan (LegacyView) when ETERNUS DX storage systems are connected to a server.

# ioscan
H/W Path        Class    Description
============================================================
:
255/0           iscsi    iscsi Virtual Node
255/0/0.0       ext_bus  iscsi-scsi Protocol Interface
255/0/0.0.0     target
255/0/0.0.0.0   disk     FUJITSU ETERNUS_DXL
255/0/0.0.0.1   disk     FUJITSU ETERNUS_DXL
255/0/0.0.0.2   disk     FUJITSU ETERNUS_DXL
255/0/0.0.0.3   disk     FUJITSU ETERNUS_DXL
255/0/0.0.0.4   disk     FUJITSU ETERNUS_DXL
:
#

When the ETERNUS DX storage systems are connected for the first time, the device file (for example, /dev/dsk/c4t0d0) sometimes may not be created. If this happens, use the "insf" command to create the device file. Specify the hardware path name as follows, and execute:

# insf -H hw_path_to_device -e
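The "CLAIMED" check above can also be scripted against a saved copy of the ioscan output. This is a rough sketch only: the sample lines, the column positions assumed by the awk filter, and the file name are illustrative assumptions, since real "ioscan -fn" output has a different column layout.

```shell
# Sketch: flag ETERNUS disks whose S/W State is not CLAIMED in saved
# ioscan output. Sample lines and column layout are illustrative.
cat > /tmp/ioscan_full.out <<'EOF'
disk  4  255/0/0.0.0.0  sdisk  CLAIMED  DEVICE  FUJITSU ETERNUS_DXL
disk  5  255/0/0.0.0.1  sdisk  NO_HW    DEVICE  FUJITSU ETERNUS_DXL
EOF

# Print the hardware path (field 3) of any ETERNUS line not CLAIMED.
bad_paths=$(awk '/ETERNUS/ && $5 != "CLAIMED" { print $3 }' /tmp/ioscan_full.out)
echo "paths needing attention: ${bad_paths:-none}"
```

Any path reported here would be a candidate for re-checking cabling, switch state, and the "insf" step above.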

Chapter 8 Setting the Multipaths

Two types of multipath configurations are available for the connection between the ETERNUS DX and an HP-UX server: "PV-Links" and "Native Multipath". This chapter explains how to set up "PV-Links" or "Native Multipath".

8.1 Multipath Configuration using PV-Links

"PV-Links" is a Logical Volume Manager (LVM) function that is available for all HP-UX versions. LVM for HP-UX supplies PV-Links as a standard function for configuring paths in a redundant configuration (path failover function). PV-Links is set up by including multiple paths that connect to one physical volume in one volume group. Only one of the paths is set as the Active path; the other paths are made Standby paths. However, the load on the system may be better balanced by spreading server accesses over multiple Active paths.

When using PV-Links, note the following:

Confirm that the PV-Links paths have identical target IDs. When set correctly, the device files for all paths that connect to a given ETERNUS DX LUN are identical after the "t" part of the name (e.g. "/dev/dsk/c4t0d0" -> "0d0").

For HP-UX11iv3, set the "leg_mpath_enable" parameter to "false" and use a Legacy DSF to specify the "pv_path" parameter when creating the volume group.

Reference: device special file types
- Legacy DSF: /dev/dsk/c4t0d0, /dev/rdsk/c4t0d0
- Persistent DSF: /dev/disk/disk25, /dev/rdisk/disk25

Example: Setting "leg_mpath_enable" to "false":
# scsimgr save_attr -D /dev/rdisk/disk25 -a leg_mpath_enable=false

Using a Legacy DSF to specify the "pv_path" parameter when creating the volume group:
# vgcreate vg01 /dev/dsk/c4t0d0 /dev/dsk/c6t0d0
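The device-file naming rule above (the same part after "t" on every path to a LUN) can be checked mechanically. The following is a minimal sketch; the two legacy DSF names are the illustrative ones from the vgcreate example, not from a real system.

```shell
# Sketch: verify that two legacy DSFs intended as PV-Links alternate
# paths share the same part of the name after "t" (target/disk IDs).
path1=/dev/dsk/c4t0d0
path2=/dev/dsk/c6t0d0

suffix1=${path1#*t}   # "/dev/dsk/c4t0d0" -> "0d0"
suffix2=${path2#*t}

if [ "$suffix1" = "$suffix2" ]; then
    echo "paths agree after the t part: t$suffix1"
else
    echo "paths differ: $path1 vs $path2" >&2
fi
```

A mismatch here would indicate that the two device files do not address the same LUN and must not be grouped as PV-Links alternates.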

8.2 Multipath Configuration using Native Multipath

"Native Multipath" is an OS function available from HP-UX11iv3 onwards. Native Multipath can be used to set the I/O load balance policy (load_bal_policy) and to enable ALUA (alua_enabled). Perform these settings to match the ETERNUS DX model and whether the OS supports ALUA.

8.2.1 Setting the I/O Load Balance Policy (load_bal_policy)

Set the I/O load balance policy to match the ETERNUS DX model being used, as shown below. The policy setting is required for all LUNs in the ETERNUS DX.

I/O load balance policy setting vs. ETERNUS DX model:
- round_robin: DX60 S3/DX100 S3/DX200 S3, DX500 S3/DX600 S3, DX8700 S3/DX8900 S3, DX200F, DX80 S2/DX90 S2, DX400/DX400 S2 series, DX8000/DX8000 S2 series
- preferred_path: DX60/DX60 S2, DX80

Example procedure for setting the I/O load balance policy:

Setting the I/O load balance policy to "round_robin":
# scsimgr save_attr -D /dev/rdisk/disk25 -a load_bal_policy=round_robin

Execute this "load_bal_policy" setting command for all LUNs in the ETERNUS DX.

If the I/O load balance policy is "preferred_path", a priority path must also be set. Set the path that is to be used when the I/O load is balanced as the priority path.

Example procedure for setting the priority path:

Specify the hardware path of the "Lunpath" that is to be the priority path.
# scsimgr save_attr -D /dev/rdisk/disk25 -a preferred_path=64000/0x0/0x0.0x0.0x4000000000000000

Lunpath hardware paths can be checked using the "ioscan -m lun" and "scsimgr lun_map" commands.

8.2.2 Enabling ALUA (alua_enabled)

ALUA is supported by HP-UX11iv3 Sep 2007 version or later. If an OS that supports ALUA is used, check the "alua_enabled" setting and set it to "true" if it is not.

1 Check whether the OS supports ALUA.
To check whether the OS supports ALUA, use the "scsimgr" command and check for the existence of the "alua_enabled" attribute.
Example for an OS that supports ALUA:
# scsimgr get_attr | grep alua_enabled
name = alua_enabled
If the command displays nothing, the OS does not support ALUA. In this case, the operations described below are not required.

2 Check the "alua_enabled" setting.
Use the "scsimgr" command to check the "alua_enabled" attribute for each LUN. Execute the command for all LUNs in the ETERNUS DX.
Example for checking the "alua_enabled" setting:
# scsimgr get_attr -D /dev/rdisk/disk25 -a alua_enabled

SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk25

name = alua_enabled
current = true
default = true
saved = true

If "current" and "saved" are both "true", ALUA is already enabled. In this case, the operation described below is not required.

3 Set "alua_enabled".
Use the "scsimgr" command to set the "alua_enabled" attribute to "true" for each LUN.
Example for setting "alua_enabled":
# scsimgr save_attr -D /dev/rdisk/disk25 -a alua_enabled=true
Value of attribute alua_enabled saved successfully

FUJITSU Storage ETERNUS DX Configuration Guide -Server Connection- (iSCSI) for HP-UX

Date of issuance: June 2015
Issuance responsibility: FUJITSU LIMITED

- The content of this manual is subject to change without notice.
- This manual was prepared with the utmost attention to detail. However, Fujitsu shall assume no responsibility for any operational problems as the result of errors, omissions, or the use of information in this manual.
- Fujitsu assumes no liability for damages to third-party copyrights or other rights arising from the use of any information in this manual.
- The content of this manual may not be reproduced or distributed in part or in its entirety without prior permission from Fujitsu.