HP Integrity NonStop NS16000 Series Planning Guide HP Part Number: 529567-023 Published: February 2012 Edition: H06.11 and subsequent H-series RVUs
Copyright 2012 Hewlett-Packard Development Company, L.P.

Legal Notice

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Export of the information contained in this publication may require authorization from the U.S. Department of Commerce.

Microsoft, Windows, and Windows NT are U.S. registered trademarks of Microsoft Corporation. Intel, Pentium, and Celeron are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Java is a registered trademark of Oracle and/or its affiliates. Motif, OSF/1, UNIX, X/Open, and the "X" device are registered trademarks, and IT DialTone and The Open Group are trademarks of The Open Group in the U.S. and other countries. Open Software Foundation, OSF, the OSF logo, OSF/1, OSF/Motif, and Motif are trademarks of the Open Software Foundation, Inc. OSF MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THE OSF MATERIAL PROVIDED HEREIN, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. OSF shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.

1990, 1991, 1992, 1993 Open Software Foundation, Inc. The OSF documentation and the OSF software to which it relates are derived in part from materials supplied by the following: 1987, 1988, 1989 Carnegie-Mellon University. 1989, 1990, 1991 Digital Equipment Corporation. 1985, 1988, 1989, 1990 Encore Computer Corporation. 1988 Free Software Foundation, Inc. 1987, 1988, 1989, 1990, 1991 Hewlett-Packard Company. 1985, 1987, 1988, 1989, 1990, 1991, 1992 International Business Machines Corporation. 1988, 1989 Massachusetts Institute of Technology. 1988, 1989, 1990 Mentat Inc. 1988 Microsoft Corporation. 1987, 1988, 1989, 1990, 1991, 1992 SecureWare, Inc. 1990, 1991 Siemens Nixdorf Informationssysteme AG. 1986, 1989, 1996, 1997 Sun Microsystems, Inc. 1989, 1990, 1991 Transarc Corporation.

OSF software and documentation are based in part on the Fourth Berkeley Software Distribution under license from The Regents of the University of California. OSF acknowledges the following individuals and institutions for their role in its development: Kenneth C.R.C. Arnold, Gregory S. Couch, Conrad C. Huang, Ed James, Symmetric Computer Systems, Robert Elz. 1980, 1981, 1982, 1983, 1985, 1986, 1987, 1988, 1989 Regents of the University of California.
Contents About This Document...7 Supported Release Version Updates (RVUs)...7 Intended Audience...7 New and Changed Information...7 New and Changed Information for 529567 023...7 New and Changed Information for 529567 022...8 New and Changed Information for 529567 021...10 New and Changed Information for 529567 020...11 New and Changed Information for 529567 019...13 New and Changed Information for 529567 018...13 New and Changed Information for 529567 017...14 New and Changed Information for 529567 016...15 New and Changed Information for 529567 015...15 Document Organization...15 Notation Conventions...16 General Syntax Notation...16 Publishing History...18 HP Encourages Your Comments...18 1 NonStop NS16000 Series System Overview...19 NonStop NS16000 Series System Architecture...20 Coexistence of Processor Types...20 NonStop NS16000 Series Hardware...20 Preparation for Other Server Hardware...22 Component Location and Identification...22 Terminology...23 Rack and Offset Physical Location...24 NonStop Blade Element Group-Module-Slot Numbering...24 LSU Group-Module-Slot Numbering...25 Processor Switch Group-Module-Slot Numbering...26 CLIM Connection Group-Module-Slot-Port Numbering...27 IOAM Group-Module-Slot Numbering...27 Fibre Channel Disk Module Group-Module-Slot Numbering...28 NonStop S-Series I/O Enclosure Group Numbers...29 System Installation Document Packet...30 Technical Document for the Factory-Installed Hardware Configuration...31 Configuration Forms for CLIMs and ServerNet Adapters...31 ServerNet Cluster Configuration Form...31 BladeCluster Solution Installation and Migration Tasks...31 2 Site Preparation Guidelines...32 Modular Cabinet Power and I/O Cable Entry...32 Emergency Power-Off Switches...32 EPO Requirement for Integrity NonStop NS16000 Series Servers...32 EPO Requirement for HP 5000 UPS...32 EPO Requirement for HP 5500 XR UPS...32 EPO Requirement for NonStop S-Series I/O Enclosure...32 Electrical Power and Grounding Quality...33 Power Quality...33 Grounding Systems...33 Power Consumption...33 Uninterruptible Power Supply (UPS)...33 Contents 3
Cooling and Humidity Control...34 Weight...35 Flooring...35 Dust and Pollution Control...35 Zinc Particulates...35 Space for Receiving and Unpacking...36 Operational Space...36 3 System Installation Specifications...37 Modular Cabinets...37 Monitored Single-Phase PDUs...37 AC Power Feeds, Monitored Single-Phase PDUs...38 Input and Output Power Characteristics, Monitored Single-Phase PDUs...41 Branch Circuits and Circuit Breakers, Monitored Single-Phase PDUs...42 Monitored Three-Phase PDUs...42 AC Power Feeds, Monitored Three-Phase PDUs...43 Input and Output Power Characteristics, Monitored Three-Phase PDUs...46 Branch Circuits and Circuit Breakers, Monitored Three-Phase PDUs...47 Modular Three-Phase PDUs...47 AC Power Feeds, Modular Three-Phase PDUs...48 Input and Output Power Characteristics, Modular Three-Phase PDUs...52 Branch Circuits and Circuit Breakers, Modular Three-Phase PDUs...53 Circuit Breaker Ratings for UPS...54 PDU Strapping Configurations...54 Grounding...54 Enclosure AC Input...54 Enclosure Power Loads...55 Dimensions and Weights...56 Plan View From Above the Modular Cabinet...57 Service Clearances for the Modular Cabinet...57 Unit Sizes...57 Modular Cabinet Physical Specifications...58 Enclosure Dimensions...58 Modular Cabinet and Enclosure Weights With Worksheet...59 Modular Cabinet Stability...60 Environmental Specifications...60 Calculating Specifications for Enclosure Combinations...62 4 System Configuration Guidelines...66 Internal ServerNet Interconnect Cabling...66 Cable Labeling...66 Cable Management System...67 Internal Interconnect Cables...67 Dedicated Service LAN Cables...68 Cable Length Restrictions...69 Internal Cable Product IDs...70 NonStop Blade Elements to LSUs...70 NonStop Blade Element to NonStop Blade Element...70 LSUs to Processor Switches and Processor IDs...70 Processor Switch ServerNet Connections...75 Processor Switches to Networking CLIMs...76 Processor Switches to Storage CLIMs...77 Processor Switches to IOAM Enclosures...78 FCSA to Fibre Channel Disk Modules...79 FCSA to Tape Devices...79 Storage CLIM Devices...80 4 Contents
Factory-Default Disk Volume Locations for SAS Disk Devices...82 SAS Ports to SAS Disk Enclosures...82 SAS Ports to SAS Tape Devices...82 Configuration Restrictions for Storage CLIMs...82 Configurations for Storage CLIMs and SAS Disk Enclosures...83 P-Switch to NonStop S-Series I/O Enclosure Cabling...90 IOAM Enclosure and Disk Storage Considerations...92 Fibre Channel Devices...92 Factory-Default Disk Volume Locations...94 Configurations for Fibre Channel Devices...94 Configuration Restrictions for Fibre Channel Devices...95 Configuration Recommendations for Fibre Channel Devices...95 Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module...96 Ethernet to Networks...103 IP CLIM Ethernet Interfaces...103 Telco CLIM Ethernet Interfaces...104 Gigabit Ethernet 4-Port ServerNet Adapter (G4SA) Ethernet Ports...104 Default Naming Conventions...105 5 Modular System Hardware...107 NonStop Blade Element...107 Front Panel Buttons...109 Front Panel Indicator LEDs...109 Logical Synchronization Unit (LSU)...110 LSU Indicator LEDs...111 Processor Switch...112 P-Switch Indicator LEDs...114 Processor Numbering...114 CLuster I/O Modules (CLIMs)...115 IP CLuster I/O Module (CLIM)...116 Telco CLuster I/O Module (CLIM)...119 IB CLuster I/O Module (CLIM) (Optional)...121 CLIM Cable Management Ethernet Patch Panel...122 Storage CLuster I/O Module (CLIM)...123 Serially Attached SCSI (SAS) Disk Enclosure...124 I/O Adapter Module (IOAM) Enclosure and I/O Adapters...125 IOAM Enclosure...126 IOAM Enclosure Indicator LEDs...127 Fibre Channel ServerNet Adapter (FCSA)...128 Gigabit Ethernet 4-Port ServerNet Adapter...129 Fibre Channel Disk Module...130 Tape Drive and Interface Hardware...130 Maintenance Switch (Ethernet)...130 UPS and ERM (Optional)...131 System Console...132 Enterprise Storage System...133 NonStop S-Series I/O Enclosure...134 6 Hardware Configurations...135 Minimum and Maximum Hardware Configuration...135 Enclosure Locations in Cabinets...135 7 Maintenance and Support Connectivity...136 Dedicated Service LAN...136 Basic LAN Configuration...137 Fault-Tolerant Configuration...138 Contents 5
DHCP, TFTP, and DNS Windows-Based Services...139 IP Addresses...140 Ethernet Cables...142 SWAN Concentrator Restriction...142 Dedicated Service LAN Links Using IP CLIMs...142 Dedicated Service LAN Links Using G4SAs...142 Dedicated Service LAN Links With One IOAM Enclosure...143 Dedicated Service LAN Links to Two IOAM Enclosures...144 Dedicated Service LAN Links With IOAM Enclosure and NonStop S-Series I/O Enclosure...145 Dedicated Service LAN Links With NonStop S-Series I/O Enclosure...145 Initial Configuration for a Dedicated Service LAN...146 Additional Configuration for OSM...147 System Console...147 A Cables...150 Cable Types, Connectors, Lengths, and Product IDs...150 M8201R to Tape Device Cables...154 Cable Management System...154 B Operations and Management Using OSM Applications...155 Using OSM for Down-System Support...156 AC Power Monitoring...156 OSM Power Fail Support...156 Considerations for Ride-Through Time Configuration...157 Considerations for Site UPS Configurations...158 AC Power-Fail States...158 C Default Startup Characteristics...159 D NonStop S-Series Systems: Connecting to or Migrating From...161 Connecting to NonStop S-Series I/O Enclosures...161 IOMF 2 CRU...162 NonStop S-Series Disk Drives and ServerNet Adapters...162 Migrating From a NonStop S-Series Systems to a NonStop NS16000 Series Systems...162 Migrating Applications...162 Migration Considerations...163 Migrating Hardware Products to Integrity NonStop NS16000 Series Servers...163 Moving IOAM Enclosures to NonStop NS16000 Series Servers...163 Reusing NonStop S-Series I/O Enclosures and Processor Enclosures...163 Other Manuals Containing Software Migration Information...163 Index...165 6 Contents
About This Document

This guide describes HP Integrity NonStop NS16000 series servers and provides examples of system configurations to assist you in planning for installation of a new system. The NonStop NS16000 series of servers consists of the NonStop NS16000 server and the NonStop NS16200 server.

Supported Release Version Updates (RVUs)

This manual supports H06.11 and all subsequent H-series RVUs until otherwise indicated in a replacement publication.

NOTE: Integrity NonStop NS-series, Integrity NonStop NS16000 series, and NonStop S-series refer to hardware systems. H-series and G-series refer to release version updates (RVUs).

Intended Audience

This guide is intended for those responsible for planning the installation, configuration, and maintenance of the server and the software environment at a particular site. Appropriate personnel must have completed HP training courses on system support for Integrity NonStop NS-series servers.

New and Changed Information

New and Changed Information for 529567 023

Changed R5500 XR UPS to UPS to also include R5000 UPS in Modular Cabinet Power and I/O Cable Entry (page 32). Changed R5500 XR UPS to R5000 UPS in EPO Requirement for Integrity NonStop NS16000 Series Servers (page 32). Added EPO Requirement for HP 5000 UPS (page 32). Removed URL in EPO Requirement for HP 5500 XR UPS (page 32). Changed R5500 XR UPS to R5000 UPS and added R5000 UPS and R5000 ERM in Uninterruptible Power Supply (UPS) (page 33) and UPS and ERM (Optional) (page 131). Changed R5500 XR UPS to R5000 UPS in Modular Cabinets (page 37), Top AC Power Feed When Optional UPS and ERM are Installed (page 39), Bottom AC Power Feed When Optional UPS and ERM are Installed (page 40), Branch Circuits and Circuit Breakers, Monitored Single-Phase PDUs (page 42), Branch Circuits and Circuit Breakers, Modular Three-Phase PDUs (page 53), Enclosure Power Loads (page 55), Unit Sizes (page 57), Calculating Specifications for Enclosure Combinations (page 62), AC Power Monitoring (page 156), and AC Power-Fail States (page 158). Removed R5500 XR UPS designation from illustrations to cover both R5500 XR UPS and R5000 UPS in Figure 4 (page 40), Figure 5 (page 41), Figure 8 (page 45), and Figure 9 (page 46). Also added references to UPS and ERM (Optional) in text above those figures and above Figure 12 (page 51) and Figure 13 (page 52). Added R5000 UPS in Circuit Breaker Ratings for UPS (page 54), Enclosure Dimensions (page 58), and Modular Cabinet and Enclosure Weights With Worksheet (page 59). Added R5000 ERM in Unit Sizes (page 57), Enclosure Dimensions (page 58), and Modular Cabinet and Enclosure Weights With Worksheet (page 59). Added cable lengths between p-switch and CLIM and made corrections in Cable Length Restrictions (page 69) and Cables (page 150).
Changed QMS Tech Doc to Technical Document shipped with the system in CLIM Cable Management Ethernet Patch Panel (page 122). Restricted coexistence of DL385 G5 Storage CLIMs and DL380 G6 Storage CLIMs in Storage CLuster I/O Module (CLIM) (page 123). Changed RVU requirement to H06.24 or later to support SSDs in D2700 disk enclosures in Serially Attached SCSI (SAS) Disk Enclosure (page 124). New and Changed Information for 529567 022 With H06.23 and later RVUs, NonStop NS16000 series systems support IB CLIMs. Networking CLIMs include IP, Telco, and IB CLIMs. With H06.23 and later RVUs, NonStop NS16000 series systems support Solid State Drives (SSDs) contained in D2700 disk enclosures connected to DL380 G6 Storage CLIMs. These changes have been made. All changes are marked with change bars. Changed category 5 cable (CAT 5) to Category 6 cable (CAT 6) throughout the document and indicated that Category 5e cable (CAT 5e) is also acceptable. Changed Table 1 (page 19) in NonStop NS16000 Series System Overview (page 19). Changed HP StorageWorks to HP Storage and added IB CLIM and SSD to NonStop NS16000 Series Hardware (page 20). Added IB CLIM as type of networking CLIM in Processor Switch Group-Module-Slot Numbering (page 26) and CLIM Connection Group-Module-Slot-Port Numbering (page 27). Changed location of extension bars connected to modular UPS or R5500 XR UPS to rear left side of cabinet in Modular Cabinet Power and I/O Cable Entry (page 32). Also changed remaining PDU location to rear right side. Added cross-references and changed location of extension bars connected to modular UPS or R5500 XR UPS to rear left side of cabinet in Uninterruptible Power Supply (UPS) (page 33). Changed location of extension bars connected to modular UPS or R5500 XR UPS to rear left side of cabinet and changed remaining PDU location to rear right side in Modular Cabinets (page 37), Top AC Power Feed When Optional UPS and ERM are Installed (page 39), Figure 4 (page 40), Bottom AC Power Feed When Optional UPS and ERM are Installed (page 40), Figure 5 (page 41), Branch Circuits and Circuit Breakers, Monitored Single-Phase PDUs (page 42), Top AC Power Feed When Optional UPS and ERM are Installed (page 44), Figure 8 (page 45), Top AC Power Feed When Optional UPS and ERM are Installed (page 50), and Figure 9 (page 46). Removed R5500 XR UPS designation from Branch Circuits and Circuit Breakers, Monitored Three-Phase PDUs (page 47). Added IB CLIM and SSD to Enclosure Power Loads (page 55). Changed DL385 G2 or G5 CLIM height to 3.4 inches (8.6 cm) in Enclosure Dimensions (page 58). Added SSD to Modular Cabinet and Enclosure Weights With Worksheet (page 59). Added IB CLIM (as networking CLIM) and SSD to Heat Dissipation Specifications and Worksheet (page 60). In System Configuration Guidelines (page 66) changed second paragraph, removed example configuration, and moved section Enclosure Locations in Cabinets to Hardware Configurations (page 135). Changed IP or Telco CLIM to networking CLIM to add IB CLIM in Cable Length Restrictions (page 69) and Processor Switch ServerNet Connections (page 75). 8
Added IB CLIM by changing IP or Telco CLIM to networking CLIM in Processor Switches to Networking CLIMs (page 76). Added notes to illustrations in Storage CLIM Devices (page 80). Changed SAS Ports to SAS Disk Enclosures (page 82). Changed Configuration Restrictions for Storage CLIMs (page 82). Added IB CLIM and corrected Storage CLIM naming convention in Default Naming Conventions (page 105). Changed IP CLIM to CLIM in Processor Switch (page 112). Changed PID to Name on Label and added IB CLIM in CLuster I/O Modules (CLIMs) (page 115) and Table 5 (page 116). Added IB CLuster I/O Module (CLIM) (Optional) (page 121). Added coexistence of DL385 G5 Storage CLIMs and DL380 G6 Storage CLIMs, note about maximum SAS disk enclosures, and notes in tables in Storage CLuster I/O Module (CLIM) (page 123). Added footnote to table, changed HP StorageWorks to HP Storage, and for D2700 SAS disk enclosure added SSDs and disk partitioning feature to Serially Attached SCSI (SAS) Disk Enclosure (page 124). In Hardware Configurations (page 135): Changed chapter title. Removed initial paragraph and note. Changed title of Minimum and Maximum Hardware Configuration (page 135), removed typical values, and grouped CLIMs with a cross-reference. Removed examples of configurations. Moved Enclosure Locations in Cabinets (page 135) into chapter, removed table of enclosure location rules, and pointed to the Technical Document shipped with the system or components. Deleted IP from IP CLIM to include all types of CLIMs in Maintenance and Support Connectivity (page 136), Basic LAN Configuration (page 137), Fault-Tolerant Configuration (page 138), and One System Console Managing Multiple Systems (page 148). Changed table row to Maintenance switch (ProCurve) (Additional switches) and added range of IP addresses in IP Addresses (page 140) to cover more than two maintenance switches. In Appendix Cables (page 150): Combined cable tables into Cable Types, Connectors, Lengths, and Product IDs (page 150). Changed Storage CLIM SAS HBA port to SAS tape to DL385 G5 Storage CLIM SAS HBA port to SAS tape (carrier-grade tape only). Added cable for DL380 G6 Storage CLIM to M8381-25 (D2700) SAS disk enclosure. Added cable for M8380-25 (MSA70) SAS disk enclosure to M8380-25 (MSA70) SAS disk enclosure (daisy-chain). Added row for DL380 G6 IB CLIM HCA port to customer-supplied IB switch. Added Maintenance LAN interconnect (CAT 5e and CAT 6). Below table added notes about ServerNet cluster connections and BladeCluster connections.
In Operations and Management Using OSM Applications (page 155) changed table and note below table. Changed AC Power Monitoring (page 156), including OSM Power Fail Support (page 156) and Considerations for Ride-Through Time Configuration (page 157). New and Changed Information for 529567 021 Added DL385 G2 and G5 CLIM and DL380 G6 CLIM designations throughout the document. Added Telco CLIM to NonStop NS16000 Series System Overview (page 19), Processor Switch Group-Module-Slot Numbering (page 26), CLIM Connection Group-Module-Slot-Port Numbering (page 27), Configuration Forms for CLIMs and ServerNet Adapters (page 31), Cable Length Restrictions, Processor Switch ServerNet Connections (page 75), Processor Switches to Networking CLIMs (page 76), Ethernet to Networks (page 103), Default Naming Conventions (page 105), and Cable Length Restrictions. Added Telco CLIM, DL380 G6 CLIM, D2700 SAS disk enclosure, and CLIM cable management Ethernet patch panel to NonStop NS16000 Series Hardware (page 20). Revised Enclosure Power Loads (page 55). Changed footnotes to Enclosure Power Loads (page 55), Unit Sizes (page 57), Enclosure Dimensions (page 58), and Modular Cabinet and Enclosure Weights With Worksheet (page 59). Added CLIM cable management Ethernet patch panel to Unit Sizes (page 57). Added DL380 G6 CLIM, D2700 SAS disk enclosure, and CLIM cable management Ethernet patch panel to Enclosure Dimensions (page 58). Added DL380 G6 CLIM, D2700 SAS disk enclosure, SAS disk drives, disk blank, and CLIM cable management Ethernet patch panel to Modular Cabinet and Enclosure Weights With Worksheet (page 59). Revised Heat Dissipation Specifications and Worksheet (page 60). Added SAS disk enclosure to Operating Temperature, Humidity, and Altitude (page 61). Revised cabinet load calculations in Table 2 (page 63), Table 3 (page 64), and Table 4 (page 64). Added Telco CLIM and CLIM cable management Ethernet patch panel to Enclosure Locations in Cabinets (page 135). Cable Length Restrictions Added DL380 Storage CLIM and D2700 SAS disk enclosure to Storage CLIM Devices (page 80). Added DL380 G6 Storage CLIM to SAS Ports to SAS Disk Enclosures (page 82) and Configuration Restrictions for Storage CLIMs (page 82). Revised Configurations for Storage CLIMs and SAS Disk Enclosures (page 83): Changed DL385 G2 or G5 Storage CLIM and SAS Disk Enclosure Configurations (page 83), including Two DL385 G2 or G5 Storage CLIMs, Two MSA70 SAS Disk Enclosures (page 83), Four DL385 G2 or G5 Storage CLIMs, Four MSA70 SAS Disk Enclosures (page 84), and Daisy-Chain Configurations (DL385 G2 or G5 Storage CLIMs Only with MSA70 SAS Disk Enclosures) (page 84). Added DL380 G6 Storage CLIM and SAS Disk Enclosure Configurations (page 85), including Two DL380 G6 Storage CLIMs, Two D2700 SAS Disk Enclosures (page 85), Two DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures (page 87), Four DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures (page 87), and Four DL380 G6 Storage CLIMs, Eight D2700 SAS Disk Enclosures (page 88). 10
Changed IP CLIM A and IP CLIM B to IP CLIM option 1 and IP CLIM option 2, respectively, in IP CLIM Ethernet Interfaces (page 103) and added DL380 G6 IP CLIMs. Added Telco CLIM Ethernet Interfaces (page 104). Revised CLuster I/O Modules (CLIMs) (page 115) to distinguish between DL380 G6 and DL385 G2 or G5 base server models of CLIMs, to add Table 5 (page 116) listing CLIM models and RVU requirements, to add Telco CLuster I/O Module (CLIM) (page 119), to add the CLIM Cable Management Ethernet Patch Panel (page 122), and to revise Storage CLuster I/O Module (CLIM) (page 123). Added D2700 SAS disk enclosure, 6Gbps protocol 2.5 inch SAS disk drive, and 300 GB 10K rpm SAS disk to Serially Attached SCSI (SAS) Disk Enclosure (page 124). Removed number of ports on ProCurve maintenance switch in Maintenance Switch (Ethernet) (page 130). Revised System Console (page 132) to add recommendation for two system consoles and to add information about DHCP, TFTP, and DNS services. Added Telco CLIM and CLIM cable management Ethernet patch panel to Minimum and Maximum Hardware Configuration (page 135) and changed maximum number of LSU logic board and optics adapter and maximum number of IP CLIM. Revised Maintenance and Support Connectivity (page 136) and added that a maximum of eight NonStop systems can be connected to the dedicated service LAN. Revised figure in Basic LAN Configuration (page 137). Revised figure in Fault-Tolerant Configuration (page 138). Added DHCP, TFTP, and DNS Windows-Based Services (page 139). In IP Addresses (page 140), for the default IP address for CLIM iLOs, removed "on the NonStop system console" to allow the DHCP server on the LAN to be on a NonStop system console or CLIM. Changed section title from Operating Configurations for Dedicated Service LANs to Additional Configuration for OSM (page 147) and made changes. In System Console (page 147) changed references table. Removed Multiple System Consoles Managing One System and Cascading Ethernet Switch or Hub Configuration in System Console Configurations (page 147). Changed section title to Primary and Backup System Consoles Managing Multiple Systems (page 149). Changed Operations and Management Using OSM Applications (page 155). Changed section title to Using OSM for Down-System Support (page 156).
Unit Sizes (page 57) Enclosure Dimensions (page 58) Modular Cabinet and Enclosure Weights With Worksheet (page 59) Heat Dissipation Specifications and Worksheet (page 60) Operating Temperature, Humidity, and Altitude (page 61) Enclosure Locations in Cabinets (page 135) Cable Length Restrictions (page 69) Processor Switch ServerNet Connections (page 75) Processor Switches to Storage CLIMs (page 77) Storage CLIM Devices (page 80) Configuration Restrictions for Storage CLIMs (page 82) Configurations for Storage CLIMs and SAS Disk Enclosures (page 83) Table in Default Naming Conventions (page 105) Storage CLuster I/O Module (CLIM) (page 112) Minimum and Maximum Hardware Configuration (page 135) Cable Length Restrictions SAS disk enclosures and/or SAS disk drives added to: Table in NonStop NS16000 Series System Overview (page 19) Listing in NonStop NS16000 Series Hardware (page 20) Table in Enclosure Power Loads (page 55) Unit Sizes (page 57) Enclosure Dimensions (page 58) Modular Cabinet and Enclosure Weights With Worksheet (page 59) Heat Dissipation Specifications and Worksheet (page 60) Enclosure Locations in Cabinets (page 135) Cable Length Restrictions (page 69) SAS Ports to SAS Disk Enclosures (page 82) SAS Ports to SAS Tape Devices (page 82) Factory-Default Disk Volume Locations for SAS Disk Devices (page 82) Configurations for Storage CLIMs and SAS Disk Enclosures (page 83) Table in Default Naming Conventions (page 105) Serially Attached SCSI (SAS) Disk Enclosure (page 124) Cable Length Restrictions Term slice has been changed to NonStop Blade Element in IP CLIM table row in Enclosure Locations in Cabinets (page 135). Processor Switches to IP CLIMs (page 81) has been changed. IP CLIM has been changed in Minimum and Maximum Hardware Configuration (page 135). AC Power Monitoring (page 156) has been changed. OSM Power Fail Support (page 156) has been changed. 12
New and Changed Information for 529567 019 Changed the C13 receptacle type from 10A to 12A in these sections: Monitored Single-Phase PDUs (page 37) North America and Japan: 200 to 240 V AC, Monitored Single-Phase PDUs (page 41) International: 200 to 240 V AC, Monitored Single-Phase PDUs (page 42) Monitored Three-Phase PDUs (page 42) North America and Japan: 200 to 240 V AC, Monitored Three-Phase PDUs (page 46) International: 380 to 415 V AC, Monitored Three-Phase PDUs (page 47) Modular Three-Phase PDUs (page 47) North America and Japan: 200 to 240 V AC, Modular Three-Phase PDUs and Extension Bars (page 52) International: 380 to 415 V AC, Modular Three-Phase PDUs and Extension Bars (page 53) Changed the C19 receptacle type from 12A to 16A in these sections: Monitored Single-Phase PDUs (page 37) North America and Japan: 200 to 240 V AC, Monitored Single-Phase PDUs (page 41) International: 200 to 240 V AC, Monitored Single-Phase PDUs (page 42) Monitored Three-Phase PDUs (page 42) North America and Japan: 200 to 240 V AC, Monitored Three-Phase PDUs (page 46) International: 380 to 415 V AC, Monitored Three-Phase PDUs (page 47) Made changes to the Enclosure Dimensions (page 58) table. Made changes to the Modular Cabinet and Enclosure Weights With Worksheet (page 59) table. Added note to Chapter 7: Maintenance and Support Connectivity (page 136) and Appendix B: Operations and Management Using OSM Applications (page 155) indicating that HP Insight Remote Support Advanced is the go-forward remote support solution for NonStop systems, replacing the OSM Notification Director in both modem-based and HP Instant Support Enterprise Edition (ISEE) remote support solutions. Added information to OSM Power Fail Support (page 156). Added a statement to Appendix A: Cables (page 150) indicating that GESA external ports support CAT 6 cables that you provide. New and Changed Information for 529567 018 Made additions and changes to the Internal Cables table. Under IP Addresses (page 140), changed locations and numbers of rack-mounted UPSs. Added entry about the support of the NonStop BladeCluster Solution to the NS16000-series system architecture table under Chapter 1: NonStop NS16000 Series System Overview (page 19). Added note to Processor Switch ServerNet Connections (page 75) referencing the BladeCluster Solution Manual for information about connecting processor switches to a BladeCluster. New and Changed Information 13
Added section, BladeCluster Solution Installation and Migration Tasks (page 31), indicating that the checklist for installing a BladeCluster Solution is located in the BladeCluster Solution Manual. Under Appendix C: Default Startup Characteristics (page 159), added a note stating that the configurations documented here are typical for most sites. Your system load paths might be different, depending upon how your system is configured. To determine the configuration of your system, refer to the system attributes in the OSM Service Connection. You can select this from within the System Load dialog box in the OSM Low-Level Link. Under Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module (page 96), corrected the description of Two FCSAs, Two FCDMs, One IOAM Enclosure (page 97) and changed the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identifications for the factory-default system disk locations under Four FCSAs, Four FCDMs, One IOAM Enclosure (page 97) and Four FCSAs, Four FCDMs, Two IOAM Enclosures (page 99). New and Changed Information for 529567 017 14 Added the new modular power distribution units (PDUs) for three-phase power configurations. Updated these sections: Modular Cabinets (page 37) Monitored Single-Phase PDUs (page 37) Monitored Three-Phase PDUs (page 42) Modular Three-Phase PDUs (page 47) Circuit Breaker Ratings for UPS (page 54) Dimensions and Weights (page 56), Modular Cabinet and Enclosure Weights With Worksheet (page 59) Under Primary and Backup System Consoles Managing Multiple Systems (page 149), added text to indicate that as of the J06.07 and H06.18 RVUs, you can configure a CLuster I/O Module (CLIM) as the DHCP/DNS server instead of the default NonStop system console. Changed references to PC system to Windows Servers in Chapter 6: Examples of Configurations under Triplex 8 Processor System, Two Cabinets and throughout Chapter 7: Maintenance and Support Connectivity (page 136). Changed figures to show appropriate system console icon. Under Chapter 7: Maintenance and Support Connectivity (page 136), indicated that the HP ISEE call-out and call-in access is determined by what access method (for example, direct connect or VPN router) you have ordered. Changed figure. Under IP CLuster I/O Module (CLIM) (page 111), indicated that the CLIM complies with Internet Protocol version 6 (IPv6), an Internet Layer protocol for packet-switched networks, and has passed official certification of IPv6 readiness. Under Modular Cabinets (page 37), added note: For instructions on grounding the modular cabinet's G2 rack using the HP Rack Grounding Kit (AF074A), ask your service provider to refer to the instructions in the HP 10000 G2 Series Rack Options Installation Guide. This manual is available at: http://bizsupport1.austin.hp.com/bc/docs/support/supportmanual/ c01493702/c01493702.pdf. Added new procedures to AC Power Monitoring (page 156), including the new subsections, OSM Power Fail Support (page 156), Considerations for Ride-Through Time Configuration (page 157), and Considerations for Site UPS Configurations (page 158). Under Modular Cabinet and Enclosure Weights With Worksheet (page 59), changed the weight of the IP CLIM to 53 lbs and 24 kg.
New and Changed Information for 529567 016

Corrected the operating temperatures under Operating Temperature, Humidity, and Altitude (page 61) and Nonoperating Temperature, Humidity, and Altitude (page 62). Indicated that when connecting a networking CLIM, slot 4 might already be used for S-series networking. If that is the case, use slots 5 through 9, starting with 9, for NS-series networking CLIMs. Updated: Processor Switch Group-Module-Slot Numbering (page 26), CLIM Connection Group-Module-Slot-Port Numbering (page 27), and Processor Switch ServerNet Connections (page 75). Indicated under Default Naming Conventions (page 105) that changing the process names for $ZTCP0 and $ZTCP1 will make some components inaccessible. Changed the weight of the PDU monitored configuration from 303 lbs. to 328 lbs. due to the additional weight of the rack extension. See Modular Cabinet and Enclosure Weights With Worksheet (page 59), Table 2: Cabinet One Load Calculations (page 63), Table 3: Cabinet Two Load Calculations (page 64), and Table 4: Cabinet Three Load Calculations (page 64).

New and Changed Information for 529567 015

The following topics have been updated: Chapter 1 (page 19), Chapter 3 (page 37), Chapter 4 (page 66), Chapter 5 (page 107), Chapter 6 (page 135), Chapter 7 (page 136), Appendix B (page 155), and Appendix D (page 161).

Document Organization

This document has the following chapters.

Chapter 1: NonStop NS16000 Series System Overview
This chapter provides an overview of the modular Integrity NonStop NS16000 series system hardware.

Chapter 2: Site Preparation Guidelines
This chapter outlines topics to consider when planning or upgrading the installation site.

Chapter 3: System Installation Specifications
This chapter provides the installation specifications for fully populated Integrity NonStop NS16000 series enclosures.

Chapter 4: System Configuration Guidelines
This chapter describes the guidelines for implementing the modular hardware.

Chapter 5: Modular System Hardware
This chapter describes the modular hardware components used in Integrity NonStop NS16000 series systems.

Chapter 6 (page 135)
This chapter shows example configurations of the Integrity NonStop NS16000 series modular hardware.
Chapter 7 (page 136)
This chapter describes the planning tasks for your dedicated service LAN and system consoles.

Appendix A: Cables
This appendix identifies the cables used with the Integrity NonStop NS16000 series hardware.

Appendix B: Operations and Management Using OSM Applications
This appendix describes the OSM management tools used in Integrity NonStop NS16000 series systems.

Appendix C (page 159)
This appendix describes the default startup characteristics for system disks.

Appendix D (page 161)
This appendix describes connection and migration considerations for a NonStop S-Series system when used with an Integrity NonStop NS16000 series system.

Notation Conventions

General Syntax Notation

This list summarizes the notation conventions for syntax presentation in this manual.

UPPERCASE LETTERS
Uppercase letters indicate keywords and reserved words. Type these items exactly as shown. Items not enclosed in brackets are required. For example:
MAXATTACH

Italic Letters
Italic letters, regardless of font, indicate variable items that you supply. Items not enclosed in brackets are required. For example:
file-name

Computer Type
Computer type letters indicate:
C and Open System Services (OSS) keywords, commands, and reserved words. Type these items exactly as shown. Items not enclosed in brackets are required. For example: Use the cextdecs.h header file.
Text displayed by the computer. For example:
Last Logon: 14 May 2006, 08:02:23
A listing of computer code. For example:
if (listen(sock, 1) < 0)
{
perror("listen Error");
exit(-1);
}

Bold Text
Bold text in an example indicates user input typed at the terminal. For example:
ENTER RUN CODE
?123
CODE RECEIVED: 123.00
The user must press the Return key after typing the input.
[ ] Brackets
Brackets enclose optional syntax items. For example:
TERM [\system-name.]$terminal-name
INT[ERRUPTS]
A group of items enclosed in brackets is a list from which you can choose one item or none. The items in the list can be arranged either vertically, with aligned brackets on each side of the list, or horizontally, enclosed in a pair of brackets and separated by vertical lines. For example:
FC [ num ]
   [ -num ]
   [ text ]
K [ X | D ] address

{ } Braces
A group of items enclosed in braces is a list from which you are required to choose one item. The items in the list can be arranged either vertically, with aligned braces on each side of the list, or horizontally, enclosed in a pair of braces and separated by vertical lines. For example:
LISTOPENS PROCESS { $appl-mgr-name }
                  { $process-name }
ALLOWSU { ON | OFF }

| Vertical Line
A vertical line separates alternatives in a horizontal list that is enclosed in brackets or braces. For example:
INSPECT { OFF | ON | SAVEABEND }

... Ellipsis
An ellipsis immediately following a pair of brackets or braces indicates that you can repeat the enclosed sequence of syntax items any number of times. For example:
M address [ , new-value ]...
[ - ] {0|1|2|3|4|5|6|7|8|9}...
An ellipsis immediately following a single syntax item indicates that you can repeat that syntax item any number of times. For example:
"s-char..."

Punctuation
Parentheses, commas, semicolons, and other symbols not previously described must be typed as shown. For example:
error := NEXTFILENAME ( file-name ) ;
LISTOPENS SU $process-name.#su-name
Quotation marks around a symbol such as a bracket or brace indicate the symbol is a required character that you must type as shown. For example:
"[" repetition-constant-list "]"
Item Spacing
Spaces shown between items are required unless one of the items is a punctuation symbol such as a parenthesis or a comma. For example:
CALL STEPMOM ( process-id ) ;
If there is no space between two items, spaces are not permitted. In this example, no spaces are permitted between the period and any other items:
$process-name.#su-name

Line Spacing
If the syntax of a command is too long to fit on a single line, each continuation line is indented three spaces and is separated from the preceding line by a blank line. This spacing distinguishes items in a continuation line from items in a vertical list of selections. For example:
ALTER [ / OUT file-spec / ] LINE

   [ , attribute-spec ]

Publishing History

Part Number | Product Version | Publication Date
529567-019 | N.A. | November 2009
529567-020 | N.A. | February 2010
529567-021 | N.A. | September 2010
529567-022 | N.A. | August 2011
529567-023 | N.A. | February 2012

HP Encourages Your Comments

HP encourages your comments concerning this document. We are committed to providing documentation that meets your needs. Send any errors found, suggestions for improvement, or compliments to docsfeedback@hp.com. Include the document title, part number, and any comment, error found, or suggestion for improvement you have concerning this document.
1 NonStop NS16000 Series System Overview

Integrity NonStop NS16000 series servers use the NonStop NS16000 Series System Architecture (page 20), a number of duplex or triplex processors, and various combinations of modular hardware components installed in 42U modular cabinets. All Integrity NonStop NS16000 series server components are field-replaceable units (FRUs) that can only be serviced by service providers trained by HP. The NonStop NS16000 series includes the NonStop NS16000 server and the NonStop NS16200 server and has these characteristics:

Table 1 Characteristics of a NonStop NS16000 Series System

Processor/Processor model: Intel Itanium/NSE-A for NS16000; Intel Itanium/NSE-T for NS16200

Supported RVU: H06.11 and later RVUs

Cabinet: 42U, 19 inch rack

Minimum/max. memory: 4 GB to 16 GB main memory per logical processor

Minimum/max. processors: 2 to 16

Supported configurations: 2, 4, 6, 8, 10, 12, 14, or 16 processors

Minimum CLIMs: 0 CLIMs (if there are IOAM enclosures); 2 Storage CLIMs and 2 Networking (IP, Telco, or IB) CLIMs if there are no IOAM enclosures. NOTE: Although some configurations allow for 1 Networking CLIM, HP highly recommends using 2 Networking CLIMs of the same type for fault tolerance.

Max. ServerNet links for I/O connections: 16 links per fabric (if connected to a BladeCluster); 24 links per fabric (if not connected to a BladeCluster)

Number of ServerNet links used by I/O connection type per fabric: 1 link used per NonStop S-series I/O enclosure; 1-2 links used per Networking CLIM (depending on configuration); 2 links used per Storage CLIM; 4 links used per IOAM

Maximum SAS disk enclosures per Storage CLIM pair: A Storage CLIM pair supports a maximum of 4 SAS disk enclosures. This maximum applies to all Storage CLIM types: G2, G5, G6.

Maximum SAS disk drives: 100 per Storage CLIM pair

Maximum FCSAs and G4SAs: Up to 60 in combination

Maximum FC disk drives: 112 per FCSA pair

Maximum FCDMs via IOAM enclosure: 4 Fibre-Channel Disk Modules (FCDMs) daisy-chained with 14 disk drives per FCDM

Maximum IOAM enclosures: 6 IOAMs. Depending on your configuration, the maximum number of IOAMs might be different. Check with your HP representative.

ESS support via Storage CLIMs or IOAMs: Supported

Volume Level Encryption: Supported

NonStop ServerNet Clusters connection: Supported

BladeCluster Solution connection: Supported

NonStop S-series I/O enclosures connection: Supported

M8201R Fibre Channel to SCSI router: Supported
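The per-fabric link budget in Table 1 lends itself to a quick planning check. The sketch below is illustrative only and is not part of the guide: a minimal Python calculation that totals the ServerNet links a proposed I/O complement would consume on each fabric, assuming the worst case of 2 links per networking CLIM, and compares the total against the 24-link maximum (16 links when the system is connected to a BladeCluster). The device counts in the example are hypothetical.

```python
# Rough ServerNet link-budget check per fabric for an NS16000 series system.
# Per-device link costs are taken from Table 1; the example device counts
# below are hypothetical.

LINKS_PER_DEVICE = {
    "s_series_io_enclosure": 1,   # 1 link per NonStop S-series I/O enclosure
    "networking_clim": 2,         # 1-2 links per networking CLIM; assume 2 (worst case)
    "storage_clim": 2,            # 2 links per Storage CLIM
    "ioam_enclosure": 4,          # 4 links per IOAM enclosure
}

def links_used(config):
    """Return the number of ServerNet links the configuration uses per fabric."""
    return sum(LINKS_PER_DEVICE[device] * count for device, count in config.items())

def check_link_budget(config, bladecluster=False):
    """Compare link usage against the per-fabric maximum from Table 1."""
    limit = 16 if bladecluster else 24
    used = links_used(config)
    return used, limit, used <= limit

if __name__ == "__main__":
    # Hypothetical configuration: 2 Storage CLIMs, 2 networking CLIMs,
    # 1 IOAM enclosure, and 2 NonStop S-series I/O enclosures.
    example = {
        "storage_clim": 2,
        "networking_clim": 2,
        "ioam_enclosure": 1,
        "s_series_io_enclosure": 2,
    }
    used, limit, fits = check_link_budget(example, bladecluster=False)
    print(f"{used} of {limit} links used per fabric; fits: {fits}")
```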
NonStop NS16000 Series System Architecture

Integrity NonStop NS16000 series systems employ a unique method for achieving fault tolerance in a clustered processor environment: the modular NonStop NS16000 series system architecture, which utilizes Intel Itanium microprocessors without cycle-by-cycle lock-stepping. Instead, two or three microprocessors run the same instruction stream concurrently in a loose lockstep process. In loose lockstep:

Each microprocessor runs at its own clock rate.

Results of each command execution are compared on processor output to the ServerNet fabric.

Error recovery and minor indeterminate processing results from one microprocessor do not cause output comparison errors.

Coexistence of Processor Types

NS16000 NSE-A Blade Complexes can coexist with NS16200 NSE-T Blade Complexes on both types of NonStop NS16000 series servers, subject to the following rules:

All processor elements in a logical processor must be of the same type.

All processor elements in a NonStop Blade Complex must be of the same type.

All Blade Elements in a Blade Complex must have the same number of processor elements.

NSE-T and NSE-A Blade Complexes can coexist in the same system (H06.12 or later RVUs).

No changes to the CONFTEXT file are required to support mixed processor types.

For information on upgrading a NonStop NS16000 server with NSE-T Blade Complexes, have your service provider see Upgrading to NS16200 Blade Complexes in the Service Procedures collection of NTL.

NonStop NS16000 Series Hardware

Standard hardware for a NonStop NS16000 series system includes:

NonStop Blade Element

Processor switch

IP CLuster I/O Module (CLIM)

Telco CLuster I/O Module (CLIM)

IB CLuster I/O Module (CLIM)

CLIM cable management Ethernet patch panel

Storage CLuster I/O Module (CLIM)

NOTE: DL385 G2 or G5 Storage CLIMs support MSA70 SAS disk enclosures, but not HP Storage D2700 SAS disk enclosures. DL380 G6 Storage CLIMs support HP Storage D2700 SAS disk enclosures, but not MSA70 SAS disk enclosures.
SAS disk enclosure

NOTE: MSA70 SAS disk enclosures can contain hard disk drives (HDDs). D2700 SAS disk enclosures can contain hard disk drives (HDDs) or Solid State Drives (SSDs).

Logical synchronization unit (LSU)

I/O adapter module (IOAM) enclosure

Fibre Channel disk module (FCDM)

Maintenance switch (Ethernet)

Optional hardware for a NonStop NS16000 series system includes:

UPS and ERM

Enterprise Storage System (ESS)

Tape drive and interface hardware

Connections to NonStop S-Series I/O Enclosure are supported.

NOTE: All of these hardware components are described in Chapter 5 (page 107).

Each HP modular cabinet includes two power distribution units (PDUs) and AC power feed cables factory-installed at either the upper or lower rear corners. This illustration shows an example NonStop NS16000 series system with a duplex processor (front and rear view). This illustration contains IP CLIMs, but not Telco, IB, or Storage CLIMs.
A large number of enclosure combinations are possible within the modular cabinet(s) that make up an Integrity NonStop NS16000 series server. The applications and purpose of any NonStop NS-series server determine the number and combinations of the enclosures and modular cabinets. Because of the large number of possible configurations, you calculate the total power consumption, heat dissipation, and weight of each modular cabinet based on the hardware configuration that you order from HP. For site preparation specifications of the modular cabinet and the individual enclosures, see Chapter 3 (page 37).

Preparation for Other Server Hardware

This guide provides the specifications only for the Integrity NonStop NS16000 series modular cabinet and enclosures identified earlier in this section. For site preparation specifications for other HP hardware that will be installed at the site with the Integrity NonStop NS16000 series servers, consult with your HP account team. For site preparation specifications relating to hardware from other manufacturers, refer to the documentation for those devices.

Component Location and Identification

Topics discussed in this subsection are:

Terminology (page 23)

Rack and Offset Physical Location (page 24)
NonStop Blade Element Group-Module-Slot Numbering (page 24)

LSU Group-Module-Slot Numbering (page 25)

Processor Switch Group-Module-Slot Numbering (page 26)

CLIM Connection Group-Module-Slot-Port Numbering (page 27)

IOAM Group-Module-Slot Numbering (page 27)

Fibre Channel Disk Module Group-Module-Slot Numbering (page 28)

NonStop S-Series I/O Enclosure Group Numbers (page 29)

Terminology

These terms are used in locating and describing components:

Cabinet: Computer system housing that includes a structure of external panels, front and rear doors, internal racking, and dual PDUs.

Rack: Structure integrated into the cabinet into which rackmountable components are assembled.

Rack Offset: The physical location of components installed in a modular cabinet, measured in U values numbered 1 to 42, with 1U at the bottom of the cabinet. A U is 1.75 inches (44 millimeters).

Group: A subset of a system that contains one or more modules. A group does not necessarily correspond to a single physical object, such as an enclosure.

Module: A subset of a group that is usually contained in an enclosure. A module contains one or more slots (or bays). A module can consist of components sharing a common interconnect, such as a backplane, or it can be a logical grouping of components performing a particular function.

Slot (or Bay or Position): A subset of a module that is the logical or physical location of a component within that module.

Port: A connector to which a cable can be attached and which transmits and receives data.

Group-Module-Slot (GMS): A notation method used by hardware and software in NonStop systems for organizing and identifying the location of certain hardware components.

NonStop Blade Complex: A set of two or three NonStop Blade Elements, identified as A, B, or C, and their associated LSUs. Each NonStop Blade Complex usually has four logical processors. A 16-processor system employs four NonStop Blade Complexes.

NonStop Blade Element: A physical portion of a logical processor containing up to four processor elements. Each processor element supports a different logical processor numbered 0-15.

LSU: A component of the system that synchronizes the processor elements of a logical processor and validates all output operations from each processor element before passing the output to the ServerNet fabric.
On Integrity NonStop NS16000 series systems, locations of the physical and logical modular components are identified by:

Physical location:
Rack number
Rack offset

Logical location:
GMS notation determined by the position of the component on ServerNet

In NonStop S-series systems, group, module, and slot (GMS) notation identifies the physical location of a component. However, GMS notation in Integrity NonStop NS16000 series systems is the logical location of particular components rather than the physical location.

Rack and Offset Physical Location

Rack name and rack offset identify the physical location of components in an Integrity NonStop NS16000 series system. The rack name is located on an external label affixed to the rack, which includes the system name plus a 2-digit rack number. Rack offset is labeled on the rails in each side of the rack. These rails are measured vertically in units called U, with one U measuring 1.75 inches (44 millimeters). The rack is 42U high, with 1U located at the bottom and 42U at the top. The rack offset is the lowest number on the rack that the component occupies. This example shows the location of NonStop Blade Element A in rack 1 with an offset of 3U and NonStop Blade Element B with an offset of 8U:

NonStop Blade Element Group-Module-Slot Numbering

Processor group: 400 through 403 relates to NonStop Blade Complex 0 through 3. Example: group 403 = NonStop Blade Complex 3

Module: 1 through 3 relates to the processor NonStop Blade Element ID A through C. Example: module 2 = NonStop Blade Element B

Slot: 71 through 78 relates to location of the Blade optics adapter. Example: Slot 72 = Blade optics adapter in slot 72

Port: J0 through J7 or K0 through K7 relates to the two optics ports in a specific slot.
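To make these numbering rules concrete, here is a minimal illustrative sketch (Python, not part of the guide). It assumes the default assignment shown in the numbering table that follows, where logical processors 0-3, 4-7, 8-11, and 12-15 belong to NonStop Blade Complexes 0 through 3 (groups 400 through 403), and it treats the Blade Element letter, optics slot, and port as inputs.

```python
# Illustrative sketch of NonStop Blade Element group-module-slot numbering.
# Group 400-403 maps to NonStop Blade Complex 0-3, module 1-3 maps to
# Blade Element A-C, and Blade optics adapters are identified as slots 71-78.

BLADE_ELEMENT_MODULE = {"A": 1, "B": 2, "C": 3}

def blade_element_gms(logical_processor, blade_element, optics_slot, port):
    """Return (group, module, slot, port) for a Blade optics adapter location."""
    if not 0 <= logical_processor <= 15:
        raise ValueError("logical processors are numbered 0-15")
    if optics_slot not in range(71, 79):
        raise ValueError("Blade optics adapters are identified as slots 71-78")
    if port not in [f"J{i}" for i in range(8)] + [f"K{i}" for i in range(8)]:
        raise ValueError("ports are J0-J7 or K0-K7")
    complex_number = logical_processor // 4   # default: processors 0-3 -> complex 0, etc.
    group = 400 + complex_number
    module = BLADE_ELEMENT_MODULE[blade_element]
    return group, module, optics_slot, port

# Example: logical processor 13 in Blade Element B, optics adapter slot 72, port J0.
print(blade_element_gms(13, "B", 72, "J0"))   # -> (403, 2, 72, 'J0')
```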
A number of GMS configurations are possible in the modular Integrity NonStop NS16000 series system. This table shows the default numbering for the logical processors:

Logical Processors | Group (NonStop Blade Complex)
0-3* | 400
4-7 | 401
8-11 | 402
12-15 | 403

For all groups:
Module (NonStop Blade Element): 1 (A), 2 (B), 3 (C)
Slot (Optics): Blade optics adapters 1-8 (software identified as slots 71-78)
Port: J0-J7, K0-K7

* Logical processor 0 must be in NonStop Blade Complex 0 (group 400). All other processors can be in any user-defined group.

This illustration shows GMS numbering for a NonStop Blade Element enclosure:

LSU Group-Module-Slot Numbering

This table shows the default numbering for the LSUs:

Item | Group (NonStop Blade Complex)¹ | Module | I/O Position (Slot)
Individual LSU J set | 400-403 | 100 + NonStop Blade Complex number | 1 - Optics adapter (rear side, slots 20-27); 2 - Logic board (front side, slots 50-57)
Individual LSU K set | Not used at this time | |

¹ See NonStop Blade Element Group-Module-Slot Numbering (page 24).

Group and module numbers correspond to the logical processor number (0, 1, 2, 3) and NonStop Blade Element (A, B, C) as determined by the ServerNet connection to the p-switch. This illustration shows an example LSU configuration equipped with four optic adapters (rear side) in slots 20 through 23 and four LSU logic boards (front side) in positions 50 through 53:
Processor Switch Group-Module-Slot Numbering

This table shows the default numbering for the p-switch:

Group: 100
X ServerNet Module: 2
Y ServerNet Module: 3

Slot | Item
1 | Maintenance PIC
2 | Cluster PIC
3 | Crosslink PIC
4-9¹ | ServerNet I/O PICs
10 | ServerNet PIC (processors 0-3)
11 | ServerNet PIC (processors 4-7)
12 | ServerNet PIC (processors 8-11)
13 | ServerNet PIC (processors 12-15)
14 | P-switch logic board
15, 18 | Power supply A and B
16, 17 | Fan A and B

¹ Networking CLIMs (IB, IP, and Telco CLIMs) can use slots 4-9, starting with slot 9. If slot 4 is needed for S-series, networking CLIMs would use slots 5-9. Storage CLIMs can use slots 4-9, starting with slot 4. If slot 4 is needed for S-series networking, Storage CLIMs would use slots 5-9. For RVU requirements for CLIMs, see Table 5 (page 116).

This illustration shows the slot and connector locations for the p-switch:

Figure 1 Slot and Connector Locations for the Processor Switch
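The footnote above amounts to a slot-assignment rule: networking CLIMs (IB, IP, and Telco) claim ServerNet I/O PIC slots starting at slot 9 and working down, Storage CLIMs start at slot 4 and work up, and slot 4 is skipped when it is needed for S-series connections. The following sketch is illustrative only (the CLIM counts are hypothetical) and simply applies those rules to one p-switch.

```python
# Illustrative assignment of p-switch ServerNet I/O PIC slots (4-9) to CLIMs,
# following the rules in the footnote above. The CLIM counts are hypothetical.

def assign_clim_slots(n_networking, n_storage, slot4_for_s_series=False):
    """Return {slot: role} for the ServerNet I/O PIC slots 4-9 of one p-switch."""
    available = list(range(5 if slot4_for_s_series else 4, 10))
    if n_networking + n_storage > len(available):
        raise ValueError("not enough ServerNet I/O PIC slots for this configuration")
    assignment = {}
    # Networking CLIMs (IB, IP, Telco) start with slot 9 and work downward.
    for slot in sorted(available, reverse=True)[:n_networking]:
        assignment[slot] = "networking CLIM"
    # Storage CLIMs start with the lowest available slot and work upward.
    remaining = [s for s in available if s not in assignment]
    for slot in remaining[:n_storage]:
        assignment[slot] = "storage CLIM"
    if slot4_for_s_series:
        assignment[4] = "S-series I/O"
    return dict(sorted(assignment.items()))

# Example: 2 networking CLIMs and 2 Storage CLIMs, slot 4 kept for S-series I/O.
print(assign_clim_slots(2, 2, slot4_for_s_series=True))
# {4: 'S-series I/O', 5: 'storage CLIM', 6: 'storage CLIM',
#  8: 'networking CLIM', 9: 'networking CLIM'}
```

A CLIM connects to both fabrics, using the same PIC slot on module 2 (X) and module 3 (Y) of group 100, as the CLIM connection table in the next subsection shows.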
CLIM Connection Group-Module-Slot-Port Numbering

This table lists the default numbering for P-switch connections to a CLIM:

CLIM Group¹ | Module | P-Switch PIC Slot² | PIC Port Numbers
100 | 2 | 4-9 | 1-4
100 | 3 | 4-9 | 1-4

¹ For RVU requirements for CLIMs, see Table 5 (page 116).
² CLIMs can use slots 4-9. Networking CLIMs start with slot 9. Storage CLIMs start with slot 4. If slot 4 is needed for S-series networking, CLIMs use slots 5-9.

IOAM Group-Module-Slot Numbering

These tables show the default numbering for the IOAM enclosure:

IOAM Group | P-Switch PIC Slot | PIC Port Numbers
110 | 4 | 1-4
111 | 5 | 1-4
112 | 6 | 1-4
113 | 7 | 1-4
114 | 8 | 1-4
115 | 9 | 1-4

IOAM Group: 110-115 (see preceding table)
X ServerNet Module: 2
Y ServerNet Module: 3

Slot | Item | Port
1 to 5 | ServerNet adapters | 1 - n: where n is number of ports on adapter
14 | ServerNet switch logic board | 1-4
15, 18 | Power supplies | -
16, 17 | Fans | -
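Because the group-to-slot relationship in the first table is a fixed offset (group 110 on PIC slot 4 through group 115 on PIC slot 9), it can be captured in a couple of lines. This is an illustrative sketch, not part of the guide:

```python
# Illustrative lookup between IOAM group numbers (110-115) and the p-switch
# ServerNet I/O PIC slots (4-9) they connect to, matching the first table above.

def ioam_group_for_pic_slot(pic_slot):
    if pic_slot not in range(4, 10):
        raise ValueError("IOAM enclosures connect to p-switch PIC slots 4-9")
    return 106 + pic_slot          # slot 4 -> group 110, ..., slot 9 -> group 115

def pic_slot_for_ioam_group(group):
    if group not in range(110, 116):
        raise ValueError("IOAM groups are numbered 110-115")
    return group - 106

print(ioam_group_for_pic_slot(6))      # 112
print(pic_slot_for_ioam_group(115))    # 9
```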
This illustration shows the slot locations for the IOAM enclosure:

Fibre Channel Disk Module Group-Module-Slot Numbering

This table shows the default numbering for the Fibre Channel disk module:

IOAM enclosure:
Group: 110-115
Module: 2 - X fabric; 3 - Y fabric
Slot (FCSA): 1-5
F-SACs: 1, 2

FCDM:
Shelf: 1-4 if daisy-chained; 1 if single disk enclosure

Slot | Item
0 | Fibre Channel disk module
1-14 | Disk drive bays
89 | Transceiver A1
90 | Transceiver A2
91 | Transceiver B1
92 | Transceiver B2
93 | Left FC-AL board
94 | Right FC-AL board
95 | Left power supply
96 | Right power supply
97 | Left blower
98 | Right blower
99 | EMU

The form of the GMS numbering for a disk in a Fibre Channel disk module is:

This example shows the disk in bay 03 of the Fibre Channel disk module that connects to the FCSA in the IOAM group 111, module 2, slot 1, FSAC 1:
NonStop S-Series I/O Enclosure Group Numbers

Assignment of the group number of each NonStop S-series I/O enclosure depends on the cable connection to the p-switch PIC by slot and port. For PIC slot and connector locations, see Processor Switch Group-Module-Slot Numbering (page 26). The cables from the two IOMF 2 CRUs must connect to PICs residing in the same slot number in both the X and Y p-switch and to the same port number on each PIC. For example, the preceding illustration shows the cables from the IOMF 2 CRUs in the NonStop S-series server connected to port 1 of the PICs in slot 4 of the X and Y p-switch, assigning the group number of 11. This table shows the group number assignments for the NonStop S-series I/O enclosures:

P-Switch PIC Slot (X and Y Fabrics) | P-Switch PIC Connector | NonStop S-Series I/O Enclosure Group
4 | 1 | 11
4 | 2 | 12
4 | 3 | 13
4 | 4 | 14
5 | 1 | 21
5 | 2 | 22
5 | 3 | 23
5 | 4 | 24
6 | 1 | 31
6 | 2 | 32
6 | 3 | 33
6 | 4 | 34
7 | 1 | 41
7 | 2 | 42
7 | 3 | 43
7 | 4 | 44
8 | 1 | 51
8 | 2 | 52
8 | 3 | 53
8 | 4 | 54
9 | 1 | 61
9 | 2 | 62
9 | 3 | 63
9 | 4 | 64
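The group numbers in this table follow a regular pattern: the tens digit reflects the PIC slot (slot 4 yields groups in the 10s, slot 9 in the 60s) and the units digit is the PIC connector number. A small illustrative sketch of that relationship, checked against the table, follows; it is not part of the guide.

```python
# Illustrative mapping from p-switch PIC slot and connector to the
# NonStop S-series I/O enclosure group number, matching the table above.

def s_series_group(pic_slot, pic_connector):
    if pic_slot not in range(4, 10):
        raise ValueError("S-series I/O enclosures connect to PIC slots 4-9")
    if pic_connector not in range(1, 5):
        raise ValueError("each PIC has connectors 1-4")
    return (pic_slot - 3) * 10 + pic_connector

# Spot checks against the table: slot 4/connector 1 -> group 11,
# slot 6/connector 3 -> group 33, slot 9/connector 4 -> group 64.
assert s_series_group(4, 1) == 11
assert s_series_group(6, 3) == 33
assert s_series_group(9, 4) == 64
print(s_series_group(4, 1), s_series_group(6, 3), s_series_group(9, 4))
```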
This illustration shows the group number assignments on the p-switch:

System Installation Document Packet

To keep track of the hardware configuration, internal and external communications cabling, IP addresses, and connected networks, assemble and retain as the system's records an Installation Document Packet. This packet can include:

Technical Document for the Factory-Installed Hardware Configuration (page 31)

Configuration Forms for CLIMs and ServerNet Adapters (page 31)

ServerNet Cluster Configuration Form (page 31)
Technical Document for the Factory-Installed Hardware Configuration Each new Integrity NonStop NS16000 series system includes a document called a technical document. It serves as the physical location and connection map for the system and describes: Each cabinet included with the system Each hardware enclosure installed in the cabinet Cabinet U location of the bottom edge of each enclosure Each ServerNet cable with: Source and destination enclosure, component, and connector Cable part number Source and destination connection labels Configuration Forms for CLIMs and ServerNet Adapters To add configuration forms for ServerNet adapters or CLIMs to your Installation Document Packet, copy the necessary forms from the adapter manuals or the Cluster I/O Protocols (CIP) Configuration and Management Manual for the CLIMs to be configured. Follow any planning instructions in these manuals. For RVU and SPR requirements for CLIMs, see Table 5 (page 116). ServerNet Cluster Configuration Form The configuration form for installing a ServerNet cluster is located in the ServerNet Cluster Manual. BladeCluster Solution Installation and Migration Tasks The checklist for installing a BladeCluster Solution is located in the BladeCluster Solution Manual. System Installation Document Packet 31
2 Site Preparation Guidelines
This section describes power, environmental, and space considerations for your site.
Modular Cabinet Power and I/O Cable Entry
Power and I/O cables can enter the Integrity NonStop NS16000 series server from either the top or the bottom rear of the modular cabinets, depending on how the cabinets are ordered from HP and the routing of the AC power feeds at the site. Integrity NonStop NS16000 series cabinets can be ordered with the AC power cords for the PDUs exiting either:
Top: Power and I/O cables are routed from above the modular cabinet.
Bottom: Power and I/O cables are routed from below the modular cabinet.
NOTE: If your system includes the optional rackmounted UPS, the modular cabinet will have one PDU located on the rear right side and four extension bars on the rear left side. To provide redundancy, components are plugged into the right-side PDU and the extension bars. Each extension bar is plugged into the UPS. The rackmounted UPS AC input power cord is routed as described above, and the large output receptacle is unused.
For information about modular cabinet power and cable options, refer to Chapter 3 (page 37).
Emergency Power-Off Switches
Emergency Power Off (EPO) switches are required by local codes or other applicable regulations when computer equipment contains batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes. Systems that have these batteries also have internal EPO hardware for connection to a site EPO switch or relay. In an emergency, activating the EPO switch or relay removes power from all electrical equipment in the computer room (except that used for lighting and fire-related sensors and alarms).
EPO Requirement for Integrity NonStop NS16000 Series Servers
Integrity NonStop NS16000 series servers without the optional rackmounted UPS installed in each modular cabinet do not contain batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes, so they do not require connection to a site EPO switch.
EPO Requirement for HP 5000 UPS
The rackmounted HP 5000 UPS that can be optionally installed in a modular cabinet contains batteries and has an EPO circuit. For site EPO switches or relays, consult your HP site preparation specialist or electrical engineer regarding requirements. If an EPO switch or relay contactor is required for your site, contact your HP representative or refer to the HP UPS R5000 User Guide for connector and wiring information.
EPO Requirement for HP 5500 XR UPS
The rackmounted HP 5500 XR UPS that can be optionally installed in a modular cabinet contains batteries and has an EPO circuit. For site EPO switches or relays, consult your HP site preparation specialist or electrical engineer regarding requirements. If an EPO switch or relay contactor is required for your site, contact your HP representative or refer to the HP UPS R5500 User Guide for connector and wiring information.
EPO Requirement for NonStop S-Series I/O Enclosure
Each NonStop S-series I/O enclosure contains batteries and an EPO circuit. If an EPO switch or relay contactor is required for your site, contact your HP representative or refer to the NonStop
S-Series Hardware Installation and FastPath Guide for connector and wiring information. This guide is available in the NonStop Technical Library (NTL).
Electrical Power and Grounding Quality
Proper design and installation of a power distribution system for an Integrity NonStop NS16000 series server requires specialized skills, knowledge, and understanding of appropriate electrical codes and the limitations of the power systems for computer and data processing equipment. For power and grounding specifications, refer to Chapter 3: System Installation Specifications (page 37).
Power Quality
This equipment is designed to operate reliably over a wide range of voltages and frequencies, described in Enclosure AC Input (page 54). However, damage can occur if these ranges are exceeded. Severe electrical disturbances can exceed the design specifications of the equipment. Common sources of such disturbances are:
Fluctuations occurring within the facility's distribution system
Utility service low-voltage conditions (such as sags or brownouts)
Wide and rapid variations in input voltage levels
Wide and rapid variations in input power frequency
Electrical storms
Large inductive sources (such as motors and welders)
Faults in the distribution system wiring (such as loose connections)
Computer systems can be protected from the sources of many of these electrical disturbances by using:
A dedicated power distribution system
Power conditioning equipment
Lightning arresters on power cables to protect equipment against electrical storms
For steps to take to ensure proper power for the servers, consult with your HP site preparation specialist or power engineer.
Grounding Systems
The site building must provide a power distribution safety ground/protective earth for each AC service entrance to all NonStop server equipment. This safety grounding system must comply with local codes and any other applicable regulations for the installation locale. For proper grounding/protective earth connection, consult with your HP site preparation specialist or power engineer.
Power Consumption
In Integrity NonStop NS16000 series systems, the power consumption and inrush currents per connection can vary because of the unique combination of enclosures housed in the modular cabinet. Thus, the total power consumption for the hardware installed in the cabinet should be calculated as described in Enclosure Power Loads (page 55).
Uninterruptible Power Supply (UPS)
Modular cabinets do not have built-in batteries to provide power during power failures. To support system operation through a power failure, Integrity NonStop NS16000 series servers require either an optional UPS (such as the HP R5000 UPS) installed in each modular cabinet or a site UPS. This support can include a
planned orderly shutdown at a predetermined time in the event of an extended power failure. A timely and orderly shutdown prevents an uncontrolled and asymmetric shutdown of the system resources from depleted UPS batteries.
The R5000 UPS supports the OSM power failure support function that allows you to set a ride-through time. If AC power is not restored before the specified ride-through time expires, OSM initiates an orderly system shutdown. For additional information, see AC Power Monitoring (page 156), OSM Power Fail Support (page 156), and Considerations for Ride-Through Time Configuration (page 157).
You can order an optional R5000 UPS for each modular cabinet to supply power to the enclosures within that cabinet. Up to two extended runtime modules (ERMs) can be included with the R5000 UPS to extend the power back-up time. If you add an R5000 UPS to a modular cabinet in the field, the PDU on the left side is replaced with HP extension bars. The extension bars are oriented inward, facing the components within the cabinet.
Use R5000 ERM(s) with the R5000 UPS. Use R5500 XR ERM(s) with the R5500 XR UPS.
For complete information and specifications on the R5000 UPS, contact your HP representative or refer to the HP UPS R5000 User Guide.
If you install a UPS other than the R5000 UPS or R5500 XR UPS in each modular cabinet of an Integrity NonStop NS16000 series system, these requirements must be met to ensure the system can survive a total AC power failure:
The UPS output voltage can support the HP PDU input voltage requirements.
The UPS phase output matches the PDU phase input. Both single-phase and 3-phase output UPSs are supported. Both single-phase and 3-phase input HP PDUs are supported.
The UPS output can support the targeted system in the event of an AC power failure. Calculate each cabinet load to ensure the UPS can support a proper ride-through time in the event of a total AC power failure.
NOTE: A UPS other than the HP R5000 UPS or HP R5500 XR UPS will not be able to utilize the OSM power failure support function.
If your applications require a UPS that supports the entire system or even a UPS or motor generator for all computer and support equipment in the site, you must plan the site's electrical infrastructure accordingly.
Cooling and Humidity Control
Do not rely on an intuitive approach to cooling design or on simply achieving an energy balance, that is, summing the total power dissipation from all the hardware and sizing a comparable air conditioning capacity. Today's high-performance servers use semiconductors that integrate multiple functions on a single chip with very high power densities. These chips, plus high-power-density mass storage and power supplies, are mounted in ultra-thin server and storage enclosures, and then deployed into computer racks in large numbers. This higher concentration of devices results in localized heat, which increases the potential for hot spots that can damage the equipment. Additionally, variables in the installation site layout can adversely affect air flows and create hot spots by allowing hot and cool air streams to mix. Studies have shown that above 70 F (20 C), every increase of 18 F (10 C) reduces long-term electronics reliability by 50%.
Cooling airflow through each enclosure in the Integrity NonStop NS16000 series server is front-to-back. Because of high heat densities and hot spots, an accurate assessment of air flow around and through the server equipment and specialized cooling design is essential for reliable server operation.
For an airflow assessment, consult with your HP cooling consultant or your heating, ventilation, and air conditioning (HVAC) engineer.
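The reliability rule of thumb cited above lends itself to a quick what-if estimate. The following Python sketch is an illustration only, not an HP-supplied formula; it simply restates the preceding statement that long-term electronics reliability falls by roughly 50% for each 10 C (18 F) of inlet temperature above about 20 C (70 F).

# Rough illustration, not an HP formula: the preceding rule of thumb says
# long-term electronics reliability falls by 50% for every 10 C (18 F) of
# inlet temperature above roughly 20 C (70 F).

def relative_reliability(inlet_temp_c: float, baseline_c: float = 20.0) -> float:
    """Return a relative long-term reliability factor (1.0 at the baseline)."""
    if inlet_temp_c <= baseline_c:
        return 1.0
    return 0.5 ** ((inlet_temp_c - baseline_c) / 10.0)

if __name__ == "__main__":
    for t in (20, 25, 30, 35):
        print(f"{t} C inlet -> {relative_reliability(t):.2f} x baseline reliability")

Under this rule of thumb, for example, a sustained 30 C inlet corresponds to roughly half the baseline long-term reliability.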
NOTE: Failure of site cooling with the server continuing to run can cause rapid heat buildup and excessive temperatures within the hardware. Excessive internal temperatures can result in full or partial system shutdown. Ensure that the site's cooling system remains fully operational when the server is running.
Because each modular cabinet houses a unique combination of enclosures, use the Heat Dissipation Specifications and Worksheet (page 60) to calculate the total heat dissipation for the hardware installed in each cabinet. For air temperature levels at the site, refer to Operating Temperature, Humidity, and Altitude (page 61).
Weight
Modular cabinets for Integrity NonStop NS16000 series servers have a footprint and height comparable to NonStop S-series servers with stacked enclosures. However, each populated modular cabinet in Integrity NonStop NS16000 series servers can be more than twice as heavy as a NonStop S-series server stack. Because each modular cabinet houses a unique combination of enclosures, total weight must be calculated based on what is in the specific cabinet, as described in Modular Cabinet and Enclosure Weights With Worksheet (page 59).
Flooring
Integrity NonStop NS16000 series servers can be installed either on the site's floor with the cables entering from above the equipment or on raised flooring with power and I/O cables entering from underneath. Because cooling airflow through each enclosure in the modular cabinets is front-to-back, raised flooring is not required for system cooling.
The site floor structure and any raised flooring (if used) must be able to support the total weight of the installed computer system as well as the weight of the individual modular cabinets and their enclosures as they are moved into position. To determine the total weight of each modular cabinet with its installed enclosures, refer to Modular Cabinet and Enclosure Weights With Worksheet (page 59). For your site's floor system, consult with your HP site preparation specialist or an appropriate floor system engineer.
If raised flooring is to be used, the design of the Integrity NonStop NS16000 series modular cabinet is optimized for placement on 24-inch floor panels.
Dust and Pollution Control
NonStop servers do not have air filters. Any computer equipment can be adversely affected by dust and microscopic particles in the site environment. Airborne dust can blanket electronic components on printed circuit boards, inhibiting cooling airflow and causing premature failure from excess heat, humidity, or both. Metallically conductive particles can short circuit electronic components. Tape drives and some other mechanical devices can experience failures resulting from airborne abrasive particles.
For recommendations to keep the site as free of dust and pollution as possible, consult with your heating, ventilation, and air conditioning (HVAC) engineer or your HP site preparation specialist.
Zinc Particulates
Over time, fine whiskers of pure metal can form on electroplated zinc, cadmium, or tin surfaces such as aged raised flooring panels and supports. If these whiskers are disturbed, they can break off and become airborne, possibly causing computer failures or operational interruptions. This metallic particulate contamination is a relatively rare but possible threat. Kits are available to test for metallic particulate contamination, or you can request that your site preparation specialist or HVAC engineer test the site for contamination before installing any electronic equipment.
Space for Receiving and Unpacking
Identify areas that are large enough to receive and to unpack the system from its shipping cartons and pallets. Be sure to allow adequate space to remove the system equipment from the shipping pallets using the supplied ramps. Also be sure adequate personnel are present to remove each cabinet from its shipping pallet and to safely move it to the installation site.
WARNING! A fully populated cabinet is unstable when moving down the unloading ramp from its shipping pallet. Arrange for enough personnel to stabilize each cabinet during removal from the pallet and to prevent the cabinet from falling. A falling cabinet can cause serious or fatal personal injury.
Ensure sufficient pathways and clearances for moving the server equipment safely from the receiving and unpacking areas to the installation site. Verify that door and hallway width and height as well as floor and elevator loading will accommodate not only the server equipment but also all required personnel and lifting or moving devices. If necessary, enlarge or remove any obstructing doorway or wall.
All modular cabinets have small casters to facilitate moving them on hard flooring from the unpacking area to the site. Because of these small casters, rolling modular cabinets along carpeted or tiled pathways might be difficult. If necessary, plan for a temporary hard floor covering in affected pathways for easier movement of the equipment.
For physical dimensions of the server equipment, refer to Dimensions and Weights (page 56).
Operational Space
When planning the layout of the server site, use the equipment dimensions, door swing, and service clearances listed in Dimensions and Weights (page 56). Because the location of the lighting fixtures and electrical outlets affects servicing operations, consider an equipment layout that takes advantage of existing lighting and electrical outlets. Also consider the location and orientation of current or future air conditioning ducts and the airflow direction, and eliminate any obstructions to equipment intake or exhaust airflow. Refer to Cooling and Humidity Control (page 34).
Space planning should also include the possible addition of equipment or other changes in space requirements. Depending on the current or future equipment installed at your site, layout plans can also include provisions for:
Channels or fixtures used for routing data cables and power cables
Access to air conditioning ducts, filters, lighting, and electrical power hardware
Communications cables, patch panels, and switch equipment
Power conditioning equipment
Storage area or cabinets for supplies, media, and spare parts
3 System Installation Specifications
This section provides the specifications necessary for planning your system installation site.
Modular Cabinets
The modular cabinet is an EIA-standard 19-inch, 42U rack for mounting modular components. The modular cabinet comes equipped with front and rear doors and includes a rear extension that makes it deeper than some industry-standard racks. The Power Distribution Units (PDUs) are mounted along the rear extension without occupying any U-space in the cabinet and are oriented inward, facing the components within the modular cabinet.
NOTE: For instructions on grounding the modular cabinet's G2 rack using the HP Rack Grounding Kit (AF074A), ask your service provider to refer to the instructions in the HP 10000 G2 Series Rack Options Installation Guide. This manual is available at: http://bizsupport1.austin.hp.com/bc/docs/support/supportmanual/c01493702/c01493702.pdf.
Depending on your NS16000 power configuration, your system uses one of these PDU types:
Monitored Single-Phase PDUs (page 37)
Monitored Three-Phase PDUs (page 42)
Modular Three-Phase PDUs (page 47)
NOTE: If your system includes the optional rackmounted HP R5000 UPS, the modular cabinet will have one PDU located on the rear right side and four extension bars on the rear left side. To provide redundancy, components are plugged into the right-side PDU and the extension bars. Each extension bar is plugged into the UPS.
Each PDU is wired to distribute the load segments to its receptacles.
CAUTION: If you are installing NonStop NS16000 series enclosures in a modular cabinet, balance the current load among the available load segments. Using only one of the available load segments, especially for larger systems, can cause unbalanced loading and might violate applicable electrical codes. Connecting the two power plugs from an enclosure to the same load segment causes failure of the hardware if that load segment fails.
Monitored Single-Phase PDUs
Two monitored single-phase power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the modular cabinet. The PDUs are oriented inward, facing the components within the modular cabinet. Each PDU is 60 inches long and has 39 AC receptacles, three circuit breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the modular cabinet at either the top or bottom rear corners of the cabinet, depending on the site's power feed needs.
For information about specific characteristics for PDUs factory-installed in monitored single-phase cabinets, refer to:
AC Power Feeds, Monitored Single-Phase PDUs (page 38)
Input and Output Power Characteristics, Monitored Single-Phase PDUs (page 41)
Branch Circuits and Circuit Breakers, Monitored Single-Phase PDUs (page 42)
Each single-phase PDU in a modular cabinet has: 36 AC receptacles per PDU (12 per segment) - IEC 320 C13 12A receptacle type 3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type 3 circuit-breakers These PDU options are available to receive power from the site AC power source: 200 to 240 V AC, single-phase for North America and Japan 200 to 240 V AC single-phase for International Each PDU distributes the site AC power as single phase 200 to 240 V AC to the 39 outlets for connecting the power cords from the components mounted in the modular cabinet. AC Power Feeds, Monitored Single-Phase PDUs Power can enter the NonStop NS16000 series server from either the top or the bottom rear of the modular cabinets, depending on how the cabinets are ordered from HP and the AC power feeds are routed at the site. NonStop NS16000 series server cabinets can be ordered with the AC power cords for the PDU installed either: Top: Power and I/O cables are routed from above the modular cabinet. Bottom: Power and I/O cables are routed from below the modular cabinet For information on the modular cabinets, refer to Modular Cabinets (page 37). The AC power feed cables for the PDUs are mounted to exit the modular cabinet at either the top or bottom rear corners of the cabinet depending on what is ordered for the site power feed. Top AC Power Feed, Monitored Single-Phase PDUs Figure 2 shows the AC power feed cables on PDUs for AC feed at the top of the cabinet: Figure 2 Top AC Power Feed, Monitored Single-Phase PDUs 38 System Installation Specifications
Bottom AC Power Feed, Monitored Single-Phase PDUs Figure 3 shows the AC power feed cables on PDUs for AC feed at the bottom of the cabinet and the AC power outlets along the PDU. These power outlets face in toward the cabinet: Figure 3 Bottom AC Power Feed, Monitored Single-Phase PDUs Top AC Power Feed When Optional UPS and ERM are Installed If your system includes the optional rackmounted HP R5000 UPS, the modular cabinet will have one PDU located on the rear right side and four extension bars on the rear left side. The PDU and extension bars are oriented inward, facing the components within the modular cabinet. To provide redundancy, components are plugged into the right-side PDU and the extension bars. Each extension bar is plugged into the UPS. Figure 4 shows the AC power feed cables for the PDU and UPS for AC power feed from the top of the cabinet when the optional UPS and ERM are installed. Also see UPS and ERM (Optional) (page 131). Monitored Single-Phase PDUs 39
Figure 4 Top AC Power Feed When Optional UPS and ERM are Installed Bottom AC Power Feed When Optional UPS and ERM are Installed If your system includes the optional rackmounted HP R5000 UPS, the modular cabinet will have one PDU located on the rear right side and four extension bars on the rear left side. The PDU and extension bars are oriented inward, facing the components within the modular cabinet. To provide redundancy, components are plugged into the right-side PDU and the extension bars. Each extension bar is plugged into the UPS. Figure 5 shows the AC power feed cables for the PDU and UPS for AC power feed from the bottom of the cabinet when the optional UPS and ERM are installed. Also see UPS and ERM (Optional) (page 131). 40 System Installation Specifications
Figure 5 Bottom AC Power Feed When Optional UPS and ERM are Installed Input and Output Power Characteristics, Monitored Single-Phase PDUs The cabinet includes two monitored single-phase PDUs. North America and Japan: 200 to 240 V AC, Monitored Single-Phase PDUs The North America and Japan PDU power characteristics are: PDU input characteristics 200 to 240 V AC, single phase, 40A RMS, 3-wire 50/60Hz Non-NEMA Locking CS8265C, 50A input plug 6.5 feet (2 m) attached power cord PDU output characteristics 3 circuit-breaker-protected 20A load segments 36 AC receptacles per PDU (12 per segment) - IEC 320 C13 12A receptacle type 3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type Monitored Single-Phase PDUs 41
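The output characteristics above (three circuit-breaker-protected 20 A load segments per PDU) translate directly into a simple load-balancing check. The Python sketch below is a planning illustration only, not an HP sizing tool: the cord list and per-cord VA figures are hypothetical placeholders (roughly half of an enclosure's maximum VA, since an enclosure's cords are split between the two PDUs), and 208 V is an assumed nominal line voltage. Local codes and applicable regulations still govern the allowable continuous load on each segment and branch circuit.

# Minimal sketch, not an HP sizing tool: distribute the power cords plugged
# into one monitored single-phase PDU across its three 20 A load segments and
# check the resulting current per segment. The cord list and VA values are
# hypothetical placeholders; 208 V is an assumed nominal line voltage.

NOMINAL_VOLTS = 208
SEGMENT_BREAKER_AMPS = 20.0   # per the PDU output characteristics above

# (cord description, maximum VA drawn by this cord)
cords = [
    ("Blade Element cord", 240),
    ("Processor switch cord", 100),
    ("IOAM cord", 133),
    ("FC disk module cord", 55),
]

segment_amps = {1: 0.0, 2: 0.0, 3: 0.0}
for i, (_, va) in enumerate(cords):
    segment = (i % 3) + 1                 # simple round-robin assignment
    segment_amps[segment] += va / NOMINAL_VOLTS

for segment, amps in segment_amps.items():
    status = "OK" if amps < SEGMENT_BREAKER_AMPS else "over breaker rating"
    print(f"Segment {segment}: {amps:.1f} A ({status})")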
International: 200 to 240 V AC, Monitored Single-Phase PDUs
The international PDU power characteristics are:
PDU input characteristics
200 to 240 V AC, single phase, 32A RMS, 3-wire
50/60Hz
IEC309 3-pin, 32A input plug
6.5 feet (2 m) attached harmonized power cord
PDU output characteristics
3 circuit-breaker-protected 20A load segments
36 AC receptacles per PDU (12 per segment) - IEC 320 C13 12A receptacle type
3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type
Branch Circuits and Circuit Breakers, Monitored Single-Phase PDUs
Modular cabinets for the NonStop NS16000 series system contain two PDUs. In cabinets without the optional rack-mounted UPS, each of the two PDUs requires a separate branch circuit of these ratings:

Region                    Volts (Phase-to-Phase)   Amps (see following CAUTION)
North America and Japan   200 to 240               50
International             200 to 240               32 (1)

1 Category D circuit breaker is required.

CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations.
NOTE: If your system includes the optional rackmounted HP R5000 UPS, the modular cabinet will have one PDU located on the rear right side and four extension bars on the rear left side. To provide redundancy, components are plugged into the right-side PDU and the extension bars. Each extension bar is plugged into the UPS.
Branch circuit requirements vary by the input voltage and the local codes and applicable regulations regarding maximum circuit and total distribution loading. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rackmounted HP R5000 UPS (see Circuit Breaker Ratings for UPS (page 54)).
Monitored Three-Phase PDUs
Two monitored three-phase power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the modular cabinet. The PDUs are oriented inward, facing the components within the modular cabinet. Each PDU is 60 inches long and has 39 AC receptacles, three circuit breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the modular cabinet at either the top or bottom rear corners of the cabinet, depending on the site's power feed needs.
For information about specific characteristics for PDUs factory-installed in monitored three-phase cabinets, refer to AC Power Feeds, Monitored Three-Phase PDUs (page 43) Input and Output Power Characteristics, Monitored Three-Phase PDUs (page 46) Branch Circuits and Circuit Breakers, Monitored Three-Phase PDUs (page 47) Each three-phase monitored PDU in a modular cabinet has: 36 AC receptacles per PDU (12 per segment) - IEC 320 C13 12A receptacle type 3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type 3 circuit-breakers These PDU options are available to receive power from the site AC power source: 200 to 240 V AC, three-phase delta for North America and Japan 380 to 415 V AC, three-phase wye for International Each PDU distributes the site AC power as three-phase 200 to 240 V AC to the 39 outlets for connecting the power cords from the components mounted in the modular cabinet. AC Power Feeds, Monitored Three-Phase PDUs Power can enter the NonStop NS16000 series server from either the top or the bottom rear of the modular cabinets, depending on how the cabinets are ordered from HP and the AC power feeds are routed at the site. NonStop NS16000 series server cabinets can be ordered with the AC power cords for the PDU installed either: Top: Power and I/O cables are routed from above the modular cabinet. Bottom: Power and I/O cables are routed from below the modular cabinet The AC power feed cables for the PDUs are mounted to exit the modular cabinet at either the top or bottom rear corners of the cabinet depending on what is ordered for the site power feed. Top AC Power Feed, Monitored Three-Phase PDUs Figure 6 shows the AC power feed cables on PDUs for AC feed at the top of the cabinet: Monitored Three-Phase PDUs 43
Figure 6 Top AC Power Feed, Monitored Three-Phase PDUs Bottom AC Power Feed, Monitored Three-Phase PDUs Figure 7 shows the AC power feed cables on PDUs for AC feed at the bottom of the cabinet and the AC power outlets along the PDU. These power outlets face in toward the cabinet: Figure 7 Bottom AC Power Feed, Monitored Three-Phase PDUs Top AC Power Feed When Optional UPS and ERM are Installed If your system includes the optional rackmounted HP UPS, the modular cabinet will have one PDU located on the rear right side and four extension bars on the rear left side. The PDU and extension 44 System Installation Specifications
bars are oriented inward, facing the components within the modular cabinet. To provide redundancy, components are plugged into the right-side PDU and the extension bars. Each extension bar is plugged into the UPS. Figure 8 shows the AC power feed cables for the PDU and UPS for AC power feed from the top of the cabinet when the optional UPS and ERM are installed. Also see UPS and ERM (Optional) (page 131). Figure 8 Top AC Power Feed When Optional UPS and ERM are Installed Bottom AC Power Feed When Optional UPS and ERM are Installed If your system includes the optional rackmounted HP UPS, the modular cabinet will have one PDU located on the rear right side and four extension bars on the rear left side. The PDU and extension bars are oriented inward, facing the components within the modular cabinet. To provide redundancy, components are plugged into the right-side PDU and the extension bars. Each extension bar is plugged into the UPS. Figure 9 shows the AC power feed cables for the PDU and UPS for AC power feed from the bottom of the cabinet when the optional UPS and ERM are installed. Also see UPS and ERM (Optional) (page 131). Monitored Three-Phase PDUs 45
Figure 9 Bottom AC Power Feed When Optional UPS and ERM are Installed Input and Output Power Characteristics, Monitored Three-Phase PDUs The cabinet includes two monitored three-phase PDUs. North America and Japan: 200 to 240 V AC, Monitored Three-Phase PDUs The North America and Japan PDU power characteristics are: PDU input characteristics 200 to 240 V AC, 3 phase delta, 30A, 4-wire 50/60Hz NEMA L15-30 input plug 6.5 feet (2 m) attached power cord PDU output characteristics 3 circuit-breaker-protected 13.86A load segments 36 AC receptacles per PDU (12 per segment) - IEC 320 C13 12A receptacle type 3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type 46 System Installation Specifications
International: 380 to 415 V AC, Monitored Three-Phase PDUs
The international PDU power characteristics are:
PDU input characteristics
380 to 415 V AC, 3-phase Wye, 16A RMS, 5-wire
50/60Hz
IEC309 5-pin, 16A input plug
6.5 feet (2 m) attached harmonized power cord
PDU output characteristics
3 circuit-breaker-protected 16A load segments
36 AC receptacles per PDU (12 per segment) - IEC 320 C13 12A receptacle type
3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type
Branch Circuits and Circuit Breakers, Monitored Three-Phase PDUs
Modular cabinets for the NonStop NS16000 series system that use a three-phase power configuration with monitored three-phase PDUs contain two PDUs. In cabinets without the optional rack-mounted UPS, each of the two PDUs requires a separate branch circuit of these ratings:

Region                    Volts (Phase-to-Phase)   Amps (see following CAUTION)
North America and Japan   200 to 240               30
International             380 to 415               16 (1)

1 Category D circuit breaker is required.

CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations.
Branch circuit requirements vary by the input voltage and the local codes and applicable regulations regarding maximum circuit and total distribution loading. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rackmounted UPS.
Modular Three-Phase PDUs
Two three-phase modular power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the modular cabinet. Each 1U rack-mounted modular PDU comes with four modular PDU extension bars. The PDUs are oriented facing each other within the rack. Each PDU has 28 AC receptacles, six circuit breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the modular cabinet at either the top or bottom rear corners of the cabinet, depending on the site's power feed needs.
For information about specific characteristics for PDUs factory-installed in modular three-phase cabinets, refer to:
AC Power Feeds, Modular Three-Phase PDUs (page 48)
Input and Output Power Characteristics, Modular Three-Phase PDUs (page 52)
Branch Circuits and Circuit Breakers, Modular Three-Phase PDUs (page 53)
Each three-phase modular PDU in a modular cabinet has: 28 AC receptacles per PDU (7 per extension bar) - IEC 320 C13 12A receptacle type 6 circuit-breakers These PDU options are available to receive power from the site AC power source: 200 to 240 V AC, three-phase delta for North America and Japan 380 to 415 V AC, three-phase wye for International Each PDU distributes site three-phase power to 34 single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the modular cabinet. AC Power Feeds, Modular Three-Phase PDUs Power can enter the NonStop NS16000 series server from either the top or the bottom rear of the modular cabinets, depending on how the cabinets are ordered from HP and the AC power feeds are routed at the site. NonStop NS16000 series server cabinets can be ordered with the AC power cords for the PDU installed either: Top: Power and I/O cables are routed from above the modular cabinet. Bottom: Power and I/O cables are routed from below the modular cabinet The AC power feed cables for the PDUs are mounted to exit the modular cabinet at either the top or bottom rear corners of the cabinet depending on what is ordered for the site power feed. Top AC Power Feed, Modular Three-Phase PDUs Figure 10 shows the three-phase modular PDUs with AC feed at the top of the cabinet. 48 System Installation Specifications
Figure 10 Top AC Power Feed, Modular Three-Phase PDUs Bottom AC Power Feed, Modular Three-Phase PDUs Figure 11 shows the power feed cables on modular three-phase PDUs with AC feed at the bottom of the cabinet and the output connections for the three-phase modular PDU. Modular Three-Phase PDUs 49
Figure 11 Bottom AC Power Feed, Modular Three-Phase PDUs Top AC Power Feed When Optional UPS and ERM are Installed Figure 12 (page 51) shows the AC power feed cables for the PDU and UPS for AC power feed from the top of the cabinet when the optional UPS and ERM are installed. Also see UPS and ERM (Optional) (page 131). 50 System Installation Specifications
Figure 12 Top AC Power Feed, Modular Three-Phase PDU with UPS and ERM Bottom AC Power Feed When Optional UPS and ERM are Installed Figure 13 (page 52) shows the AC power feed cables for the PDU and UPS for AC power feed from the bottom of the cabinet when the optional UPS and ERM are installed. Also see UPS and ERM (Optional) (page 131). Modular Three-Phase PDUs 51
Figure 13 Bottom AC Power Feed, Modular Three-Phase PDU with UPS and ERM Input and Output Power Characteristics, Modular Three-Phase PDUs The cabinet includes two modular three-phase PDUs. North America and Japan: 200 to 240 V AC, Modular Three-Phase PDUs and Extension Bars The North America and Japan PDU power characteristics are: PDU input characteristics 200 to 240 V AC, 3-phase delta, 30A, 4-wire 50/60Hz NEMA L15-30 input plug 12 feet (3.6 m) attached power cord PDU output characteristics 6 IEC 320 C19 receptacles per PDU with 20A circuit-breaker labels (L1, L2, L3, L4, L5, and L6) 52 System Installation Specifications
Extension bar input characteristics
200 to 240 V AC, 3-phase delta, 16A RMS, 4-wire
50/60Hz
IEC 320 C20 input plug
6.5 feet (2.0 m) attached power cord
Extension bar output characteristics
7 IEC 320 C13 receptacles per extension bar with 12A maximum per outlet
International: 380 to 415 V AC, Modular Three-Phase PDUs and Extension Bars
The international PDU power characteristics are:
PDU input characteristics
380 to 415 V AC, 3-phase Wye, 16A RMS, 5-wire
50/60Hz
IEC309 5-pin, 16A input plug
12 feet (3.6 m) attached power cord
PDU output characteristics
6 AC IEC 320 C19 receptacles per PDU with 20A circuit-breaker labels (L1, L2, L3, L4, L5, and L6)
Extension bar input characteristics
200 to 240 V AC, 3-phase delta, 16A max RMS, 4-wire
50/60Hz
IEC 320 C20 input plug
6.5 feet (2.0 m) attached power cord
Extension bar output characteristics
7 AC IEC 320 C13 receptacles per extension bar with 12A maximum per outlet
Branch Circuits and Circuit Breakers, Modular Three-Phase PDUs
Modular cabinets for the NonStop NS16000 series system that use a three-phase power configuration with modular three-phase PDUs contain two PDUs. In cabinets without the optional rack-mounted UPS, each of the two PDUs requires a separate branch circuit of these ratings:

Region                    Volts (Phase-to-Phase)   Amps (see following CAUTION)
North America and Japan   200 to 240               30
International             380 to 415               16 (1)

1 Category D circuit breaker is required.

CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations.
NOTE: If your system includes the optional rackmounted HP R5000 UPS, the modular cabinet will have one PDU located on the rear left side and four extension bars on the rear right side. To provide redundancy, components are plugged into the left-side PDU and the extension bars. Each extension bar is plugged into the UPS.
Branch circuit requirements vary by the input voltage and the local codes and applicable regulations regarding maximum circuit and total distribution loading. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity.
Note that circuit breaker ratings vary if your system includes the optional rackmounted HP R5000 UPS.
Circuit Breaker Ratings for UPS
These ratings apply to systems with the optional rack-mounted HP R5000 Integrated UPS that is used for a single-phase power configuration:

Version                   Operating Voltage Settings   Power Out (VA/Watts)   Input Plug      UPS Input Rating (1)
North America and Japan   200/208 (2)/220/230/240      5000/4500              L6-30P          Dedicated 30 Amp
International             200/208/220/230 (2)/240      5000/4500              IEC 60309 32A   Dedicated 32 Amp

1 The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
2 Factory-default setting

For further information and specifications on the R5000 UPS, refer to the HP UPS R5000 User Guide.
These ratings apply to systems with the optional rack-mounted HP R5500 XR Integrated UPS that is used for a single-phase power configuration:

Version                   Operating Voltage Settings   Power Out (VA/Watts)                      Input Plug       UPS Input Rating (1)
North America and Japan   200/208 (2), 220, 230, 240   5000/4500                                 L6-30P           Dedicated 30 Amp
International             200, 230 (2), 240            6000/5400 (5000/4500 if set at 200/208)   IEC-309 32 Amp   Dedicated 30 Amp

1 The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
2 Factory-default setting

For further information and specifications on the R5500 XR UPS, refer to the HP R5500 UPS User Guide.
PDU Strapping Configurations
PDUs are available in four static strapping configurations that are factory-installed in a modular cabinet. The specific PDU strapping configuration for a particular site depends on the type and voltage of AC power at the intended installation site for the system.
Grounding
A safety ground/protective earth conductor is required for each AC service entrance to the NonStop server equipment. This ground must comply with local codes and any other applicable regulations.
Enclosure AC Input
Enclosures (NonStop Blade Element, processor switch, IOAM, and so forth) require:

Specification           Value
Nominal input voltage   200/208/220/230/240 V AC RMS
Voltage range (1)       180-264 V AC
Nominal line frequency   50 or 60 Hz
Frequency ranges         47-53 Hz or 57-63 Hz
Number of phases         1

1 Voltage range for the maintenance switch is 200-240 V AC.

Each PDU is wired to distribute the load segments to its receptacles. Factory-installed enclosures are connected to the PDUs for a balanced load among the load segments.
CAUTION: If you are installing Integrity NonStop NS16000 series system enclosures in a modular cabinet, balance the current load among the available load segments. Using only one of the available load segments, especially for larger systems, can cause unbalanced loading and might violate applicable electrical codes. Connecting the two power plugs from an enclosure to the same load segment causes failure of the hardware if that load segment fails.
Enclosure Power Loads
The total power and current load for each modular cabinet depends on the number and type of enclosures installed in it. Therefore, the total load is the sum of the loads for all enclosures installed. For examples of calculating the power and current load for various enclosure combinations, refer to Calculating Specifications for Enclosure Combinations (page 62).
In normal operation, the AC power is split equally between the two PDUs in the modular cabinet. However, if one of the two AC power feeds fails, the remaining AC power feed and PDU must carry the power for all enclosures in that cabinet.
NOTE: If your system includes the optional rackmounted HP R5000 UPS, the modular cabinet will have one PDU located on the rear left side and four extension bars on the rear right side. To provide redundancy, components are plugged into the left-side PDU and the extension bars. Each extension bar is plugged into the UPS.
Power and current specifications for each type of enclosure and disk drive are:

Enclosure Type                                          AC Power Lines per Enclosure (1)   Typical Power Consumption (VA)   Maximum Power Consumption (VA)   Peak Inrush Current (amps)
NonStop Blade Element chassis                           2                                  460                              480                              17
Processor assembly, 2P                                  -                                  125                              150                              -
Processor assembly, 4P                                  -                                  250                              300                              -
LSU (with four LSU boards) (2)                          2                                  220                              220                              67
Processor switch (3)                                    2                                  200                              200                              5
DL385 G2 or G5 IP, Telco, or Storage CLIM               2                                  296                              364                              15
DL380 G6 Storage CLIM                                   2                                  135                              225                              15
DL380 G6 networking CLIM (IP, Telco, or IB CLIM)        2                                  130                              200                              15
MSA70 SAS disk enclosure (empty)                        2                                  125                              180                              5
D2700 SAS disk enclosure (empty)                        2                                  75                               125                              5
SAS 2.5 in., 10k rpm disk drive                         -                                  5                                9                                -
SAS 2.5 in., 15k rpm disk drive                         -                                  4                                7                                -
SAS SSD                                                 -                                  6                                6                                1.5 (5V current), 1.0 (12V current)
IOAM enclosure (4)                                      4                                  530                              530                              68
Fibre Channel disk module (no disk)                     2                                  110                              110                              14
Fibre Channel disk drive                                -                                  17                               17                               -
Maintenance switch (Ethernet) (5)                       1                                  20                               20                               4
Rack-mounted system console system (NSCR4)              1                                  176                              176                              -
Rack-mounted system console system (NSCR110)            1                                  105                              115                              2
Rack-mounted console system unit keyboard and monitor   1                                  28                               28                               4

1 Half of the plugs for an enclosure must be connected to the left-side PDU and the other half connected to the right-side PDU or extension bars (if the optional UPS is installed). PDUs must be supplied from separate branch circuits.
2 Measured with four LSU optical adapters installed and active. Each LSU logic board consumes 55 W.
3 Measured with three PICs installed and active. Each MMF PIC or SMF PIC consumes 13 W.
4 Measured with 10 Fibre Channel ServerNet adapters installed and active. Each FCSA or G4SA consumes 30 W.
5 Maintenance switch has only one plug. If a UPS is installed in the modular cabinet, the maintenance switch plug must be connected to the extension bars on the right side of the modular cabinet.

Dimensions and Weights
This subsection provides information about the dimensions and weights for modular cabinets and enclosures installed in a modular cabinet and covers these topics:
Service Clearances for the Modular Cabinet (page 57)
Unit Sizes (page 57)
Modular Cabinet Physical Specifications (page 58)
Enclosure Dimensions (page 58)
Modular Cabinet and Enclosure Weights With Worksheet (page 59)
Service Clearances for the Modular Cabinet
Plan View From Above the Modular Cabinet
Aisles: 6 feet (182.9 centimeters)
Front: 3 feet (91.4 centimeters)
Rear: 3 feet (91.4 centimeters)
Unit Sizes

Enclosure Type                                 Height (U)
Modular cabinet                                42
NonStop Blade Element                          5
Processor switch                               3
LSU                                            4
CLIM (1)                                       2
CLIM cable management Ethernet patch panel     1
SAS disk enclosure                             2
IOAM                                           11
Fibre Channel disk module                      3
Maintenance switch (Ethernet)                  1
R5000 UPS                                      3
R5000 ERM (extended runtime module)            3
Rackmount console with keyboard and monitor    2

1 For RVU requirements for CLIMs, see Table 5 (page 116).

Modular Cabinet Physical Specifications

Modular Cabinet (HP 10000 G2 Series rack with extension, doors, and side panels)
Item                    Height (in / cm)             Width (in / cm)              Depth (in / cm)              Weight
Rack                    78.7, 78.5 / 199.9, 199.4    24.0, 23.62 / 60.96, 60.0    46.7, 42.5 / 118.6, 108.0    Depends on the enclosures installed. Refer to Modular Cabinet and Enclosure Weights With Worksheet (page 59).
Front door              78.5 / 199.4                 23.5 / 59.7                  3.2 / 8.1
Left-rear door          78.5 / 199.4                 11.0 / 27.9                  1.0 / 2.5
Right-rear door         78.5 / 199.4                 12.0 / 30.5                  1.0 / 2.5
Shipping (palletized)   86.5 / 219.71                35.75 / 90.80                54.25 / 137.80

Enclosure Dimensions

Enclosure Type                                             Height (in / cm)   Width (in / cm)   Depth (in / cm)
NonStop Blade Element                                      8.8 / 22.2         19.0 / 48.3       27.0 / 68.6
Processor switch                                           5.3 / 13.3         19.0 / 48.3       24.5 / 62.2
LSU                                                        7.0 / 17.9         19.0 / 48.3       27.0 / 68.6
DL385 G2 or G5 CLIM (1)                                    3.4 / 8.6          17.5 / 44.5       26 / 66
DL380 G6 CLIM (1)                                          3.4 / 8.6          17.5 / 44.6       27.3 / 69.2
CLIM cable management Ethernet patch panel                 1.7 / 4.4          18.9 / 47.9       28.3 / 72.1
MSA70 SAS disk enclosure                                   3.4 / 8.8          17.6 / 44.8       23.2 / 59
D2700 SAS disk enclosure                                   3.5 / 8.8          18.0 / 45.7       22.3 / 56.6
IOAM                                                       19.25 / 48.9       19.0 / 48.3       27.0 / 68.6
Fibre Channel disk module                                  5.2 / 13.1         19.9 / 50.5       17.6 / 44.8
Maintenance switch (Ethernet)                              1.73 / 4.39        17.4 / 44.2       9.3 / 23.62
Rackmount console system unit                              1.7 / 4.3          16.8 / 42.7       24.0 / 60.9
Rackmount console system unit with keyboard and display    1.7 / 4.3          15.6 / 39.6       17.0 / 43.2
R5000 UPS                                                  5.0 / 12.7         17.2 / 43.7       29.3 / 74.4
R5000 ERM                                                  5.0 / 12.7         17.24 / 43.8      28.3 / 71.9
R5500 XR UPS                                               5.1 / 13.0         17.5 / 44.5       26.0 / 66.0
R5500 XR ERM                                               5.1 / 13.0         17.5 / 44.5       25.1 / 63.8
1 For RVU requirements for CLIMs, see Table 5 (page 116).

Modular Cabinet and Enclosure Weights With Worksheet
The total weight of each modular cabinet is the sum of the weights of the cabinet plus each enclosure installed in it. Use this worksheet to determine the total weight:

Enclosure Type                                       Number of Enclosures   Weight (lbs)   Weight (kg)   Total (lbs)   Total (kg)
42U (single-phase with modular PDUs) (1)                                    328            148.8
42U (three-phase with modular PDUs) (1)                                     328            149
42U (three-phase with monitored PDUs) (1)                                   303            137
NonStop Blade Element                                                       112            50.8
Processor switch                                                            70             32.8
LSU                                                                         96             43.5
DL385 G2 or G5 CLIM (2)                                                     53             24
DL380 G6 CLIM (2)                                                           58             26
CLIM cable management Ethernet patch panel                                  19             8.6
MSA70 3GB protocol SAS disk enclosure (empty)                               43             19.4
D2700 6GB protocol SAS disk enclosure (empty)                               38             17
2.5 in SAS HDD, 3GB protocol, 72 GB, 15K rpm                                0.5            0.20
2.5 in SAS HDD, 3GB protocol, 146 GB, 10K rpm                               1              0.45
2.5 in SAS HDD, 6GB protocol, 300 GB, 10K rpm                               1              0.45
2.5 in SAS SSD                                                              0.52           0.24
Disk blank                                                                  0.1            0.04
IOAM                                                                        200            90.7
Fibre Channel disk module                                                   78             35.4
Maintenance switch (Ethernet)                                               4.89           2.22
Rackmount console system unit keyboard and display                          34             15.4
R5000 UPS                                                                   126            57
R5000 ERM                                                                   139            63
R5500 XR UPS                                                                160            72.6
R5500 XR ERM                                                                170            77.1
Total                                                                       --             --

1 Modular cabinet weight includes the PDUs and their associated wiring and receptacles.
2 For RVU requirements for CLIMs, see Table 5 (page 116).
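A minimal sketch of the worksheet arithmetic above, in Python: multiply the quantity of each enclosure type by its weight and total the cabinet. The example cabinet contents are arbitrary; the weights (in pounds) are taken from the preceding table.

# Minimal sketch of the weights worksheet above: multiply the quantity of each
# enclosure type by its weight and total the cabinet. Weights (lbs) are taken
# from the preceding table; the cabinet contents are an arbitrary example.

WEIGHT_LBS = {
    "42U cabinet (three-phase with monitored PDUs)": 303,
    "NonStop Blade Element": 112,
    "Processor switch": 70,
    "LSU": 96,
    "IOAM": 200,
    "Maintenance switch (Ethernet)": 4.89,
}

cabinet = [                     # (enclosure type, quantity)
    ("42U cabinet (three-phase with monitored PDUs)", 1),
    ("NonStop Blade Element", 3),
    ("Processor switch", 2),
    ("LSU", 1),
    ("IOAM", 1),
    ("Maintenance switch (Ethernet)", 1),
]

total_lbs = sum(WEIGHT_LBS[name] * qty for name, qty in cabinet)
print(f"Total cabinet weight: {total_lbs:.0f} lbs ({total_lbs * 0.4536:.0f} kg)")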
For examples of calculating the weight for various enclosure combinations, refer to Calculating Specifications for Enclosure Combinations (page 62).
Modular Cabinet Stability
Cabinet stabilizers are required when you have fewer than four cabinets bayed together.
NOTE: Cabinet stability is of special concern when equipment is routinely installed, removed, or accessed within the cabinet. Stability is addressed through the use of leveling feet, baying kits, fixed stabilizers, and/or ballast.
For information about best practices for cabinets, see:
HP 10000 G2 Series Rack User Guide
Best practices for HP 10000 Series and HP 10000 G2 Series Racks
Environmental Specifications
This subsection provides information about environmental specifications and covers these topics:
Heat Dissipation Specifications and Worksheet (page 60)
Operating Temperature, Humidity, and Altitude (page 61)
Nonoperating Temperature, Humidity, and Altitude (page 62)
Cooling Airflow Direction (page 62)
Typical Acoustic Noise Emissions (page 62)
Tested Electrostatic Immunity (page 62)
Heat Dissipation Specifications and Worksheet

Enclosure Type                                Number Installed   Unit Heat (Btu/hour, typical)   Unit Heat (Btu/hour, maximum)   Total (Btu/hour)
NonStop Blade Element chassis                                    1570                            1638
Processor assembly, 2P                                           427                             512
Processor assembly, 4P                                           853                             1024
LSU (with four LSU boards)                                       751                             751
Processor switch (1)                                             682                             682
DL385 G2 or G5 IP, Telco, or Storage CLIM                        1010                            1242
DL380 G6 Storage CLIM                                            461                             768
DL380 G6 networking CLIM                                         444                             682
MSA70 SAS disk enclosure (empty)                                 427                             614
D2700 SAS disk enclosure (empty)                                 256                             427
SAS 2.5 in., 10k rpm HDD                                         17                              31
SAS 2.5 in., 15k rpm HDD                                         14                              23
SAS SSD                                                          19.4                            19.9
IOAM (2)                                                         1808                            1808
Fibre Channel disk module (no disk)                              375                             375
Fibre Channel disk drive                                         58                              58
Maintenance switch (Ethernet) (3)                                68                              68
Rack-mounted system console (NSCR4)                              600                             600
Rack-mounted system console (NSCR110)                            358                             392
Rack-mounted keyboard and display                                96                              96

1 Measured with three PICs installed and active.
2 Measured with 10 Fibre Channel ServerNet adapters installed and active.
3 Maintenance switch has only one plug. If a UPS is installed in the modular cabinet, the maintenance switch plug must be connected to the extension bars on the right side of the modular cabinet.

Operating Temperature, Humidity, and Altitude

Specification                                                                         Operating Range (1)                    Recommended Range (1)     Maximum Rate of Change per Hour
Temperature (all except Fibre Channel disk module, CLIMs, and SAS disk enclosures)    41 to 95 F (5 to 35 C)                 68 to 77 F (20 to 25 C)   9 F (5 C) repetitive; 36 F (20 C) nonrepetitive
Temperature (Fibre Channel disk module, CLIMs, and SAS disk enclosure)                50 to 95 F (10 to 35 C)                -                         1.8 F (1 C) repetitive; 5.4 F (3 C) nonrepetitive
Humidity                                                                              15% to 80%, noncondensing              -                         -
Altitude (2)                                                                          0 to 10,000 feet (0 to 3,048 meters)   -                         -

1 Operating and recommended ranges refer to the ambient air temperature and humidity measured 19.7 in. (50 cm) from the front of the air intake cooling vents.
2 For each 1000 feet (305 m) increase in altitude above 10,000 feet (up to a maximum of 15,000 feet), subtract 1.5 F (0.83 C) from the upper limit of the operating and recommended temperature ranges.
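Footnote 2 of the table above can be expressed as a small helper. The Python sketch below assumes the 95 F upper operating limit from the first table row and applies the stated derating of 1.5 F per 1,000 feet above 10,000 feet, up to the 15,000-foot maximum; it is an illustration only, not an HP tool.

# Sketch of the altitude derating in footnote 2 above: above 10,000 ft (to a
# 15,000 ft maximum), the upper temperature limit drops 1.5 F per additional
# 1,000 ft. The 95 F base limit is taken from the first table row.

def derated_upper_limit_f(altitude_ft: float, base_limit_f: float = 95.0) -> float:
    """Upper operating temperature limit in degrees F after altitude derating."""
    if altitude_ft > 15_000:
        raise ValueError("above the 15,000 ft maximum supported altitude")
    excess_kft = max(0.0, (altitude_ft - 10_000) / 1_000)
    return base_limit_f - 1.5 * excess_kft

if __name__ == "__main__":
    for alt in (5_000, 10_000, 12_500, 15_000):
        print(f"{alt:>6} ft -> upper limit {derated_upper_limit_f(alt):.2f} F")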
Nonoperating Temperature, Humidity, and Altitude Temperature: Up to 72-hour storage: -40 to 151 F (-40 to 66 C) Up to 6-month storage: -20 to 131 F (-29 to 55 C) Reasonable rate of change with noncondensing relative humidity during the transition from warm to cold Relative humidity: 10% to 80%, noncondensing Altitude: 0 to 40,000 feet (0 to 12,192 meters) Cooling Airflow Direction Each enclosure includes its own forced-air cooling fans or blowers. Air flow for each enclosure enters from the front of the modular cabinet and exhausts at the rear. Typical Acoustic Noise Emissions 70 db(a) (sound pressure level at operator position) Tested Electrostatic Immunity Contact discharge: 8 KV Air discharge: 20 KV Calculating Specifications for Enclosure Combinations Figure 14 (page 63) shows components installed in 42U modular cabinets. Cabinet weight includes the PDUs and their associated wiring and receptacles. Power and thermal calculations assume that each enclosure in the cabinet is fully populated; for example, a NonStop Blade Element with four processors. The power and heat load is less when enclosures are not fully populated, such as a NonStop Blade Element with fewer processors or less memory, a Fibre Channel disk module with fewer disk drives, or an LSU enclosure with fewer LSUs. AC current calculations assume that one PDU delivers all power. In normal operation, the power is split equally between the two PDUs in the cabinet. However, calculate the power load to assume delivery from only one PDU to allow the system to continue to operate if one of the two AC power sources or PDUs fails. NOTE: If your system includes the optional rackmounted HP R5000 UPS, the modular cabinet will have one PDU located on the rear left side and four extension bars on the rear right side. To provide redundancy, components are plugged into the left-side PDU and the extension bars. Each extension bar is plugged into the UPS. Figure 14 has 16 logical processors with two IOAM enclosures, and 14 Fibre Channel disk modules installed in three 42U modular cabinets. The weight, power, and thermal calculations for the components in each cabinet are shown in Table 2 (page 63), Table 3 (page 64), and Table 4 (page 64). For a total thermal load for a system with multiple cabinets, add the heat outputs for all the cabinets in the system. 62 System Installation Specifications
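Before working through the power and thermal examples, the same bookkeeping can be applied to rack space. The Python sketch below (an illustration only) totals rack units for a planned cabinet against the 42U available, using heights from the Unit Sizes table; the example contents match Cabinet One of the duplex example that follows.

# Illustration only: total the rack units for a planned cabinet against the
# 42U available, using heights from the Unit Sizes table. The contents below
# match Cabinet One of the duplex example that follows.

HEIGHT_U = {
    "NonStop Blade Element": 5,
    "LSU": 4,
    "Processor switch": 3,
    "IOAM": 11,
    "Maintenance switch (Ethernet)": 1,
}

planned = [                     # (enclosure type, quantity)
    ("NonStop Blade Element", 3),
    ("LSU", 1),
    ("Processor switch", 2),
    ("IOAM", 1),
    ("Maintenance switch (Ethernet)", 1),
]

used_u = sum(HEIGHT_U[name] * qty for name, qty in planned)
print(f"U-space used: {used_u} of 42 ({42 - used_u} U free)")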
Figure 14 Example Duplex Configuration Table 2 shows the weight, power, and thermal calculations for Cabinet One in Figure 14 (page 63). Table 2 Cabinet One Load Calculations Component Quantity Height (U) Weight Total Volt-amps (VA) BTU/hour lbs kg Typical Power Consumption Maximum Power Consumption Typical Power Consumption Maximum Power Consumption NonStop Blade Element (chassis and 4P processor assembly) 3 15 336 152 2130 2340 7268 7984 LSU with 8 logic boards 1 4 96 44 440 440 1501 1501 Processor switch 2 6 140 64 400 400 1365 1365 IOAM enclosure (with 10 Fibre Channel ServerNet adapters) 1 11 235 107 530 530 1808 1808 Maintenance switch 1 1 5 2 20 20 68 68 Cabinet 1 42 328 149 - - - - Total - 37 1140 518 3520 3730 12011 12727 Calculating Specifications for Enclosure Combinations 63
Table 3 shows the weight, power, and thermal calculations for Cabinet Two in Figure 14 (page 63). Table 3 Cabinet Two Load Calculations Component Quantity Height (U) Weight Total Volt-amps (VA) BTU/hour lbs kg Typical Power Consumption Maximum Power Consumption Typical Power Consumption Maximum Power Consumption NonStop Blade Element (chassis and 4P processor assembly) 3 15 336 152 2130 2340 7268 7984 LSU with 8 logic boards 1 4 96 44 440 440 1501 1501 Fibre Channel disk module with 14 disk drives 7 21 546 248 2436 2436 8312 8312 Rack-mounted System Console (NSCR4) (includes keyboard and monitor) 1 2 41 19 204 204 695 695 Cabinet 1 42 328 149 - - - - Total - 37 1140 612 5210 5420 17776 18492 Table 4 shows the weight, power, and thermal calculations for Cabinet Three in Figure 14 (page 63). Table 4 Cabinet Three Load Calculations Component Quantity Height (U) Weight Total Volt-amps (VA) BTU/hour lbs kg Typical Power Consumption Maximum Power Consumption Typical Power Consumption Maximum Power Consumption NonStop Blade Element (chassis and 4P processor assembly) 2 10 224 102 1420 1560 4845 5323 IOAM enclosure (with 10 Fibre Channel ServerNet adapters) 1 11 235 107 530 530 1808 1808 64 System Installation Specifications
Table 4 Cabinet Three Load Calculations (continued) Component Quantity Height (U) Weight Total Volt-amps (VA) BTU/hour lbs kg Typical Power Consumption Maximum Power Consumption Typical Power Consumption Maximum Power Consumption Fibre Channel disk module with 14 disk drives 7 21 546 248 2436 2436 8312 8312 Cabinet 1 42 328 149 - - - - Total - 37 1333 605 4386 4526 14965 15443 Calculating Specifications for Enclosure Combinations 65
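The cabinet tables above are straightforward to reproduce or adapt for other configurations. The Python sketch below (an illustration only) recomputes the typical volt-ampere and heat totals for Cabinet One from the per-enclosure figures in the Enclosure Power Loads and Heat Dissipation tables; small rounding differences from the printed totals are expected. The 208 V figure used for the worst-case single-PDU current is an assumed nominal line voltage.

# Illustration only: recompute Cabinet One's typical VA and Btu/hour totals
# from the per-enclosure figures in the Enclosure Power Loads and Heat
# Dissipation tables. Small rounding differences from the printed totals are
# expected; 208 V is an assumed nominal line voltage for the current estimate.

cabinet_one = {
    # component: (quantity, typical VA per unit, typical Btu/hour per unit)
    "Blade Element chassis + 4P assembly": (3, 460 + 250, 1570 + 853),
    "LSU with 8 logic boards":             (1, 2 * 220,   2 * 751),
    "Processor switch":                    (2, 200,       682),
    "IOAM (10 FCSAs)":                     (1, 530,       1808),
    "Maintenance switch":                  (1, 20,        68),
}

total_va = sum(qty * va for qty, va, _ in cabinet_one.values())
total_btu = sum(qty * btu for qty, _, btu in cabinet_one.values())
print(f"Cabinet One typical load: {total_va} VA, {total_btu} Btu/hour")
print(f"Worst-case current on one PDU at 208 V: {total_va / 208:.1f} A")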
4 System Configuration Guidelines
This section provides guidelines for Integrity NonStop NS16000 series system configurations. Integrity NonStop NS16000 series systems use a flexible modular architecture. Therefore, almost any configuration of the system's modular components is possible within a few configuration restrictions stated in IOAM Enclosure and Disk Storage Considerations (page 92).
Internal ServerNet Interconnect Cabling
This subsection includes the following topics:
Cable Labeling
Cable Management System
Internal Interconnect Cables
Dedicated Service LAN Cables
Cable Length Restrictions
Internal Cable Product IDs
NonStop Blade Elements to LSUs
LSUs to Processor Switches and Processor IDs
Processor Switch ServerNet Connections
Processor Switches to Networking CLIMs (page 76)
Processor Switches to Storage CLIMs (page 77)
Processor Switches to IOAM Enclosures
FCSA to Fibre Channel Disk Modules
FCSA to Tape Devices
Cable Labeling
Fiber-optic cables provide all ServerNet and other I/O data signal interconnections within Integrity NonStop NS16000 series systems. With the exception of the dedicated service LAN, no copper I/O interconnect cables are used. Although fiber-optic cables provide high-speed, low-latency communications, all of the cables look the same and are the same color, usually orange. To identify correct cable connections to factory-installed hardware, every interconnect cable has a plastic label affixed to each end. Extra sheets of preprinted labels that you can fill in are also provided. These labels are attached to the cable and connector and identify the enclosure and connector to which the cable connects. This label identifies the cable connecting the p-switches at U31 of both cabinets 1 and 2 at slot 3, connectors 1 or 2, which are crosslink connections between the two p-switches:
Each label conveys this information:
Nn: Identifies the node number. One node can include up to six cabinets.
Rn: Identifies the cabinet number within the node.
Un: Identifies the offset that is the physical location of the component within the cabinet. n is the lowest U number in the cabinet that the component occupies.
nn.nn: All but LSU: Identifies the slot location and port connection of the component. LSU: Identifies the position of the logic board and optics adapter pair and the port connection of the optics adapter.
Near: Refers to the information for this end of this cable.
Far: Refers to the information for the other end of this cable.
When you replace a cable and either install or move an enclosure, be sure to update information on the labels at both ends of the cables.
Cable Management System
Integrity NonStop NS16000 series systems include the cable management system (CMS) to protect all power, fiber-optic, and CAT 5e and CAT 6 Ethernet cables within the systems. The CMS maintains a 25 millimeter (1 inch) minimum bend radius for the fiber-optic cables and provides strain relief for all cables. Several Integrity NonStop NS16000 series enclosures, specifically the NonStop Blade Element, p-switch, and LSU, integrate CMS provisions for routing and securing the fiber-optic cables to prevent damaging them when the enclosures are moved out and back into the cabinet for servicing. Additionally, cable spools mounted on the inside of the cabinet structure provide the means for looping and containing free lengths of fiber-optic cables to prevent damage.
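The label fields above decompose mechanically, which can be handy when recording cabling in a spreadsheet or script. This is a minimal Python sketch; the field meanings come from the list above, but the exact one-line text layout ("N1 R1 U31 3.1") is an assumed example, not a prescribed label format.

```python
import re

# Decode one end of a cable label into its N / R / U / slot.port fields.
# The layout string below is a hypothetical rendering of the label fields.
LABEL = re.compile(
    r"N(?P<node>\d+)\s+R(?P<rack>\d+)\s+U(?P<offset>\d+)\s+(?P<slot>\d+)\.(?P<port>\d+)"
)

def decode(label_end: str) -> dict:
    m = LABEL.match(label_end)
    if not m:
        raise ValueError(f"unrecognized label text: {label_end!r}")
    return {name: int(value) for name, value in m.groupdict().items()}

# Near end of the crosslink cable described above: cabinet 1, p-switch at
# U31, slot 3, connector 1.
print(decode("N1 R1 U31 3.1"))
# {'node': 1, 'rack': 1, 'offset': 31, 'slot': 3, 'port': 1}
```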
Internal Interconnect Cables
Integrity NonStop NS16000 series systems use predominantly multimode fiber-optic (MMF) cables for ServerNet interconnection of:
NonStop Blade Elements to LSUs
LSUs to p-switches
P-switches to CLIMs
P-switches to IOAMs
P-switches to IOMF 2 CRUs in NonStop S-series I/O enclosures
P-switches to p-switches (crosslink)
P-switches to ESS disks
FCSA to M8201R Fibre Channel to SCSI router
MMF cables are usually orange with a minimum bend radius of:
Unsheathed: 1.0 inch (25 millimeters)
Sheathed (ruggedized): 4.2 inches (107 millimeters)
You can use fiber-optic cables available from HP, or you can provide your own fiber-optic cables. If you provide your own fiber-optic cables, follow the appropriate HP specification for the type of cable you provide. You can obtain the specification from your HP product representative:
MMF cable: HP specification 526160
SMF cable: HP specification 526020
Verify that the link does not exceed the maximum allowable loss. Use a calibrated optics power source to measure the optics power at the distant end of the cable and then calculate the attenuation. Loss of 10.5 dB or less is acceptable. Attenuation above 10.5 dB might appear to work, but the bit error rate might be unacceptable and require repairing or replacing the cable.
Connections from a p-switch to a Model 6780 cluster switch can use single-mode fiber-optic (SMF) cabling.
Fiber-optic cables use either LC or SC connectors at one or both ends. This illustration shows the connector pair for an LC fiber-optic cable:
This illustration shows the connector pair for an SC fiber cable:
Dedicated Service LAN Cables
The system uses Category 6 (CAT 6), unshielded twisted-pair Ethernet cables for the internal dedicated service LAN and for connections between the G4SA and the application LAN equipment. Category 5e (CAT 5e) cable is also acceptable.
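The link-loss check described under Internal Interconnect Cables above is a subtraction of the measured far-end power from the source power, compared against the 10.5 dB limit. The sketch below is illustrative only; the power values in the example calls are made up.

```python
# Compare measured fiber-optic link attenuation against the 10.5 dB limit
# stated for customer-supplied cables.

MAX_LOSS_DB = 10.5

def link_loss_ok(source_dbm: float, received_dbm: float) -> bool:
    """Return True if the measured attenuation is within the allowed loss."""
    attenuation_db = source_dbm - received_dbm
    return attenuation_db <= MAX_LOSS_DB

print(link_loss_ok(source_dbm=-3.0, received_dbm=-12.0))  # 9.0 dB loss -> True
print(link_loss_ok(source_dbm=-3.0, received_dbm=-14.5))  # 11.5 dB loss -> False (repair or replace)
```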
Cable Length Restrictions
Maximum allowable lengths of cables connecting the modular system components are:
Connection | Fiber Type | Connectors | Maximum Length | Product ID
NonStop Blade Element to LSU enclosure | MMF | LC-LC | 100 m | M8900nnn 1
NonStop Blade Element to NonStop Blade Element | MMF | MTP | 50 m | M8920nnn 1
LSU enclosure to p-switch | MMF | LC-LC | 125 m | M8900nnn 1
P-switch to p-switch crosslink | MMF | LC-LC | 125 m | M8900nnn 1
P-switch to networking CLIM | MMF | LC-LC | 125 m | M8900nnn 1
P-switch to IOAM enclosure | MMF | LC-LC | 125 m | M8900nnn 1
P-switch to CLIM | MMF | LC-LC | 125 m | M8900nnn 1
FCSA to Fibre Channel disk module | MMF | LC-LC | 250 m | M8900nnn 1
FCSA to ESS | MMF | LC-LC | 250 m | M8900nnn 1
FCSA to FC switch | MMF | LC-LC | 250 m | M8900nnn 1
P-switch to cluster switch 6780 | MMF | LC-LC | 100 m | M8900nnn 1
P-switch to cluster switch 6770 | SMF | LC-SC | 80 m | M8922nnn 1
P-switch to NonStop S-series IOMF 2 | MMF | LC-SC | 100 m | M8910nnn 1
DL385 G2 or G5 Storage CLIM SAS HBA port to MSA70 SAS disk enclosure | N.A. | SFF-8470 to SFF-8088 | 6 m | M8905nnn 1
DL380 G6 Storage CLIM SAS HBA port to M8381-25 (D2700) SAS disk enclosure | N.A. | SFF-8088 to SFF-8088 | 6 m | M8906nnn 1
MSA70 SAS disk enclosure to MSA70 SAS disk enclosure 2 | N.A. | SFF-8088 to SFF-8088 | 6 m | M8906nnn 1
DL385 G5 Storage CLIM SAS HBA port to SAS tape (carrier-grade tape only) | N.A. | SFF-8470 to SFF-8470 | 4 m | M8908nnn 1
DL380 G6 Storage CLIM SAS HBA port to SAS tape (carrier-grade tape only) | N.A. | SFF-8088 to SFF-8470 | 4 m | M8905nnn 1
Storage CLIM FC interface to ESS | MMF | LC-LC | 250 m | M8900nnn 1
Storage CLIM FC interface to FC tape | MMF | LC-LC | 250 m | M8900nnn 1
1 nnn indicates the length of the cable in meters. For example, M8900125 is 125 meters long; M8900015 is 15 meters long.
2 Daisy-chaining of D2700 SAS disk enclosures is not supported.
Although a considerable cable length can exist between the modular enclosures in the system, HP recommends placing all cabinets adjacent to each other and bolting them together, with cable length between each of the enclosures as short as possible.
Internal Cable Product IDs
For product IDs, see Cable Types, Connectors, Lengths, and Product IDs (page 150).
NonStop Blade Elements to LSUs
Fiber-optic cables provide communications between each NonStop Blade Element and the LSU as well as between the LSU and the X-fabric and Y-fabric p-switch PICs. The ServerNet X fabric and Y fabric provide the system I/O from the p-switch PICs to the IOAM, with Fibre Channel and high-speed Ethernet links providing connections to storage and communications LANs, WANs, and so forth.
Cable connections between the NonStop Blade Elements and the LSU optics adapters affect proper processor synchronization and rendezvous. For each logical processor element (PE), the port for a particular PE, such as PE1 (NonStop Blade Element port J0) on DMR NonStop Blade Elements A and B or TMR NonStop Blade Elements A, B, and C, must connect to their respective ports on the LSU optics adapters as shown in the example connection diagrams in LSUs to Processor Switches and Processor IDs (page 70). Although you can randomly select LSU optics adapters for fiber-optic cable connections, HP recommends connecting these cables to the LSUs in sequential order as shown in the connection diagrams.
NonStop Blade Element to NonStop Blade Element
Reintegration cables interconnect each of the NonStop Blade Elements within individual duplex or triplex NonStop Blade Complexes using connectors S, T, Q, and R as shown in the illustrations on the next four pages.
LSUs to Processor Switches and Processor IDs
Each NonStop Blade Element contains four processor elements, and each element is a part of a numbered NonStop Blade Complex, such as 0, 1, 2, or 3. The maintenance entity (ME) firmware running in the p-switches assigns a number to each processor element based on its connection from the LSUs to ServerNet via the p-switch ports in slots 10-13. Therefore, fiber-optic cable connections from the LSUs to the p-switch PICs determine the number of each NonStop Blade Complex.
This table lists the default p-switch PIC slot and port coupling to the processor number (a brief sketch of this mapping follows the four cabling diagrams below):
P-switch PIC slot 10: PIC ports 1, 2, 3, 4 correspond to processor numbers 0, 1, 2, 3
P-switch PIC slot 11: PIC ports 1, 2, 3, 4 correspond to processor numbers 4, 5, 6, 7
P-switch PIC slot 12: PIC ports 1, 2, 3, 4 correspond to processor numbers 8, 9, 10, 11
P-switch PIC slot 13: PIC ports 1, 2, 3, 4 correspond to processor numbers 12, 13, 14, 15
The four cabling diagrams on the next pages illustrate the default configuration and connections for a triplex system processor. These diagrams are not for use in installing or cabling the system. For instructions on connecting the cables, see the NonStop NS16000 Series Hardware Installation Manual.
NOTE: Individual NonStop Blade Element enclosures might or might not reside in the same cabinet, depending on the physical configuration of the system. But regardless of which cabinet houses the NonStop Blade Element, LSU, and p-switch enclosures, the default cable interconnections between them will be the same as the examples shown in the next four cabling diagrams.
This figure shows example connections to the default configuration of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 on the p-switch PIC in slot 10, which defines triplex processor numbers 0 to 3. Two p-switches are required, one for the X-fabric and the other for the Y-fabric:
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 11 for triplex processor numbers 4 to 7:
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 12 for triplex processor numbers 8 to 11:
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 13 for triplex processor numbers 12 to 15:
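The default slot/port-to-processor coupling listed earlier in this subsection follows a simple pattern (slot 10 carries processors 0 to 3, slot 11 carries 4 to 7, and so on). The Python sketch below expresses that pattern for reference only; the ME firmware, not this code, performs the actual assignment.

```python
# Default coupling of p-switch PIC slot and port to processor number,
# per the table in "LSUs to Processor Switches and Processor IDs"
# (slots 10-13, ports 1-4).

def default_processor_number(pic_slot: int, pic_port: int) -> int:
    if pic_slot not in (10, 11, 12, 13) or pic_port not in (1, 2, 3, 4):
        raise ValueError("LSU-to-processor PICs use slots 10-13, ports 1-4")
    return (pic_slot - 10) * 4 + (pic_port - 1)

assert default_processor_number(10, 1) == 0
assert default_processor_number(11, 3) == 6
assert default_processor_number(13, 4) == 15
```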
Processor Switch ServerNet Connections
ServerNet connections to the system I/O devices (storage disk and tape drives as well as Ethernet communication to networks) radiate out from the p-switches for both the X and Y ServerNet fabrics to the IOAMs in one or more IOAM enclosures or to the CLIMs. ServerNet cables connected to the p-switch PICs in slots 10 through 13 come from the LSUs and processors, with the cable connection to these PICs determining the processor identification. (See LSUs to Processor Switches and Processor IDs (page 70).) Cables connected to the PICs in slots 6 through 9 can connect to one or more networking CLIMs, starting with slot 9. If more slots are needed for networking CLIMs, then you can also use slots 4 and 5, if slot 4 is not needed for S-series networking. Cables connected to the PICs in slots 4 through 9 connect to Storage CLIMs, one or more IOAM enclosures, or NonStop S-series I/O enclosures equipped with IOMF 2 CRUs.
This illustration shows the connections to the PICs in a fully populated p-switch:
Unlike the fixed hardware I/O configurations and topologies in NonStop S-series systems, I/O configurations in Integrity NonStop NS16000 series systems are flexible, with few restrictions. Those few restrictions prevent I/O configurations that compromise fault tolerance or high availability, especially with disk storage, as outlined in Configuration Restrictions for Fibre Channel Devices (page 95).
NOTE: Refer to the BladeCluster Solution Manual for information about connecting processor switches to a BladeCluster.
Processor Switches to Networking CLIMs
Each p-switch (for the X or Y ServerNet fabric) has up to four I/O PICs (slots 6 through 9, starting with slot 9) recommended for networking CLIM (IP, Telco, or IB CLIM) connections. Slots 4 and 5 can also be used for networking CLIM connections. A networking CLIM uses either one or two ServerNet connections per fabric. Two ServerNet connections per fabric enhance networking CLIM performance.
For one ServerNet connection per fabric, one I/O PIC can support up to four networking CLIMs, allowing up to 16 networking CLIMs in the system (up to 20 networking CLIMs if p-switch slots 4 and 5 are also used). Two ServerNet cables connect two ports of an I/O PIC in the X and Y ServerNet p-switches to the corresponding ports on one of the networking CLIMs in the NonStop NS16000 series enclosure.
For two ServerNet connections per fabric, one I/O PIC can support up to two networking CLIMs, allowing up to eight networking CLIMs in the system (up to ten networking CLIMs if p-switch slots 4 and 5 are also used). Four ServerNet cables connect each of the four ports of an I/O PIC in the X and Y ServerNet p-switches to the corresponding ports on one of the networking CLIMs in the NonStop NS16000 series enclosure.
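The capacity arithmetic above reduces to a per-PIC CLIM count multiplied by the number of PICs devoted to networking CLIMs. The sketch below is illustrative only; it reproduces the per-PIC figures stated in this subsection and does not replace the system-wide maxima quoted there (16 or up to 20 CLIMs for one connection per fabric, 8 or up to 10 for two connections per fabric, depending on whether slots 4 and 5 are also used).

```python
# Networking CLIM capacity per I/O PIC, per the figures above.

def clims_per_pic(connections_per_fabric: int) -> int:
    if connections_per_fabric == 1:
        return 4
    if connections_per_fabric == 2:
        return 2
    raise ValueError("a networking CLIM uses one or two ServerNet connections per fabric")

def networking_clim_capacity(pics_used: int, connections_per_fabric: int) -> int:
    return pics_used * clims_per_pic(connections_per_fabric)

print(networking_clim_capacity(pics_used=4, connections_per_fabric=1))  # 16 (slots 6-9)
print(networking_clim_capacity(pics_used=4, connections_per_fabric=2))  # 8
```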
These restrictions apply to connecting the p-switches to the networking CLIMs:
The same PIC number in the X and Y p-switch must be used, such as PIC 9 as shown in the illustration below. There is one connection to the X p-switch and one connection to the Y p-switch.
Each port on the p-switch PIC must connect to the same numbered port on the networking CLIM's PIC (port 1 to port 1, port 2 to port 2, and so forth).
Connections to a networking CLIM cannot co-exist on the same p-switch PIC with connections to an IOAM or NonStop S-series I/O enclosure.
Typically, networking CLIMs connect to slots 6 through 9 of the p-switch, although slots 4 and 5 also support networking CLIMs.
This illustration shows an example of two DL385 G2 or G5 IP or Telco CLIMs connected to two p-switches (slot 9, ports 1 and 2). Connections for both ServerNet fabrics are shown.
Processor Switches to Storage CLIMs
Each p-switch (for the X or Y ServerNet fabric) has up to six I/O PICs (slots 4 through 9, starting with slot 4) for Storage CLIM connections. One I/O PIC is required for each Storage CLIM enclosure in the system, allowing up to six CLIM enclosures in the system. Four ServerNet cables connect each of the four ports of an I/O PIC in the X and Y ServerNet p-switches to the corresponding ports on one of the Storage CLIMs in the NonStop NS16000 series enclosure.
These restrictions apply to connecting the p-switches to the Storage CLIMs:
The same PIC number in the X and Y p-switch must be used, such as PIC 4 as shown in the illustration below. There is one connection to the X p-switch and one connection to the Y p-switch.
Each port on the p-switch PIC must connect to the same numbered port on the Storage CLIM's PIC (port 1 to port 1, port 2 to port 2, and so forth).
Connections to a Storage CLIM cannot co-exist on the same p-switch PIC with connections to an IOAM or NonStop S-series I/O enclosure.
This illustration shows an example of two DL385 G2 or G5 Storage CLIMs connected to two p-switches (slot 4, ports 1 through 4). Connections for both ServerNet fabrics are shown. Storage CLIMs connect to slots 4 through 9 of the p-switch:
Processor Switches to IOAM Enclosures
Each p-switch (for the X or Y ServerNet fabric) has up to six I/O PICs. One I/O PIC is required for each IOAM enclosure in the system, allowing up to six IOAM enclosures in the system. Four ServerNet cables connect each of the four ports of an I/O PIC in the X and Y ServerNet p-switches to the corresponding ports on one of the ServerNet switch boards in the IOAM enclosure.
These restrictions apply to connecting the p-switches to the IOAMs:
The same PIC number in the X and Y p-switch must be used, such as PIC 4 as shown in the illustration on the next page.
Each port on the p-switch PIC must connect to the same numbered port on the IOAM enclosure's ServerNet switch board (port 1 to port 1, port 2 to port 2, and so forth).
Connections to an IOAM enclosure cannot co-exist on the same p-switch PIC with connections to a NonStop S-series I/O enclosure.
This illustration shows an example of a fault-tolerant ServerNet configuration connecting two FCSAs, one in each IOAM module, to a pair of Fibre Channel disk modules:
FCSA to Fibre Channel Disk Modules
See Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module (page 96).
FCSA to Tape Devices
Fibre Channel tape devices can be connected directly to an FCSA in an IOAM enclosure. Integrity NonStop NS16000 series systems do not support SCSI buses or adapters to connect tape devices. However, SCSI tape devices can be connected through a Fibre Channel to SCSI converter device (model M8201R) that allows connection to SCSI tape drives. For interconnect cable information and installation instructions, see the M8201R Fibre Channel to SCSI Router Installation and User's Guide.
NOTE: To enable OSM to monitor the Fibre Channel to SCSI converter, connect it to the maintenance switch and provide OSM with its IP address. For instructions, see the M8201R Fibre Channel to SCSI Router Installation and User's Guide.
This illustration shows an example communication configuration of a table-top tape drive with ACL to an FCSA via an M8201R Fibre Channel to SCSI router: With a tape drive connected to a server, you can use the BACKUP and RESTORE utilities to save data to and restore data from tape. Storage CLIM Devices The NonStop NS16000 series server uses the rack-mounted SAS disk enclosure and its SAS disk drives, which are controlled through the Storage CLIM. This illustration shows the ports on a Storage CLIM: 80 System Configuration Guidelines
NOTE: All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. This illustration shows the locations of the hardware in the SAS disk enclosure as well as the I/O modules on the rear of the enclosure for connecting to the Storage CLIM. Storage CLIM Devices 81
SAS disk enclosures connect to Storage CLIMs via SAS cables. For details on cable types, refer to Appendix A (page 150). Factory-Default Disk Volume Locations for SAS Disk Devices This illustration shows where the factory-default locations for the primary and mirror system disk volumes reside in separate disk enclosures: SAS Ports to SAS Disk Enclosures SAS disk enclosures can be connected directly to the HBA SAS ports on a Storage CLIM. A Storage CLIM pair supports a maximum of four SAS disk enclosures. If it is necessary to minimize SAS HBAs, four SAS disk enclosures connected to DL385 G2 or G5 CLIMs can be configured as daisy-chains of two SAS disk enclosures on two SAS ports. The four SAS disk enclosures connected to DL380 G6 CLIMs cannot be daisy-chained. SAS Ports to SAS Tape Devices SAS tape devices have one SAS port that can be directly connected to the HBA SAS port on a Storage CLIM. Each SAS tape enclosure supports two tape drives. With a SAS tape drive connected to the system, you can use the BACKUP and RESTORE utilities to save data to and restore data from tape. Configuration Restrictions for Storage CLIMs The maximum number of logical unit numbers (LUNs) for each CLIM, including SAS disks, ESS and tapes is 512. Each primary, backup, mirror and mirror backup path is counted in this maximum. SAS disk enclosures connected to DL385 G2 or G5 CLIMs support a maximum daisy-chain depth of two SAS disk enclosures. SAS disk enclosures connected to DL380 G6 CLIMs cannot be daisy-chained. 82 System Configuration Guidelines
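Because every primary, backup, mirror, and mirror-backup path counts toward the 512-LUN limit described above, it can be useful to tally paths per CLIM when planning a configuration. The Python sketch below is illustrative only; the CLIM names and the sample volume entry are hypothetical, and the real accounting is done by the system, not by this code.

```python
# Count LUN paths per Storage CLIM and flag any CLIM over the 512 limit.
# Each volume entry maps path role to the CLIM serving that path.

MAX_LUNS_PER_CLIM = 512

def luns_per_clim(volumes):
    counts = {}
    for volume in volumes:
        for clim in volume.values():
            counts[clim] = counts.get(clim, 0) + 1
    return counts

volumes = [
    # Hypothetical volume: four paths spread across four CLIMs.
    {"primary": "CLIM-A", "backup": "CLIM-B",
     "mirror": "CLIM-C", "mirror_backup": "CLIM-D"},
    # ... one entry per configured disk volume, tape, or ESS LUN
]

for clim, count in sorted(luns_per_clim(volumes).items()):
    status = "OK" if count <= MAX_LUNS_PER_CLIM else "exceeds the 512-LUN limit"
    print(f"{clim}: {count} LUN paths ({status})")
```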
Use only the supported configurations as described below. Configurations for Storage CLIMs and SAS Disk Enclosures These subsections show the supported configurations for SAS Disk enclosures with Storage CLIMs: DL385 G2 or G5 Storage CLIM and SAS Disk Enclosure Configurations (page 83) DL380 G6 Storage CLIM and SAS Disk Enclosure Configurations (page 85) DL385 G2 or G5 Storage CLIM and SAS Disk Enclosure Configurations Two DL385 G2 or G5 Storage CLIMs, Two MSA70 SAS Disk Enclosures (page 83) Four DL385 G2 or G5 Storage CLIMs, Four MSA70 SAS Disk Enclosures (page 84) Daisy-Chain Configurations (DL385 G2 or G5 Storage CLIMs Only with MSA70 SAS Disk Enclosures) (page 84) Two DL385 G2 or G5 Storage CLIMs, Two MSA70 SAS Disk Enclosures This illustration shows example cable connections for the two DL385 G2 or G5 Storage CLIM, two MSA70 SAS disk enclosure configuration: Figure 15 Two DL385 G2 or G5 Storage CLIMs, Two MSA70 SAS Disk Enclosure Configuration This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of two DL385 G2 or G5 Storage CLIMs and two MSA70 SAS disk enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and OSS are configured as mirrored SAS disk volumes: Disk Volume Name Primary and Mirror-Backup CLIM Backup and Mirror CLIM Primary LUN Mirror LUN Primary Disk Bay in Primary SAS Enclosure Mirror Disk Location in Mirror SAS Enclosure $SYSTEM 100.2.4.1 100.2.5.1 101 201 1 1 $DSMSCM 100.2.4.1 100.2.5.1 102 202 2 2 $AUDIT 100.2.4.1 100.2.5.1 103 203 3 3 $OSS 100.2.4.1 100.2.5.1 104 204 4 4 For an illustration of the factory-default slot locations for a SAS disk enclosure, refer to Factory-Default Disk Volume Locations for SAS Disk Devices (page 82). Storage CLIM Devices 83
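For reference, the factory-default two-CLIM, two-enclosure layout in the table above can be captured as a small data structure. The sketch below simply restates the table (CLIM connections, LUNs, and bays) in Python; it is not generated by, or input to, any HP configuration tool.

```python
# Factory-default mirrored SAS system disk volumes for the two DL385 G2 or
# G5 Storage CLIM, two MSA70 SAS disk enclosure configuration (values
# copied from the table above). LUNs 101-104 address bays 1-4 in the
# primary enclosure; LUNs 201-204 address the same bays in the mirror.

DEFAULT_SAS_VOLUMES = {
    # volume: (primary/mirror-backup CLIM, backup/mirror CLIM,
    #          primary LUN, mirror LUN, disk bay)
    "$SYSTEM": ("100.2.4.1", "100.2.5.1", 101, 201, 1),
    "$DSMSCM": ("100.2.4.1", "100.2.5.1", 102, 202, 2),
    "$AUDIT":  ("100.2.4.1", "100.2.5.1", 103, 203, 3),
    "$OSS":    ("100.2.4.1", "100.2.5.1", 104, 204, 4),
}

for name, (p_clim, m_clim, p_lun, m_lun, bay) in DEFAULT_SAS_VOLUMES.items():
    print(f"{name}: primary LUN {p_lun} via {p_clim}, "
          f"mirror LUN {m_lun} via {m_clim}, bay {bay}")
```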
Four DL385 G2 or G5 Storage CLIMs, Four MSA70 SAS Disk Enclosures
This illustration shows example cable connections for the four DL385 G2 or G5 Storage CLIM, four MSA70 SAS disk enclosures configuration:
Figure 16 Four DL385 G2 or G5 Storage CLIMs, Four MSA70 SAS Disk Enclosure Configuration
This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of four DL385 G2 or G5 Storage CLIMs and four MSA70 SAS disk enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and $OSS are configured as mirrored SAS disk volumes:
Disk Volume Name Primary CLIM Backup CLIM Mirror CLIM Mirror-Backup CLIM Primary LUN Mirror LUN Primary Disk Bay in Primary SAS Enclosure Mirror Disk Location in Mirror SAS Enclosure $SYSTEM 100.2.5.3.1 100.2.5.4.1 100.2.5.4.3 100.2.5.3.3 101 101 1 1 $DSMSCM 100.2.5.3.1 100.2.5.4.1 100.2.5.4.3 100.2.5.3.3 102 102 2 2 $AUDIT 100.2.5.3.1 100.2.5.4.1 100.2.5.4.3 100.2.5.3.3 103 103 3 3 $OSS 100.2.5.3.1 100.2.5.4.1 100.2.5.4.3 100.2.5.3.3 104 104 4 4
Daisy-Chain Configurations (DL385 G2 or G5 Storage CLIMs Only with MSA70 SAS Disk Enclosures)
This illustration shows an example of cable connections for the two DL385 G2 or G5 Storage CLIMs and four MSA70 SAS disk enclosures in a single daisy-chain configuration:
DL380 G6 Storage CLIM and SAS Disk Enclosure Configurations Two DL380 G6 Storage CLIMs, Two D2700 SAS Disk Enclosures (page 85) Two DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures (page 87) Four DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures (page 87) Four DL380 G6 Storage CLIMs, Eight D2700 SAS Disk Enclosures (page 88) Two DL380 G6 Storage CLIMs, Two D2700 SAS Disk Enclosures This illustration shows example cable connections for the two DL380 G6 Storage CLIM, two D2700 SAS disk enclosure configuration. Storage CLIM Devices 85
Figure 17 Two DL380 G6 Storage CLIMs, Two D2700 SAS Disk Enclosure Configuration This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of two DL380 G6 Storage CLIMs and two D2700 SAS disk enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and OSS are configured as mirrored SAS disk volumes: Disk Volume Name Primary and Mirror-Backup CLIM Backup and Mirror CLIM Primary LUN Mirror LUN Primary Disk Bay in Primary SAS Enclosure Mirror Disk Location in Mirror SAS Enclosure $SYSTEM 100.2.4.1 100.2.5.1 101 201 1 1 $DSMSCM 100.2.4.1 100.2.5.1 102 202 2 2 $AUDIT 100.2.4.1 100.2.5.1 103 203 3 3 $OSS 100.2.4.1 100.2.5.1 104 204 4 4 For an illustration of the factory-default slot locations for a SAS disk enclosure, refer to Factory-Default Disk Volume Locations for SAS Disk Devices (page 82). 86 System Configuration Guidelines
Two DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures
This illustration shows example cable connections for the two DL380 G6 Storage CLIM, four D2700 SAS disk enclosures configuration. This configuration uses two SAS HBAs in slots 2 and 3 of each DL380 G6 Storage CLIM.
Figure 18 Two DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosure Configuration
This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of two DL380 G6 Storage CLIMs and four D2700 SAS disk enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and $OSS are configured as mirrored SAS disk volumes:
Disk Volume Name Primary and Mirror-Backup CLIM Backup and Mirror CLIM Primary LUN Mirror LUN Primary Disk Bay in Primary SAS Enclosure Mirror Disk Location in Mirror SAS Enclosure $SYSTEM 100.2.4.1 100.2.5.1 101 201 1 1 $DSMSCM 100.2.4.1 100.2.5.1 102 202 2 2 $AUDIT 100.2.4.1 100.2.5.1 103 203 3 3 $OSS 100.2.4.1 100.2.5.1 104 204 4 4
For an illustration of the factory-default slot locations for a SAS disk enclosure, refer to Factory-Default Disk Volume Locations for SAS Disk Devices (page 82).
Four DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures
This illustration shows example cable connections for the four DL380 G6 Storage CLIM, four D2700 SAS disk enclosures configuration. This configuration uses two SAS HBAs in slot 2 of each DL380 G6 Storage CLIM.
Figure 19 Four DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosure Configuration
This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of four DL380 G6 Storage CLIMs and four D2700 SAS disk enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and $OSS are configured as mirrored SAS disk volumes:
Disk Volume Name Primary CLIM Backup CLIM Mirror CLIM Mirror-Backup CLIM Primary LUN Mirror LUN Primary Disk Bay in Primary SAS Enclosure Mirror Disk Location in Mirror SAS Enclosure $SYSTEM 100.2.4.1 100.2.5.1 100.2.6.1 100.2.7.1 101 101 1 1 $DSMSCM 100.2.4.1 100.2.5.1 100.2.6.1 100.2.7.1 102 102 2 2 $AUDIT 100.2.4.1 100.2.5.1 100.2.6.1 100.2.7.1 103 103 3 3 $OSS 100.2.4.1 100.2.5.1 100.2.6.1 100.2.7.1 104 104 4 4
For an illustration of the factory-default slot locations for a SAS disk enclosure, refer to Factory-Default Disk Volume Locations for SAS Disk Devices (page 82).
Four DL380 G6 Storage CLIMs, Eight D2700 SAS Disk Enclosures
This illustration shows example cable connections for the four DL380 G6 Storage CLIM, eight D2700 SAS disk enclosures configuration. This configuration uses two SAS HBAs in slot 2 and slot 3 of each DL380 G6 Storage CLIM.
Figure 20 Four DL380 G6 Storage CLIMs, Eight D2700 SAS Disk Enclosure Configuration
This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of four DL380 G6 Storage CLIMs and eight D2700 SAS disk enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and $OSS are configured as mirrored SAS disk volumes:
Disk Volume Name Primary CLIM Backup CLIM Mirror CLIM Mirror-Backup CLIM Primary LUN Mirror LUN Primary Disk Bay in Primary SAS Enclosure Mirror Disk Location in Mirror SAS Enclosure $SYSTEM 100.2.4.1 100.2.5.1 100.2.6.1 100.2.7.1 101 101 1 1 $DSMSCM 100.2.4.1 100.2.5.1 100.2.6.1 100.2.7.1 102 102 2 2 $AUDIT 100.2.4.1 100.2.5.1 100.2.6.1 100.2.7.1 103 103 3 3 $OSS 100.2.4.1 100.2.5.1 100.2.6.1 100.2.7.1 104 104 4 4
For an illustration of the factory-default slot locations for a SAS disk enclosure, refer to Factory-Default Disk Volume Locations for SAS Disk Devices (page 82).
P-Switch to NonStop S-Series I/O Enclosure Cabling Each NonStop S-series I/O enclosure uses one port of one PIC in each of the two p-switches for ServerNet connection. If no IOAM enclosure is installed in the system, up to 24 NonStop S-series I/O enclosures can be connected to an Integrity NonStop NS16000 series system through these ServerNet links. A single fiber-optic cable provides the ServerNet link between an I/O PIC port on both the X and Y p-switch and the I/O multifunction 2 (IOMF 2) CRUs in a NonStop S-series I/O enclosure. For cable types and lengths, see Enterprise Storage System (page 133). The cables from the two IOMF 2 CRUs must connect to PICs residing in the same slot number in both the X and Y p-switch and to the same port number on each PIC. This illustration shows the cables from the NonStop S-series IOMF 2 CRUs connected to port 1 of the PICs in slot 4 of the X and Y p-switch, assigning the group number of 11: 90 System Configuration Guidelines
These restrictions or requirements apply when integrating NonStop S-series I/O enclosures into an Integrity NonStop NS16000 series system:
Only NonStop S-series I/O enclosures equipped with IOMF 2 CRUs can be connected to an Integrity NonStop NS16000 series system. The IOMF 2 CRU must have an MMF PIC installed. NonStop S-series system enclosures can be converted to NonStop S-series I/O enclosures, replacing the two PMF CRUs in each enclosure with IOMF 2 CRUs. See the conversion instructions in the Hardware Service and Maintenance Publications category of the Support and Service Library of NTL.
Disk drives and ServerNet adapters (except SEB and MSEB CRUs) used in NonStop S-series I/O enclosures, as well as devices that are downstream of these enclosures, are compatible with NonStop NS-series hardware. For information about the disk drives and adapters, see the manual for that disk drive or adapter.
CAUTION: Do not attempt to use NonStop S-series SEB or MSEB CRUs in NonStop S-series I/O enclosures connected to Integrity NonStop NS16000 series systems. System failures can result.
Connection to a NonStop S-series I/O enclosure cannot be on the same p-switch PIC that connects to an IOAM enclosure. Each p-switch (for the X or Y ServerNet fabric) has up to six I/O PICs, with one I/O PIC required for each IOAM enclosure in the system. Each NonStop S-series I/O enclosure uses one port on one PIC, so a maximum of 24 NonStop S-series I/O enclosures can be connected to an Integrity NonStop NS16000 series system if no IOAM enclosure is installed.
Assignment of the group number of each NonStop S-series I/O enclosure depends on the cable connection to the p-switch PIC by slot and port as described in NonStop S-Series I/O Enclosure Group Numbers (page 29). Only NonStop S-series I/O enclosures with these group numbers can be connected to an Integrity NonStop NS16000 series system: 11 through 14, 21 through 24, 31 through 34, 41 through 44, 51 through 54, and 61 through 64. For NonStop S-series I/O enclosures that have a nonsupported group number, change the group number as described in the NonStop S-Series Planning and Configuration Guide.
Cables required are:
An LC-SC multimode fiber-optic cable connects each IOMF 2 CRU to a p-switch in an Integrity NonStop NS16000 series system.
A serial cable from each SPON connector on the p-switch carries power-on signals to the NonStop S-series I/O enclosure. This is a unidirectional SPON cable used only for connection between a p-switch in an Integrity NonStop NS16000 series system and the IOMF 2 CRU in the NonStop S-series I/O enclosure.
If the system has no IOAM enclosures and you want system communications with the OSM Service Connection and OSM Notification Director, the NonStop S-series I/O enclosure must provide connection for the dedicated service LAN via two Category 6 (CAT 6) Ethernet cables. Category 5e (CAT 5e) cable is also acceptable.
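The supported S-series group numbers above follow the slot/port pattern shown earlier in this section (slot 4, port 1 yields group 11). The Python sketch below reflects that pattern for quick validity checks; it is illustrative only, and the authoritative mapping is NonStop S-Series I/O Enclosure Group Numbers (page 29).

```python
# Group number implied by the p-switch PIC slot (4-9) and port (1-4) used
# for a NonStop S-series I/O enclosure, consistent with the slot 4 / port 1
# -> group 11 example and the supported ranges 11-14 through 61-64.

def sseries_group_number(pic_slot: int, pic_port: int) -> int:
    if pic_slot not in range(4, 10) or pic_port not in range(1, 5):
        raise ValueError("S-series I/O enclosures connect to PIC slots 4-9, ports 1-4")
    return (pic_slot - 3) * 10 + pic_port

def is_supported_group(group: int) -> bool:
    tens, ones = divmod(group, 10)
    return 1 <= tens <= 6 and 1 <= ones <= 4

assert sseries_group_number(4, 1) == 11
assert all(is_supported_group(sseries_group_number(slot, port))
           for slot in range(4, 10) for port in range(1, 5))
```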
IOAM Enclosure and Disk Storage Considerations When deciding between one IOAM enclosure or two (or more), consider: One IOAM Enclosure High-availability and fault-tolerant attributes of NonStop S-series systems with I/O enclosures using tetra-8 and tetra-16 topologies. Two or four FCSAs split between only two IOAM modules, so loss of one module takes down all primary or mirror Fibre Channel loops. 1 Two IOAM Enclosures Greater availability because of multiple redundant ServerNet paths and FCSAs. Installing the IOAM enclosures in separate cabinets prevents application or system failure if a localized power outage affects only one cabinet. Four FCSAs split between four IOAM modules, so loss of one module takes down only alternate primary or mirror Fibre Channel loop. 1 1 See Configuration Recommendations for Fibre Channel Devices (page 95). Fibre Channel Devices This subsection includes: Factory-Default Disk Volume Locations (page 94) Configurations for Fibre Channel Devices (page 94) Configuration Restrictions for Fibre Channel Devices (page 95) Configuration Recommendations for Fibre Channel Devices (page 95) Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module (page 96) The only Fibre Channel device used internally with Integrity NonStop NS16000 series systems is the Fibre Channel disk module (FCDM). An FCDM and its disk drives are controlled through the Fibre Channel ServerNet adapter (FCSA). For more information on the FCSA, see Fibre Channel ServerNet Adapter (FCSA) (page 128) or the Fibre-Channel ServerNet Adapter Installation and Support Guide. For more information on the Fibre Channel disk module (FCDM), see Fibre Channel Disk Module (page 130). For examples of cable connections between FCSAs and FCDMs, see Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module (page 96). This illustration shows an FCSA with indicators and ports that are used and not used in Integrity NonStop NS16000 series systems: 92 System Configuration Guidelines
This illustration shows the locations of the hardware in the Fibre Channel disk module as well as the Fibre Channel port connectors at the back of the enclosure: Fibre Channel Devices 93
Fibre Channel disk modules connect to Fibre Channel ServerNet adapters (FCSAs) via Fibre Channel arbitrated loop (FC-AL) cables. This drawing shows the two Fibre Channel arbitrated loops implemented within the Fibre Channel disk module:
Factory-Default Disk Volume Locations
This illustration shows where the factory-default locations for the primary and mirror system disk volumes reside in separate Fibre Channel disk modules:
FCSA location and cable connections vary according to the various controller and Fibre Channel disk module combinations.
Configurations for Fibre Channel Devices
Storage subsystems in NonStop S-series systems used a fixed hardware layout. Each enclosure can have up to four controllers for storage devices and up to 16 internal disk drives.
The controllers and disk drives always have a fixed logical location with standardized location IDs of group-module-slot. Only the group number changes as determined by the enclosure position in the ServerNet topology. However, the Integrity NonStop NS16000 series systems have no fixed boundaries for the hardware layout. Up to 60 FCSAs (or 120 ServerNet addressable controllers) and 240 Fibre Channel disk enclosures can be installed, with identification depending on the ServerNet connection of the IOAM and the slot housing the FCSA.
Configuration Restrictions for Fibre Channel Devices
To avoid creating configurations that are not fault-tolerant or do not promote high availability, these restrictions apply and are invoked by Subsystem Control Facility (SCF):
Primary and mirror disk drives cannot connect to the same Fibre Channel loop. Loss of the Fibre Channel loop makes both the primary volume and the mirrored volume inaccessible. This configuration inhibits fault tolerance. Disk drives in different Fibre Channel disk modules on a daisy chain connect to the same Fibre Channel loop.
The primary path and backup Fibre Channel communication links to a disk drive should not connect to FCSAs in the same module of an IOAM enclosure. In a fully populated system, loss of one FCSA can make up to 56 disk drives inaccessible on a single Fibre Channel communications path. This configuration is allowed, but only if you override an SCF warning message.
The mirror path and mirror backup Fibre Channel communication links to a disk drive should not connect to FCSAs in the same module of an IOAM enclosure. In a fully populated system, loss of one FCSA can make up to 56 disk drives inaccessible on a single Fibre Channel communications path. This configuration is allowed, but only if you override an SCF warning message.
(A small illustrative sketch of these checks appears after the list of example configurations later in this section.)
Configuration Recommendations for Fibre Channel Devices
These recommendations apply to FCSA and Fibre Channel disk module configurations:
Primary Fibre Channel disk module connects to the FCSA F-SAC 1. Mirror Fibre Channel disk module connects to the FCSA F-SAC 2.
FC-AL port A1 is the incoming port from an FCSA or from another Fibre Channel disk module. FC-AL port A2 is the outbound port to another Fibre Channel disk module. FC-AL port B2 is the incoming port from an FCSA or from a Fibre Channel disk module. FC-AL port B1 is the outbound port to another Fibre Channel disk module.
In a daisy-chain configuration, the ID expander harness determines the enclosure number. Enclosure 1 is always at the bottom of the chain.
FCSAs can be installed in slots 1 through 5 in an IOAM. G4SAs can be installed in slots 1 through 5 in an IOAM.
In systems with two or more cabinets, primary and mirror Fibre Channel disk modules reside in separate cabinets to prevent application or system outage if a power outage affects one cabinet. With primary and mirror Fibre Channel disk modules in the same cabinet, the primary Fibre Channel disk module resides in a lower U than the mirror Fibre Channel disk module.
Fibre Channel disk drives are configured with dual paths.
Where possible, FCSAs and Fibre Channel disk modules are configured with four FCSAs and four Fibre Channel disk modules for maximum fault tolerance.
If FCSAs are not in groups of four, the remaining FCSAs and Fibre Channel disk modules can be configured in other fault-tolerant configurations, such as two FCSAs and two Fibre Channel disk modules or four FCSAs and three Fibre Channel disk modules.
In systems with one IOAM enclosure:
With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in module 2 of the IOAM enclosure, and the backup FCSA resides in module 3. (See the example configuration in NonStop Blade Element Group-Module-Slot Numbering (page 24).)
With four FCSAs and four Fibre Channel disk modules, FCSA 1 and FCSA 2 reside in module 2 of the IOAM enclosure, and FCSA 3 and FCSA 4 reside in module 3. (See the example configuration in Four FCSAs, Four FCDMs, One IOAM Enclosure (page 97).)
In systems with two or more IOAM enclosures:
With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in IOAM enclosure 1, and the backup FCSA resides in IOAM enclosure 2. (See the example configuration in Two FCSAs, Two FCDMs, Two IOAM Enclosures (page 98).)
With four FCSAs and four Fibre Channel disk modules, FCSA 1 and FCSA 2 reside in IOAM enclosure 1, and FCSA 3 and FCSA 4 reside in IOAM enclosure 2. (See the example configuration in Four FCSAs, Four FCDMs, Two IOAM Enclosures (page 99).)
Daisy-chain configurations follow the same configuration restrictions and rules that apply to configurations that are not daisy-chained. (See Daisy-Chain Configurations (page 100).) Fibre Channel disk modules containing mirrored volumes must be installed in separate daisy chains. Daisy-chained configurations require that all Fibre Channel disk modules reside in the same cabinet and be physically grouped together. Daisy-chain configurations require an ID expander harness with terminators for proper Fibre Channel disk module and disk drive identification.
If, after you connect Fibre Channel disk modules in configurations of four FCSAs and four Fibre Channel disk modules, three Fibre Channel disk modules remain unconnected, connect them to four FCSAs. (See the example configuration in Four FCSAs, Three FCDMs, One IOAM Enclosure (page 102).)
Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module
These subsections show various example configurations of FCSA controllers and Fibre Channel disk modules with IOAM enclosures.
NOTE: Although it is not a requirement for fault tolerance to house the primary and mirror disk drives in separate FCDMs, the example configurations show FCDMs housing only primary or mirror drives, mainly for simplicity in keeping track of the physical locations of the drives.
Two FCSAs, Two FCDMs, One IOAM Enclosure (page 97)
Four FCSAs, Four FCDMs, One IOAM Enclosure (page 97)
Two FCSAs, Two FCDMs, Two IOAM Enclosures (page 98)
Four FCSAs, Four FCDMs, Two IOAM Enclosures (page 99)
Daisy-Chain Configurations (page 100)
Four FCSAs, Three FCDMs, One IOAM Enclosure (page 102)
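The restrictions listed under Configuration Restrictions for Fibre Channel Devices lend themselves to a simple sanity check when planning a layout. The sketch below is illustrative only: it assumes each disk path can be described by the FCSA's group and module plus an identifier for the Fibre Channel loop it uses, and the sample values are hypothetical. SCF, not this code, is what actually enforces or warns about these configurations.

```python
from typing import NamedTuple

class Path(NamedTuple):
    fcsa_group: int
    fcsa_module: int
    fcsa_slot: int
    fc_loop: str          # identifier for the FC-AL loop this path uses

def check_volume(primary: Path, backup: Path, mirror: Path, mirror_backup: Path):
    """Return a list of problems for one mirrored volume's four paths."""
    problems = []
    if primary.fc_loop == mirror.fc_loop:
        problems.append("primary and mirror share a Fibre Channel loop (not allowed)")
    if (primary.fcsa_group, primary.fcsa_module) == (backup.fcsa_group, backup.fcsa_module):
        problems.append("primary and backup FCSAs are in the same IOAM module (SCF warning)")
    if (mirror.fcsa_group, mirror.fcsa_module) == (mirror_backup.fcsa_group, mirror_backup.fcsa_module):
        problems.append("mirror and mirror-backup FCSAs are in the same IOAM module (SCF warning)")
    return problems

# Hypothetical example: primary-side paths split across modules 2 and 3,
# mirror-side paths on a different loop and also split across modules.
print(check_volume(
    primary=Path(110, 2, 1, "loop-P1"), backup=Path(110, 3, 1, "loop-P1"),
    mirror=Path(110, 2, 1, "loop-M1"), mirror_backup=Path(110, 3, 1, "loop-M1"),
))  # [] means no problems found
```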
Two FCSAs, Two FCDMs, One IOAM Enclosure This illustration shows example cable connections between the two FCSAs and the primary and mirror Fibre Channel disk modules: This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of two FCSAs, two Fibre Channel disk modules, and one IOAM enclosure: Disk Volume Name $SYSTEM (primary) $DSMSCM (primary) $AUDIT (primary) $OSS (primary) $SYSTEM (mirror) $DSMSCM (mirror) $AUDIT (mirror) $OSS (mirror) FCSA GMSP 110.2.1.1 and 110.3.1.1 110.2.1.1 and 110.3.1.1 110.2.1.1 and 110.3.1.1 110.2.1.1 and 110.3.1.1 110.2.1.2 and 110.3.1.2 110.2.1.2 and 110.3.1.2 110.2.1.2 and 110.3.1.2 110.2.1.2 and 110.3.1.2 Disk GMSB* 110.211.101 110.211.102 110.211.103 110.211.104 110.212.101 110.212.102 110.212.103 110.212.104 * For an illustration of the factory-default slot locations for a Fibre Channel disk module, see Factory-Default Disk Volume Locations (page 94). Four FCSAs, Four FCDMs, One IOAM Enclosure This illustration shows example cable connections between the four FCSAs and the two sets of primary and mirror Fibre Channel disk modules: Fibre Channel Devices 97
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of four FCSAs, four Fibre Channel disk modules, and one IOAM enclosure: Disk Volume Name $SYSTEM (primary 1) $DSMSCM (primary 1) $AUDIT (primary 1) $OSS (primary 1) $SYSTEM (mirror 1) $DSMSCM (mirror 1) $AUDIT (mirror 1) $OSS (mirror 1) FCSA GMSP 110.2.1.1 and 110.3.1.1 110.2.1.1 and 110.3.1.1 110.2.1.1 and 110.3.1.1 110.2.1.1 and 110.3.1.1 110.2.2.2 and 110.3.2.2 110.2.2.2 and 110.3.2.2 110.2.2.2 and 110.3.2.2 110.2.2.2 and 110.3.2.2 Disk GMSB 1 110.211.101 110.211.102 110.211.103 110.211.104 110.222.101 110.222.102 110.222.103 110.222.104 1 For an illustration of the factory-default slot locations for a Fibre Channel disk module, see Factory-Default Disk Volume Locations (page 94). Two FCSAs, Two FCDMs, Two IOAM Enclosures This illustration shows example cable connections between the two FCSAs split between two IOAM enclosures and one set of primary and mirror Fibre Channel disk modules: 98 System Configuration Guidelines
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of two FCSAs, two Fibre Channel disk modules, and two IOAM enclosures: Disk Volume Name $SYSTEM (primary 1) $DSMSCM (primary 1) $AUDIT (primary 1) $OSS (primary 1) $SYSTEM (mirror 1) $DSMSCM (mirror 1) $AUDIT (mirror 1) $OSS (mirror 1) FCSA GMSP 110.2.1.1 and 111.2.1.1 110.2.1.1 and 111.2.1.1 110.2.1.1 and 111.2.1.1 110.2.1.1 and 111.2.1.1 110.2.1.2 and 111.2.1.2 110.2.1.2 and 111.2.1.2 110.2.1.2 and 111.2.1.2 110.2.1.2 and 111.2.1.2 Disk GMSB 1 110.211.101 110.211.102 110.211.103 110.211.104 110.212.101 110.212.102 110.212.103 110.212.104 1 For an illustration of the factory-default slot locations for a Fibre Channel disk module, see Factory-Default Disk Volume Locations (page 94).
Four FCSAs, Four FCDMs, Two IOAM Enclosures
This illustration shows example cable connections between the four FCSAs split between two IOAM enclosures and two sets of primary and mirror Fibre Channel disk modules:
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of four FCSAs, four Fibre Channel disk modules, and two IOAM enclosures: Disk Volume Name $SYSTEM (primary) $DSMSCM (primary) $AUDIT (primary) $OSS (primary) $SYSTEM (mirror) $DSMSCM (mirror) $AUDIT (mirror) $OSS (mirror) FCSA GMSP 110.2.1.1 and 111.2.1.1 110.2.1.1 and 111.2.1.1 110.2.1.1 and 111.2.1.1 110.2.1.1 and 111.2.1.1 110.3.1.2 and 111.3.1.2 110.3.1.2 and 111.3.1.2 110.3.1.2 and 111.3.1.2 110.3.1.2 and 111.3.1.2 Disk GMSB* 110.211.101 110.211.102 110.211.103 110.211.104 110.312.101 110.312.102 110.312.103 110.312.104 * For an illustration of the factory-default slot locations for a Fibre Channel disk module, see Factory-Default Disk Volume Locations (page 94) Daisy-Chain Configurations When planning for possible use of daisy-chained disks, consider: Daisy-Chained Disks Recommended Cost-sensitive storage and applications using low-bandwidth disk I/O. Low-cost, high-capacity data storage is important. Daisy-Chained Disks Not Recommended Many volumes in a large Fibre Channel loop. The more volumes that exist in a larger loop, the higher the potential for negative impact from a failure that takes down a Fibre Channel loop. Applications with a highly mixed workload, such as transaction data bases or applications with high disk I/O. Requirements for Daisy-Chain 1 All daisy-chained Fibre Channel disk modules reside in the same cabinet and are physically grouped together. ID expander harness with terminators is installed for proper Fibre Channel disk module and drive identification. 100 System Configuration Guidelines
FCSA for each Fibre Channel loop is installed in a different IOAM module for fault tolerance. Two Fibre Channel disk modules minimum, with four Fibre Channel disk modules maximum per daisy chain.
1 See Fibre Channel Devices (page 92).
This illustration shows an example of cable connections between the two FCSAs and four Fibre Channel disk modules in a single daisy-chain configuration:
A second equivalent configuration, including an IOAM enclosure, two FCSAs, and four Fibre Channel disk modules with an ID expander, is required for fault-tolerant mirrored disk storage. Installing each mirrored disk in the same corresponding FCDM and bay number as its primary disk is not required, but it is recommended to simplify the physical management and identification of the disks.
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in a daisy-chained configuration: Disk Volume Name $SYSTEM $DSMSCM $AUDIT FCSA GMSP 110.2.1.1 and 110.3.1.1 110.2.1.1 and 110.3.1.1 110.2.1.1 and 110.3.1.1 Disk GMSB* 110.211.101 110.211.102 110.211.103
Disk Volume Name $OSS FCSA GMSP 110.2.1.1 and 110.3.1.1 Disk GMSB* 110.211.104
* For an illustration of the factory-default slot locations for a Fibre Channel disk module, see Factory-Default Disk Volume Locations (page 94).
Four FCSAs, Three FCDMs, One IOAM Enclosure
This illustration shows example cable connections between the four FCSAs and three Fibre Channel disk modules with the primary and mirror drives split within each Fibre Channel disk module:
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default disk volumes for the configuration of four FCSAs, three Fibre Channel disk modules, and one IOAM enclosure: Disk Volume Name $SYSTEM (primary 1) $DSMSCM (primary 1) $AUDIT (primary 1) $OSS (primary 1) $SYSTEM (mirror 1) $DSMSCM (mirror 1) $AUDIT (mirror 1) $OSS (mirror 1) FCSA GMSP 110.2.1.2 and 110.3.1.2 110.2.1.2 and 110.3.1.2 110.2.1.2 and 110.3.1.2 110.2.1.2 and 110.3.1.2 110.2.2.1 and 110.3.2.1 110.2.2.1 and 110.3.2.1 110.2.2.1 and 110.3.2.1 110.2.2.1 and 110.3.2.1 Disk GMSB 110.212.101 110.212.102 110.212.103 110.212.104 110.221.108 110.221.109 110.221.110 110.221.111
This illustration shows the factory-default locations for the configurations of four FCSAs and three Fibre Channel disk modules where the primary system file disk volumes are in Fibre Channel disk module 1: This illustration shows the factory-default locations for the configurations of four FCSAs with three Fibre Channel disk modules where the mirror system file disk volumes are in Fibre Channel disk module 3: Ethernet to Networks Depending on your configuration, Gigabit Ethernet connectivity is provided by the Ethernet interfaces in an IP or Telco CLIM or the Ethernet ports on a Gigabit Ethernet 4-port ServerNet Adapter (G4SA): IP CLIM Ethernet Interfaces (page 103) Telco CLIM Ethernet Interfaces (page 104) Gigabit Ethernet 4-Port ServerNet Adapter (G4SA) Ethernet Ports (page 104) The IP or Telco CLIM installed in a NonStop NS16000 series system or a G4SA installed in an IOAM enclosure provide Gigabit connectivity between NonStop NS16000 series systems and Ethernet LANs. The Ethernet port is an end node on the ServerNet and uses either fiber-optic or copper cable for connectivity to user application LANs, as well as for the dedicated service LAN. IP CLIM Ethernet Interfaces The IP CLIM has two types of Ethernet configurations: IP CLIM option 1 and IP CLIM option 2: IP CLIM option 1 provides five Ethernet copper ports. IP CLIM option 2 provides three Ethernet copper ports and two Ethernet optical ports. For illustrations showing the Ethernet interfaces and ServerNet fabric connections on DL385 G2 or G5 and DL380 G6 IP CLIMs with the IP CLIM option 1 and option 2 configurations, see IP CLuster I/O Module (CLIM) (page 116). All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. Ethernet to Networks 103
Telco CLIM Ethernet Interfaces The Telco CLIM Ethernet interfaces are five Ethernet copper ports identical to the IP CLIM option 1 configuration. For illustrations showing the Ethernet interfaces and ServerNet fabric connections on DL385 G2 or G5 and DL380 G6 Telco CLIMs, see Telco CLuster I/O Module (CLIM) (page 119). All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. Gigabit Ethernet 4-Port ServerNet Adapter (G4SA) Ethernet Ports The G4SA provides Gigabit connectivity between Integrity NonStop NS16000 series systems and Ethernet LANs. The G4SA is an end node on the ServerNet and uses either fiber-optic or copper cable for connectivity to user application LANs, as well as for the dedicated service LAN. For more information on the G4SA, see Gigabit Ethernet 4-Port ServerNet Adapter (page 129) or the Gigabit 4-Port ServerNet Adapter Installation and Support Guide. This illustration shows a G4SA with indicators and ports: 104 System Configuration Guidelines
Default Naming Conventions
With a few exceptions, default naming conventions are not necessary for the modular resources that make up Integrity NonStop NS16000 series systems. In most cases, users can name their resources at will and use the appropriate management applications and tools to find the location of the resource. However, default naming conventions for certain resources simplify creation of the initial configuration files and automatic generation of the names of the modular resources. When autoconfigure mode is set to ON for storage subsystems, the preconfigured default naming convention is used to generate the names of the Fibre Channel disk drives. Preconfigured default resource names are:
Type of Object | Naming Convention 1 | Example | Description
Fibre Channel disk drive | $FC number | $FC10 | Tenth Fibre Channel disk drive in the system
ESS disk drive | $ESS number | $ESS20 | Twentieth ESS disk drive in the system
Modular tape drive | $TAPE number | $TAPE01 | First modular tape drive in the system
IP CLIM | N group module slot port | N100291 | IP CLIM located in group 100, module 2 and connected to p-switch slot 9, port 1
Telco CLIM | O group module slot port | O100291 | Telco CLIM located in group 100, module 2 and connected to p-switch slot 9, port 1
IB CLIM | B group module slot port | B100291 | IB CLIM located in group 100, module 2 and connected to p-switch slot 9, port 1
Storage CLIM | S group module slot port | S100241 | Storage CLIM located in group 100, module 2 and connected to p-switch slot 4, port 1
SAS disk volume | $SAS number | $SAS20 | Twentieth SAS disk volume in the system
G4SA | G group module slot | G11123 | G4SA in location 111.2.3
G4SA LIF | L group module slot port | L11123B | LIF for PIF at location 111.2.3.0.B
TCP/IP process | $ZTC number | $ZTC0 | First TCP6SAM or TCP/IP process for the system
Telserv process | $ZTN number | $ZTN0 | First Telserv process for the system
Listener process | $LSN number | $LSN0 | First Listener process for the system
TFTP process | Automatically created by WANMGR | None | None
WANBOOT process | Automatically created by WANMGR | None | None
SWAN Concentrator | S number | S10 | Tenth SWAN concentrator in the system
1 For more information about CLIM processes that use the CIP subsystem and the naming conventions for these processes, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual.
On new NonStop systems, only one of each of these processes and names is configured:
TCP6SAM - $ZTC0
Telserv - $ZTN0
Listener - $LSN0
No TFTP or WANBOOT process is configured for new NonStop systems.
NOTE: Naming conventions or configurations for the dedicated service LAN TCP/IP are the same as the TCP/IP conventions used with G-series RVUs and are named $ZTCP0 and $ZTCP1.
CAUTION: Do not change the process names for $ZTCP0 and $ZTCP1. Doing so will make some components inaccessible.
OSM Service Connection provides the location of the resource by adding an identifying suffix to the names of all the system resources. Other interfaces, such as SCF, also provide means to locate named resources.
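The CLIM entries in the naming table follow a prefix-plus-group-module-slot-port pattern, which is easy to reproduce when documenting a configuration. The Python sketch below simply concatenates those fields in the order shown in the table; it is illustrative only and is not how the system itself generates names.

```python
# Build the default CLIM name from its connection, following the table
# above (for example, a Storage CLIM on group 100, module 2, p-switch
# slot 4, port 1 is S100241).

PREFIXES = {"ip": "N", "telco": "O", "ib": "B", "storage": "S"}

def default_clim_name(kind: str, group: int, module: int, slot: int, port: int) -> str:
    return f"{PREFIXES[kind]}{group}{module}{slot}{port}"

assert default_clim_name("storage", 100, 2, 4, 1) == "S100241"
assert default_clim_name("ip", 100, 2, 9, 1) == "N100291"
```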
5 Modular System Hardware
This section describes the hardware used in Integrity NonStop NS16000 series systems:
NonStop Blade Element
Logical Synchronization Unit (LSU)
Processor Switch
CLuster I/O Modules (CLIMs) (page 115)
Serially Attached SCSI (SAS) Disk Enclosure (page 124)
I/O Adapter Module (IOAM) Enclosure and I/O Adapters (page 125)
Fibre Channel Disk Module (page 130)
Tape Drive and Interface Hardware (page 130)
Maintenance Switch (Ethernet) (page 130)
UPS and ERM (Optional) (page 131)
ServerNet Cluster Configuration Form (page 31)
Enterprise Storage System (page 133)
NonStop Blade Element
The NonStop Blade Element enclosure, which is 5U high and weighs 112 pounds (46 kilograms), has these physical attributes:
Rackmountable
Redundant AC power feeds
Front-to-rear cooling
Cable connections at rear (power, reintegration, LSU) with cable management equipment on the rear of the cabinet
Each NonStop Blade Element includes these field replaceable units (FRUs):
Processor board with up to four Itanium microprocessors, interface ASIC, clock generation (accessed through front of enclosure)
Memory board that can hold up to 32 DIMMs, each one 4 GB, for a total memory capacity of 128 GB (accessed through front of enclosure)
DIMMs (16 or 32 DDR SDRAM DIMM slots)
Reintegration board for managing internal memory traffic (accessed from top of enclosure when enclosure is pulled forward on its rails)
Blade optics adapter plug-in cards (PICs) with two ports; two adapters minimum, eight maximum (accessed from top of enclosure when enclosure is pulled forward on its rails)
Redundant cooling fans (accessed from top of enclosure when enclosure is pulled forward on its rails)
Redundant 220-240 V AC power supplies and power cords (accessed from back of enclosure)
I/O interface board, NonStop Blade Element I/O interface board for parallel to serial conversion and maintenance logic (requires removal of enclosure from modular cabinet)
Front panel with indicator LEDs and power buttons
The NonStop Blade Element midplane for logic interconnection and power distribution, which is part of the chassis assembly, is not a FRU.
Two NonStop Blade Elements provide up to four processor elements in a high-availability duplex configuration, and eight NonStop Blade Elements provide a full 16-processor duplex system. For a fault-tolerant triplex system, three NonStop Blade Elements provide four processors, and 12 NonStop Blade Elements provide a full 16-processor triplex system.
NOTE: Integrity NonStop NS16000 series systems do not support duplex and triplex processors within the same system.
This illustration shows the rear of the NonStop Blade Element, equipped with two power supplies and eight Blade optics adapters:
The numbers below the optics connectors refer to the processor numbers within the NonStop Blade Element enclosure. Currently, only connectors J0, J1, J2, and J3 are used. The remaining J connectors are for future use by the NonStop Blade Element hardware.
CAUTION: To maintain proper cooling air flow, blank panels must be installed in all slots that do not contain Blade optics adapters.
Labels for the fiber-optic cable connections can become complex in a large system. The fiber-optic cables for the ServerNet fabric and the Blade optics connections use the same type of optics connector. Because the modular hardware provides considerable flexibility in how the hardware is distributed among multiple cabinets, a single cabinet could contain four NonStop Blade Elements, with each NonStop Blade Element a member of a different NonStop Blade Complex. To reduce ambiguity in identifying proper cable connections to the NonStop Blade Complex, the identification convention uses a combination of a letter and a number for each connection: a number (0, 1, 2, or 3) identifies the NonStop Blade Complex, and a letter (A, B, or C) identifies the NonStop Blade Element. Therefore, a single NonStop Blade Element is identified with an alphanumeric ID, such as A1, A2, A3, and so forth. These IDs refer to the appropriate NonStop Blade Element for proper connection of the fiber-optic cables. The optic cables provide communications between each NonStop Blade Element and the LSU as well as between the LSU and the p-switch PICs on the X fabric and Y fabric. No requirement exists to connect cables from a particular Blade optics adapter on a NonStop Blade Element to a physically corresponding adapter on an LSU. However, to help reduce the complexity of cable connections,
HP recommends that you use a physically sequential order of slots for the fiber-optic cable connections on the LSU rather than randomly mixing the LSU slots. Cable connections to the LSU have no bearing on the NonStop Blade Complex number, but HP also recommends that you connect NonStop Blade Element A to the NonStop Blade Element A connection on the LSU. Optic cable connections to the p-switch PICs determine the identification numbers of each NonStop Blade Complex. This simplified example shows connections from a NonStop Blade Element to the LSU and to the p-switch:
Front Panel Buttons
Button | Function | Condition | Operation
Power | Hard reset | Power is on. | Cycle power and reset or reconfigure logic.
Power | Hard reset | Power is in standby. | Remain in standby.
TOC/NMI | Soft reset | Power is on. | Send initialize interrupt to processors, but without reset or reconfiguration of logic.
Front Panel Indicator LEDs
LED Indicator | State | Meaning
Power | Flashing green | Power is on; NonStop Blade Element is available for normal operation.
Power | Flashing yellow | NonStop Blade Element is in power mode.
Power | Off | Power is off.
Fault | Steady amber | Hardware or software fault exists.
LED Indicator | State | Meaning
Locator | Off | NonStop Blade Element is available for normal operation.
Locator | Flashing blue | System locator is activated.
Logical Synchronization Unit (LSU)
The LSU is the heart of both the high-availability duplex NonStop Blade Complex and the fault-tolerant triplex NonStop Blade Complex. In Integrity NonStop NS16000 series systems, each LSU is associated with only one logical processor. The LSU is the gateway to the ServerNet fabrics for the duplex or triplex NonStop Blade Complexes. It ensures that I/O from logical processors is valid before allowing it into the ServerNet fabric. The LSU module compares the I/O streams output by each NonStop Blade Element to ensure the validity of the data. No data leaves the confines of a single NonStop Blade Element without being compared to and validated against the data from the other NonStop Blade Element (duplex processor) or the other two NonStop Blade Elements (triplex processor).
The LSU enclosure, which is 4U high and weighs 96 pounds (43.5 kilograms) when fully populated, has these physical attributes:
Rackmountable
Redundant AC power feeds
Front-to-rear cooling
Connections for three NonStop Blade Elements
Connections for two ServerNet fabrics
Cable management and connectivity on the rear of the cabinet
Logically, the LSU:
Implements a fault domain that affects only a single logical processor
Has environmental sense and control (ESC) aspects managed by the logical processor
Supports single-point system power-on
The LSU module consists of these FRUs:
LSU logic board (accessible from the front of the LSU enclosure)
LSU optics adapters (accessible from the rear of the LSU enclosure)
AC power assembly (accessible from the rear of the LSU enclosure)
CAUTION: To maintain proper cooling air flow, blank panels must be installed in all slots that do not contain logic adapter PICs or logic boards.
This illustration shows an example LSU configuration as viewed from the rear of the enclosure and equipped with four LSU optics adapter PICs in positions 20 through 23:
This illustration shows an example LSU configuration as viewed from the front of the enclosure and equipped with four LSU logic boards in positions 50 through 53:
LSU Indicator LEDs
LED | State | Meaning
LSU optics adapter PIC (green LED) | Green | Power is on; LSU is available for normal operation.
LSU optics adapter PIC (green LED) | Off | Power is off.
LSU optics adapter PIC (amber LED) | Amber | Power-on is in progress, the board is being reset, or a fault exists.
LSU optics adapter PIC (amber LED) | Off | Normal operation or powered off.
LSU optics adapter connectors (green LEDs) | Green | NonStop Blade Element optics link or ServerNet link is functional.
LSU optics adapter connectors (green LEDs) | Off | Power is off, or a fault exists (amber LED is on).
LSU logic board (green LED) | Green | Power is on with LSU available for normal operation.
LSU logic board (green LED) | Off | Power is off.
LED | State | Meaning
LSU logic board PIC (amber LED) | Amber | Power-on is in progress, the board is being reset, or a fault exists.
LSU logic board PIC (amber LED) | Off | Normal operation or powered off.
Processor Switch
The processor switch, or p-switch, provides the first level of ServerNet fabric interconnect for the Integrity NonStop NS16000 series processors. The ServerNet connection from the LSU also defines the ID numbers, 0 through 15, for the logical processors within the system. In cases where NonStop S-series I/O enclosures provide I/O capabilities and storage for the Integrity NonStop NS16000 series system, the ServerNet connection to the p-switch determines the NonStop S-series I/O enclosure group number, as described in NonStop S-Series I/O Enclosure Group Numbers (page 29). Two p-switches are required, one each for the X and Y ServerNet fabrics.
Physical attributes for the 3U-high p-switch are:
Rackmountable
Dual AC power feeds and power supplies
Dual fans with front-to-rear cooling
Main logic board
Maintenance entity (ME) logic and firmware
Maintenance PIC (slot 1) for connection to the maintenance switch
Cluster PIC (slot 2) for connection to a 6770 or 6780 cluster switch
Crosslink PIC (slot 3) for connection to the other p-switch
ServerNet I/O PICs (slots 4 to 9); provide 24 ServerNet 3 connections to one or more IOAMs, one or more CLIMs, and to optional NonStop S-series I/O enclosures
Processor I/O PICs (slots 10 to 13); connect to the LSU for ServerNet 3 I/O with the processors
Cable management and connectivity on the rear of the cabinet
CAUTION: To maintain proper cooling air flow, blank panels must be installed in all slots that do not contain PICs.
P-switch FRUs include:
ServerNet switch board
Quad MMF PIC (up to 10) for processor connection to LSUs and I/O connections to CLIMs, IOAM enclosures, NonStop S-series I/O enclosures, and ServerNet cluster switch (model 6780)
SMF PIC for connection to ServerNet cluster switch (model 6770)
Cross-link PIC for crossover connections between the two p-switches in the system
Maintenance PIC for Ethernet connection to the maintenance switch and for system power-on (SPON) connection to a NonStop S-series I/O enclosure
Power supplies (2)
Fans (2) 20-character by 2-line liquid crystal display (LCD) for configuration information: IP address Group-module-slot Cabinet name and offset ME firmware revision Field programmable gate array (FPGA) firmware revision Each p-switch is the ServerNet fabric (X or Y) hub for all local and remote ServerNet connections. Functions of the p-switch include: ServerNet interconnect between processors ServerNet interconnect between processors and CLIM ServerNet interconnect between processors and IOAM ServerNet interconnect between processors and IOMF 2 for NonStop S-series I/O enclosure Interface for LAN and system maintenance for: Processor control Environmental sense and control (ESC) Coldload of TACL and EMS windows ServerNet configuration This illustration shows the front of the p-switch: This illustration shows the rear of a fully populated p-switch: Processor Switch 113
P-Switch Indicator LEDs
LED | State | Meaning
All PICs | Green | Power is on with PIC available for normal operation.
All PICs | Off | Power is off.
PIC | Amber | A fault exists.
PIC | Off | Normal operation or powered off.
ServerNet connector (green LED) | Green | ServerNet link is functional.
ServerNet connector (green LED) | Off | ServerNet link is not functional.
Display | Messages | Status messages are displayed.
Processor Numbering
Connection of the ServerNet cables from the LSU to the PICs in p-switch slots 10 through 13 determines the number of the associated logical processor. For more information, see LSUs to Processor Switches and Processor IDs (page 70). This example of a triplex processor shows the ServerNet cabling to the p-switch PIC in slot 10 that defines processors 0, 1, 2, and 3. This configuration is only an example to be used for understanding the interconnection.
CLuster I/O Modules (CLIMs) CLIMs are rack-mounted servers that can function as ServerNet Ethernet or I/O adapters. The CLIM complies with Internet Protocol version 6 (IPv6), an Internet Layer protocol for packet-switched networks, and has passed official certification of IPv6 readiness. Two models of base servers are used for CLIMs. You can determine a CLIM's model by looking at the label on the back of the unit (behind the cable arm). This label refers to the number as a PID, CLuster I/O Modules (CLIMs) 115
although it is not the PID. The same number is listed as the part number in OSM. Below is the mapping for CLIM models and earliest supported RVUs:
Table 5 CLIM Models and RVU Requirements
Model: DL385 G2 or G5
Name on Label: 414109-B21 or 453060-B21
Earliest Supported RVU:
For IP CLIMs: H06.16 and later RVUs
For Telco CLIMs: H06.18 and later RVUs
For Storage CLIMs: H06.20 and later RVUs
Model: DL380 G6
Name on Label: 494329-B21
Earliest Supported RVU:
For IP CLIMs, either: H06.17 through H06.20 RVUs with required SPRs listed in the CLuster I/O Module (CLIM) Software Compatibility Reference installed, or H06.21 and later RVUs
For Telco CLIMs, H06.18 and later RVUs
For IB CLIMs, H06.23 and later RVUs
For Storage CLIMs, either: H06.20 RVU with required SPRs listed in the CLuster I/O Module (CLIM) Software Compatibility Reference installed, or H06.21 and later RVUs
The front of the DL385 G2 or G5 CLIM is shown below:
The front of the DL380 G6 CLIM is shown below:
These CLIM configurations are supported:
IP CLuster I/O Module (CLIM) (page 116)
Telco CLuster I/O Module (CLIM) (page 119)
IB CLuster I/O Module (CLIM) (Optional) (page 121)
Storage CLuster I/O Module (CLIM) (page 123)
The optional CLIM Cable Management Ethernet Patch Panel (page 122) cable management product is a convenient way to configure Ethernet cables in a NonStop cabinet for IP and Telco CLIMs.
IP CLuster I/O Module (CLIM)
The IP CLIM is a rack-mounted server that is part of some NonStop NS16000 series system configurations. See Table 5 (page 116) for RVU and SPR requirements for DL385 G2 or G5 IP CLIMs and DL380 G6 IP CLIMs. The IP CLIM functions as a ServerNet Ethernet adapter providing
HP standard Gigabit Ethernet Network Interface Cards (NICs) to implement one of these IP CLIM configurations: DL385 G2 or G5 IP CLIM Option 1 Five Ethernet Copper Ports (page 118) DL385 G2 or G5 IP CLIM Option 2 Three Ethernet Copper and Two Ethernet Optical Ports (page 119) DL380 G6 IP CLIM Option 1 Five Ethernet Copper Ports (page 119) DL380 G6 IP CLIM Option 2 Three Ethernet Copper and Two Ethernet Optical Ports (page 119) These illustrations show the Ethernet interfaces and ServerNet fabric connections on DL385 G2 or G5 and DL380 G6 IP CLIMs with the IP CLIM option 1 and option 2 configurations. For illustrations of the fronts of these CLIMs, see CLuster I/O Modules (CLIMs) (page 115). CLuster I/O Modules (CLIMs) 117
NOTE: All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. DL385 G2 or G5 IP CLIM Option 1 Five Ethernet Copper Ports IP CLIM Port or Slot Eth 1 port Slot 1 Slot 2 Slot 3 Slot 4 Slot 5 Provides One copper 1 Gb Ethernet interface via embedded Gigabit NIC Four copper 1 Gb Ethernet interfaces via PCIe Gigabit NIC Empty ServerNet fabric connections via a PCIe 4x adapter Empty Empty 118 Modular System Hardware
DL385 G2 or G5 IP CLIM Option 2 Three Ethernet Copper and Two Ethernet Optical Ports IP CLIM Port or Slot Eth 1 port Slot 1 Slot 2 Slot 3 Slot 4 Slot 5 Provides One copper 1 Gb Ethernet interface via embedded Gigabit NIC Two copper 1 Gb Ethernet interfaces via PCIe Gigabit NIC One 1 Gb Ethernet optical port via PCIe Gigabit NIC ServerNet fabric connections via a PCIe 4x adapter One 1 Gb Ethernet optical port via PCIe Gigabit NIC Empty DL380 G6 IP CLIM Option 1 Five Ethernet Copper Ports IP CLIM Port or Slot Eth 1 port Slot 1 Slot 2 Slot 3 Slot 4 Slot 5 Provides One copper 1 Gb Ethernet interface via embedded Gigabit NIC Four copper 1 Gb Ethernet interfaces via PCIe Gigabit NIC Empty ServerNet fabric connections via a PCIe 4x adapter Empty Empty DL380 G6 IP CLIM Option 2 Three Ethernet Copper and Two Ethernet Optical Ports IP CLIM Port or Slot Eth 1 port Eth 2 port Eth 3 port Slot 1 Slot 2 Slot 3 Slot 4 Slot 5 Provides One copper 1 Gb Ethernet interface via embedded Gigabit NIC One copper 1 Gb Ethernet interface via embedded Gigabit NIC One copper 1 Gb Ethernet interface via embedded Gigabit NIC ServerNet fabric connections via a PCIe 4x adapter One 1 Gb Ethernet optical port via PCIe Gigabit NIC One 1 Gb Ethernet optical port via PCIe Gigabit NIC Empty Empty Telco CLuster I/O Module (CLIM) The Telco CLIM is a rack-mounted server that is part of some NonStop NS16000 series system configurations. See Table 5 (page 116) for RVU and SPR requirements for DL385 G2 or G5 Telco CLIMs and DL380 G6 Telco CLIMs. The Telco CLIM is supported as of the H06.17 RVU and utilizes the Message Transfer Part Level 3 User Adaptation layer (M3UA) protocol and functions as a ServerNet Ethernet adapter with one of these Telco CLIM configurations: DL385 G2 or G5 Telco CLIM Five Ethernet Copper Ports (page 120) DL380 G6 Telco CLIM Five Ethernet Copper Ports (page 121) NOTE: All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. CLuster I/O Modules (CLIMs) 119
These illustrations show the Ethernet interfaces and ServerNet fabric connections on a DL385 G2 or G5 and DL380 G6 Telco CLIM. For illustrations of the fronts of these CLIMs, see CLuster I/O Modules (CLIMs) (page 115). DL385 G2 or G5 Telco CLIM Five Ethernet Copper Ports Telco CLIM Port or Slot Eth 1 port Slot 1 Slot 2 Slot 3 Slot 4 Slot 5 Provides One copper 1 Gb Ethernet interface via embedded Gigabit NIC Four copper 1 Gb Ethernet interfaces via PCIe Gigabit NIC Empty ServerNet fabric connections via a PCIe 4x adapter Empty Empty 120 Modular System Hardware
DL380 G6 Telco CLIM Five Ethernet Copper Ports Telco CLIM Port or Slot Eth 1 port Eth 2 port Eth 3 port Slot 1 Slot 2 Slot 3 Slot 4 Slot 5 Provides One copper 1 Gb Ethernet interface via embedded Gigabit NIC One copper 1 Gb Ethernet interface via embedded Gigabit NIC One copper 1 Gb Ethernet interface via embedded Gigabit NIC ServerNet fabric connections via a PCIe 4x adapter Two 1 Gb Ethernet copper interfaces via PCIe Gigabit NIC Empty Empty Empty IB CLuster I/O Module (CLIM) (Optional) The IB CLIM is a rack-mounted HP Proliant DL380 G6 server that is used in some NonStop NS16000 series system configurations to provide InfiniBand connectivity via dual-ported Host Channel Adapter (HCA) InfiniBand interfaces. The HCA IB interface on the IB CLIM connects to a customer-supplied IB switch using a customer-supplied cable as part of the Low Latency Solution. NOTE: IB CLIMs are only used as a Low Latency Solution. They do not provide general purpose InfiniBand connectivity for NonStop Systems. The IB CLIMs are used as part of a Low Latency Solution. The Low Latency Solution architecture provides a high speed and low latency messaging system for stock exchange trading from the incoming trade server to the NonStop operating system. The solution utilizes the third-party Informatica software for messaging and order sequencing, which must be installed separately. For information about obtaining Informatica software, contact your service provider. The Low Latency Solution also requires a customer-supplied IB switch and Subnet Manager software either installed on the IB switch or running on another server. The following illustration shows the IB and Ethernet interfaces and ServerNet fabric connections on an IB CLIM. CLuster I/O Modules (CLIMs) 121
IB CLIM Port or Slot Eth 1, Eth 2, Eth 3 ports Slot 1 Slot 2 and Slot 3 Slot 4 Slot 5 and Slot 6 Description Each Eth port provides one 1 Gb Ethernet copper interface via embedded Gigabit NIC ServerNet fabric connections via a PCIe 4x adapter Unused Two InfiniBand interfaces (ib0 and ib1 ports) via the IB HCA card. Only one IB interface port is utilized by the Informatica software. HP recommends connecting to the ib0 interface for ease of manageability. Unused NOTE: All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. IB CLIM Ports IB CLIM Port or Slot Eth 1 port Eth 2 port Eth 3 port Slot 1 Slot 2 Slot 3 Slot 4 Slot 5 Slot 6 Provides One copper 1 Gb Ethernet interface via embedded Gigabit NIC One copper 1 Gb Ethernet interface via embedded Gigabit NIC One copper 1 Gb Ethernet interface via embedded Gigabit NIC ServerNet fabric connections via a PCIe 4x adapter Empty Empty Two InfiniBand ports (ib0 and ib1) via an IB PCIe card Empty Empty CLIM Cable Management Ethernet Patch Panel The HP Ethernet patch panel cable management product is used for cabling the IP and Telco CLIM connections and is preinstalled in all new systems that are configured with IP or Telco CLIMs. The 122 Modular System Hardware
patch panel simplifies and organizes the cable connections to allow easy access to the CLIM's customer-usable interfaces. IP and Telco CLIMs each have five customer-usable interfaces. The patch panel connects these interfaces and brings the usable interface ports to the patch panel.
Each Ethernet patch panel has 24 slots, is 1U high, and should be the topmost unit at the rear of the rack. Each Ethernet patch panel can handle cables for up to five CLIMs. It has no power connection. Each patch panel has 6 panes labeled A, B, C, D, E, and F. Each pane has 4 RJ45 ports, and each port is labeled 1, 2, 3, or 4. The RJ45 ports in pane A therefore have the port names A1, A2, A3, and A4.
The factory default configuration depends on how many IP or Telco CLIMs and patch panels are configured in the system. For a new system, the Technical Document shipped with the system generates a cable table with the CLIM interface name. This table identifies how the connections between the CLIM physical ports and patch panel ports were configured at the factory. If you are adding a patch panel to an existing system, have your service provider refer to the CLuster I/O (CLIM) Installation and Configuration Guide.
Storage CLuster I/O Module (CLIM)
The Storage CLuster I/O Module (CLIM) is part of some NonStop NS16000 series system configurations. See Table 5 (page 116) for RVU and SPR requirements for DL385 G2 or G5 Storage CLIMs and DL380 G6 Storage CLIMs. The Storage CLIM, supported as of the H06.20 RVU, is a rack-mounted server that functions as a ServerNet I/O adapter with these characteristics:
Dual ServerNet fabric connections
A Serial Attached SCSI (SAS) interface for the storage subsystem via a SAS Host Bus Adapter (HBA) supporting SAS disk drives and SAS tapes
A Fibre Channel (FC) interface for ESS and FC tape devices via a customer-ordered FC HBA
Connections to FCDMs are not supported. DL385 G5 Storage CLIMs and DL380 G6 Storage CLIMs can coexist in a system, but not as a primary/backup CLIM pair for the supported SAS disk enclosures.
NOTE: DL385 G5 Storage CLIMs can coexist with DL380 G6 Storage CLIMs within the same NonStop NS16000 series system only if they control different storage enclosures.
For an illustration of the Storage CLIM HBA slots, refer to Storage CLIM Devices (page 80). Two Storage CLIM configurations are available:
DL385 G2 or G5 Storage CLIM (page 124)
DL380 G6 Storage CLIM (page 124)
NOTE: A Storage CLIM pair supports a maximum of 4 SAS disk enclosures. This maximum applies to all Storage CLIM types: G2, G5, and G6.
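The two pairing rules stated above, a single server model within a primary/backup Storage CLIM pair and at most four SAS disk enclosures per pair, can be captured in a simple planning check. The following Python sketch is illustrative only (not an HP tool); the function name and model strings are assumptions for the example:

# Illustrative planning check (not an HP tool) for a primary/backup Storage CLIM pair:
# both CLIMs must be the same model, and the pair supports at most four SAS disk enclosures.
MAX_SAS_ENCLOSURES_PER_PAIR = 4

def check_storage_clim_pair(primary_model, backup_model, sas_enclosures):
    problems = []
    if primary_model != backup_model:
        problems.append(
            f"Primary ({primary_model}) and backup ({backup_model}) Storage CLIMs "
            "must be the same model to serve as a pair for SAS disk enclosures."
        )
    if sas_enclosures > MAX_SAS_ENCLOSURES_PER_PAIR:
        problems.append(
            f"{sas_enclosures} SAS disk enclosures exceeds the maximum of "
            f"{MAX_SAS_ENCLOSURES_PER_PAIR} per Storage CLIM pair."
        )
    return problems

# Example: a mixed-model pair with five enclosures reports both problems.
for problem in check_storage_clim_pair("DL385 G5", "DL380 G6", 5):
    print(problem)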
DL385 G2 or G5 Storage CLIM
The DL385 G2 or G5 Storage CLIMs contain 5 PCIe HBA slots with these characteristics:
Storage CLIM HBA Slot | Configuration | Provides
1 | Optional customer order | SAS or Fibre Channel (not part of base configuration; optional customer order)
2 | Optional customer order | SAS or Fibre Channel (not part of base configuration; optional customer order)
3 | Part of base configuration | ServerNet fabric connections via a PCIe 4x adapter
4 | Part of base configuration | One SAS external connector with four SAS links per connector and 3 Gbps per link, provided by the PCIe 8x slot
5 | Part of base configuration | One SAS external and internal connector with four SAS links per connector and 3 Gbps per link, provided by the PCIe 8x slot
DL380 G6 Storage CLIM
The DL380 G6 Storage CLIM contains 4 PCIe HBA slots with these characteristics:
Storage CLIM HBA Slot | Configuration | Provides
1 | Part of base configuration | ServerNet fabric connections via a PCIe 4x adapter
2 | Part of base configuration | One SAS external connector with two SAS links per connector and 6 Gbps per link, provided by the PCIe 8x slot
3 | Optional customer order | SAS or Fibre Channel (not part of base configuration; optional customer order)
4 | Optional customer order | Fibre Channel (not part of base configuration; optional customer order)
Serially Attached SCSI (SAS) Disk Enclosure
A SAS disk enclosure is a rackmounted enclosure that is part of some NonStop NS16000 series configurations. The SAS disk enclosure supports up to 25 SAS disk drives, 3 Gbps or 6 Gbps SAS protocol, and a dual SAS domain from Storage CLIMs to dual-port SAS disk drives. The SAS disk enclosure supports connections to SAS disk drives only; connections to FCDMs are not supported. For more information about the SAS disk enclosure, refer to the manual for your SAS disk enclosure model (for example, the HP StorageWorks 70 Modular Smart Array Enclosure Maintenance and Service Guide).
Two models of SAS disk enclosures are supported. This table describes the types of SAS disk enclosures and shows compatibility between the different CLIM and SAS disk enclosure models:
SAS Disk Enclosure Model | Description | Compatibility With CLIM Models (1): DL385 G2 or G5 | DL380 G6 | Daisy-chain SAS Disk Enclosures?
MSA70 | HP Storage 70 Modular Smart Array Enclosure, holding 25 2.5-inch hard disk drives (HDDs), with redundant power supply and cooling | Yes | No | Yes, maximum 4
D2700 | HP Storage D2700 Disk Enclosure, holding 25 2.5-inch hard disk drives (HDDs) or Solid State Drives (SSDs), with redundant power supply and cooling | No | Yes | No
1 A Storage CLIM pair supports a maximum of 4 SAS disk enclosures. This maximum applies to all Storage CLIM types: G2, G5, G6.
An MSA70 SAS disk enclosure supports 3 Gbps SAS protocol. It contains:
Twenty-five 2.5-inch dual-ported disk drive slots
Two power supplies
Two fans
Two independent I/O modules: SAS Domain A and SAS Domain B
NOTE: MSA70 SAS disk enclosures support SAS HDDs, but not SAS SSDs. MSA70 SAS disk enclosures do not support the disk partitioning feature.
A D2700 SAS disk enclosure supports 6 Gbps SAS protocol. It contains:
Twenty-five 2.5-inch dual-ported disk drive slots
Two power supplies
Two fans
Two independent I/O modules: SAS Domain A and SAS Domain B
NOTE: With H06.23 and later RVUs, D2700 disk enclosures support the Disk Partitioning feature, which allows a single SAS HDD or SAS SSD to be partitioned into multiple logical devices. See the SCF Reference Manual for the Storage Subsystem for details about disk partitioning. With H06.24 and later RVUs, D2700 disk enclosures can contain HDDs or SSDs.
I/O Adapter Module (IOAM) Enclosure and I/O Adapters
An IOAM provides the Integrity NonStop NS16000 series system with its system I/O using Gigabit 4-port Ethernet ServerNet adapters (G4SAs) for LAN connectivity and Fibre Channel ServerNet adapters (FCSAs) for storage connectivity.
IOAM Enclosure
An IOAM enclosure is 11U high. Each enclosure contains:
Four power supplies with dual AC input
Four fans that provide front-to-rear cooling
Two ServerNet switch boards (one each for the X fabric and Y fabric) for ServerNet routing to the ServerNet adapters in the IOAM enclosure. These boards also have maintenance entity (ME) logic and firmware, a maintenance connection (Ethernet), and an LCD for:
IP address
Group-module-slot
Cabinet name and offset
ME firmware revision
Field programmable gate array (FPGA) firmware revision
Up to ten dual-ported (X and Y ServerNet fabrics) ServerNet adapters:
FCSAs to Fibre Channel disk modules, tape devices, or an Enterprise Storage System (ESS)
G4SAs for 10/100/1000 and 10/100 Ethernet connectivity to local and wide area networks
CAUTION: To maintain proper cooling air flow, blank panels must be installed in all slots that do not contain ServerNet adapters.
The four power supplies do not support the entire IOAM enclosure. Instead, two power supplies support each of the enclosure's two modules. For each module to operate, at least one power supply in each module must be operational. For fault tolerance, you must connect the power supplies in each module to separate power distribution units (PDUs). For example, the power supplies in (2, 15) and (2, 18) cannot both be plugged into the left PDU.
The dual IOAM modules in each IOAM enclosure promote fault-tolerant configuration and servicing:
Redundant hardware provides fault tolerance if it is divided between modules.
Fault-tolerant paths must be configured through different modules.
This illustration shows the front and rear of the IOAM enclosure and details:
Because the IOAM enclosure contains two modules, one IOAM enclosure can provide fault-tolerant paths. However, the paths must be configured through both modules. For example, you could configure these paths through four FCSAs:
Path | Group | Module | Slot
Primary | 110 | 02 | 02
Primary backup | 110 | 03 | 02
Mirror | 110 | 03 | 04
Mirror backup | 110 | 02 | 04
These paths all exist within the same group, but they are divided between the two modules, so the configuration is fault-tolerant. For additional information, see Fibre Channel Devices (page 92).
IOAM Enclosure Indicator LEDs
ServerNet Switch Board LED | State | Meaning
Power | Green | Power is on; board is available for normal operation.
Power | Off | Power is off.
Alert | Amber | A fault exists.
Alert | Off | Normal operation or powered off.
Enet Port | Green | Link is functional.
Enet Port | Off | Link is not functional.
LCD Display | Messages | Messages are displayed.
ServerNet Ports | Green | ServerNet link is functional.
ServerNet Ports | Off | ServerNet link is not functional.
Reset | Amber | Board is in logic reset.
Reset | Off | Normal operation or powered off.
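The fault-tolerance rule behind the path example above, that each path and its backup must pass through different IOAM modules, can be expressed as a small check. The Python sketch below is illustrative only (not an HP tool); the path names and tuples mirror the example table:

# Minimal sketch (not an HP tool): verify that each path and its backup in the
# example above are configured through different IOAM modules, even though they
# share the same group.
paths = {
    "primary":        (110, 2, 2),   # (group, module, slot)
    "primary backup": (110, 3, 2),
    "mirror":         (110, 3, 4),
    "mirror backup":  (110, 2, 4),
}

def is_fault_tolerant(paths):
    for path, backup in (("primary", "primary backup"), ("mirror", "mirror backup")):
        if paths[path][1] == paths[backup][1]:   # same module: a single point of failure
            return False
    return True

print(is_fault_tolerant(paths))  # True for the example configuration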
Fibre Channel ServerNet Adapter (FCSA)
The FCSA provides two ports, or ServerNet addressable controllers called SACs, for Fibre Channel connectivity between the system and:
Fibre Channel disk module
Enterprise Storage System (ESS)
Fibre Channel to SCSI converter for connection to a tape drive listed in the NonStop NS16000 Series Hardware Installation Manual
This illustration shows the front of an FCSA:
FCSAs are installed in pairs and can reside in slots 1 through 5 of either IOAM (module 2 or 3) in an IOAM enclosure. The pairs can be installed one each in the two IOAM modules in the same IOAM enclosure, or the pair can be installed one each in different IOAM enclosures. The FCSA allows either a direct connection to an Enterprise Storage System (ESS) or connection through a storage area network. For detailed information on the FCSA, see the Fibre Channel ServerNet Adapter Installation and Support Guide. Gigabit Ethernet 4-Port ServerNet Adapter The Gigabit Ethernet 4-port ServerNet adapter (G4SA) provides Gigabit connectivity to Ethernet LANs. G4SAs can reside in slots 1 through 5 of each IOAM module. A combination of copper and fiber-optic interfaces are supported on the six ports of the G4SA, although only four of these ports can be in use at any one time. I/O Adapter Module (IOAM) Enclosure and I/O Adapters 129
A G4SA can be configured as: Two 10/100/1000 Mbps copper ports and two 10/100 Mbps copper ports Two 10/100/1000 Mbps multimode fiber-optic ports and two 10/100 Mbps copper ports A G4SA complies with the 1000 Base-T standard (802.3ab), 1000 Base-SX standard (802.3z), and these Ethernet LANs: 802.3 (10 Base-T) 802.1Q (VLAN tag-aware switch) 802.3u (Auto negotiate) 802.3x (Flow control) 802.3u (100 Base-T and 1000 Base-T) For detailed information on the G4SA, see the NonStop Gigabit Ethernet 4-Port Installation and Support Guide. Fibre Channel Disk Module The Fibre Channel disk module is a rackmounted enclosure that contains: Up to 14 Fibre Channel arbitrated loop disk drives (enclosure front) Environmental monitoring unit (EMU) (enclosure rear) Fibre Channel arbitrated loop (FC-AL) modules (enclosure rear) A Fibre Channel disk module connects to a FCSA in an IOAM enclosure. You can daisy-chain together up to four Fibre Channel disk modules with 14 drives in each one. This illustration shows a fully populated Fibre Channel disk module: Tape Drive and Interface Hardware For an overview of tape drives and the interface hardware, see FCSA to Tape Devices (page 79). For a list of supported tape devices, refer to the NonStop Storage Overview manual. Maintenance Switch (Ethernet) The ProCurve maintenance switch includes management features that NS16000 series systems require and provides the communication between the NS16000 series system at the switch boards 130 Modular System Hardware
in the p-switches and IOAM enclosure, and optional UPS and the system console running HP NonStop Open System Management (OSM). The maintenance switch includes enough ports to support multiple systems. The maintenance switch mounts in the modular cabinet, but no restrictions exist for its placement. This illustration shows an example of two maintenance switches installed in the top of a cabinet: Each system requires multiple connections to the maintenance switch. For more information, refer to the connections described in Basic LAN Configuration (page 137) and Fault-Tolerant Configuration (page 138). The preferred configuration is connections to two maintenance switches. UPS and ERM (Optional) An uninterruptible power supply (UPS) is optional but recommended where a site UPS is not available. You can use any UPS that meets the modular cabinet power requirements for all enclosures being powered by the UPS. One UPS option is the HP R5000 UPS. For information about the requirements for installing a UPS other than the HP R5000 UPS in an Integrity NonStop NS16000 series system, see Uninterruptible Power Supply (UPS) (page 33). Cabinet configurations that include an R5000 UPS or R5500 XR UPS also have one extended runtime module (ERM). An ERM is a battery module that extends the overall battery-supported system run time. A second ERM can be added for even longer battery-supported system run time. Use the R5000 ERM(s) with the R5000 UPS. Use the R5500 XR ERM(s) with the R5500 XR UPS. Adding an R5000 UPS to a modular cabinet in the field requires removing the PDU on the right side of the modular cabinet and installing HP extension bars that are compatible with the UPS. A factory-installed UPS ships with the HP extension bars already installed on the right side of the modular cabinet. The PDU and extension bars are oriented inward, facing the components within the modular cabinet. Both the UPS and the ERM are 3U high and must reside in the bottom of the cabinet. NOTE: Retrofitting a system in the field with a UPS and ERMs will likely require moving all installed enclosures in the modular cabinet to provide space for the new hardware. One or more of the enclosures that formerly resided in the modular cabinet might be displaced and therefore have to be installed in another modular cabinet that would also need a UPS and ERMs installed. Additionally, lifting equipment might be required to lift heavy enclosures to their new location. NOTE: The AC input power cord for the R5000 UPS is routed to exit the modular cabinet at either the top or bottom rear corners of the cabinet, depending on what is ordered for the site power feed, and the large output receptacle is unused. UPS and ERM (Optional) 131
This illustration shows the location of an R5000 UPS and an ERM in a modular cabinet: This illustration shows the location of an R5500 XR UPS and an ERM in a modular cabinet: For power and environmental requirements, planning, installation, and emergency power-off (EPO) instructions for the UPS, refer to the documentation shipped with the UPS. System Console A system console is an HP approved personal computer (PC) running maintenance and diagnostic software for NonStop systems. When supplied with a new NonStop system, system consoles have factory-installed HP and third-party software for managing the system. You can install software upgrades from the HP NonStop System Console Installer DVD image. Some system console hardware, including the Windows Server unit, monitor, and keyboard, can be mounted in the Integrity NonStop NS16000 series modular cabinet. Other Windows Server units are installed outside the modular cabinet and require separate provisions or furniture to hold the Windows Server unit hardware. Two system consoles, a primary and a backup, are recommended to manage NonStop systems. Two CLuster I/O Modules (CLIMs) can be configured to run the DHCP, TFTP, and DNS Windows-based services instead of the system consoles. For more information, see DHCP, TFTP, and DNS Windows-Based Services (page 139). 132 Modular System Hardware
NOTE: The NonStop system console must be configured with some ports open. For more information, see the NonStop System Console Installer Guide.
For more information on the system console, refer to System Console (page 147).
Enterprise Storage System
An Enterprise Storage System (ESS) is a collection of magnetic disks, their controllers, and a disk cache in one or more standalone cabinets. ESS connects to the Integrity NonStop NS16000 series systems either directly via FCSAs in IOAM enclosures (direct connect) or through a separate storage area network (SAN) using a Fibre Channel SAN switch (switched connect). For more information about these connection types, see the Fibre Channel ServerNet Adapter Installation and Support Guide.
High-availability and fault-tolerant configurations for one or two IOAM enclosures and pairs of FCSAs are similar to the configurations required for Fibre Channel disk drives, as explained in IOAM Enclosure and Disk Storage Considerations (page 92). Cables and switches vary, depending on whether the connection is direct, switched, or a combination:
Connection | LC-LC Cables | Fibre Channel Switches
Direct connect | 2 per FCSA | 0
Switched | 4 per FCSA | 1 or more
Combination of direct and switched | 2 per FCSA for each direct connection, 4 per FCSA for each switched connection | 1
This illustration shows an example of connections between two IOAM enclosures and an ESS via the separate Fibre Channel switch:
For fault tolerance, the primary and backup paths to an ESS logical device (LDEV) must go through different Fibre Channel switches. Some storage area network procedures, such as reconfiguration, can cause the affected switches to pause. If the pause is long enough, I/O failure occurs on all paths connected to that switch. If both the primary and the backup paths are connected to the same switch, the LDEV goes down.
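The LC-LC cable counts in the preceding table can be totaled for a planned mix of direct and switched connections. This short Python sketch is only an illustration of that arithmetic (not an HP tool); the function and parameter names are invented for the example:

# Illustrative arithmetic (not an HP tool): 2 LC-LC cables per FCSA for each direct
# ESS connection and 4 per FCSA for each switched connection, per the table above.
def lc_lc_cables(direct_fcsas, switched_fcsas):
    return 2 * direct_fcsas + 4 * switched_fcsas

# Example: two FCSAs direct-connected to the ESS and two FCSAs connected through
# Fibre Channel switches need 2*2 + 4*2 = 12 LC-LC cables.
print(lc_lc_cables(direct_fcsas=2, switched_fcsas=2))  # 12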
Refer to the documentation that accompanies the ESS. NonStop S-Series I/O Enclosure NonStop S-series I/O enclosures equipped with model 1980 I/O multifunction 2 customer replaceable units (IOMF 2 CRUs) can be connected to the NonStop NS16000 series server via fiber-optic ServerNet cables and the processor switch (p-switch). All hardware and I/O connectivity that is currently supported for NonStop S-series I/O enclosures is also supported with the Integrity NonStop NS16000 series servers, with the exception of FOX ring connectivity. The model 1952 IOMF CRU, ServerNet/FX, and ServerNet/FX 2 adapters are not supported. For site preparation specifications for the NonStop S-series I/O enclosures, see Chapter 3 (page 37). For migration or connection information, see Appendix D (page 161). 134 Modular System Hardware
6 Hardware Configurations
Minimum and Maximum Hardware Configuration
This table shows the minimum, typical, and maximum number of the modular components installed in a system. These values might not reflect the system you are planning and are provided only as an example, not as exact values.
Enclosure or Component | Duplex Processor Minimum | Duplex Processor Maximum | Triplex Processor Minimum | Triplex Processor Maximum
4-processor NonStop Blade Element with 16 DIMMs | 2 | 8 | 3 | 12
4-GB memory quad | 4 | 32 | 6 | 48
Processor board with two 1.6 GHz processors | 2 | - | 3 | -
Processor board with four 1.6 GHz processors | - | 8 | - | 12
LSU logic board and optics adapter | 2 | 16 | 2 | 16
P-switch | 2 | 2 | 2 | 2
CLIMs (see NonStop NS16000 Series System Overview (page 19) for details) | 0 | 20 | 0 | 20
CLIM cable management Ethernet patch panel | 1 if IP or Telco CLIMs in system, 0 if not | 1 per modular cabinet | 1 if IP or Telco CLIMs in system, 0 if not | 1 per modular cabinet
SAS disk enclosure | 0 | 44 | 0 | 44
SAS disk drive | 0 | 550 | 0 | 550
IOAMs | 1 | 6 | 1 | 6
FCSA | 2 | Up to 60 in mixture set by disks and I/O | 2 | Up to 60 in mixture set by disks and I/O
G4SA | 2 | Up to 60 in mixture set by disks and I/O | 2 | Up to 60 in mixture set by disks and I/O
Fibre Channel disk module per FCSA pair | 2 | 8 | 2 | 8
Fibre Channel disk drives per FCSA pair | 14 | 112 | 14 | 112
Enclosure Locations in Cabinets
Each delivery of an Integrity NonStop NS16000 series system or component includes a Technical Document that describes:
Each cabinet included with the system
Each hardware enclosure installed in the cabinet
Cabinet U location of the bottom edge of each enclosure
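The NonStop Blade Element counts in the minimum and maximum configuration table above follow from the duplex and triplex ratios described in the NonStop Blade Element section: each group of up to four logical processors uses two Blade Elements in a duplex system and three in a triplex system. The following Python sketch is a rough planning illustration only (not an HP sizing tool), and the function name is invented for this example:

# Rough planning sketch (not an HP tool). Each NonStop Blade Complex of up to four
# logical processors uses two NonStop Blade Elements in a duplex system and three
# in a triplex system.
import math

BLADE_ELEMENTS_PER_COMPLEX = {"duplex": 2, "triplex": 3}

def blade_elements_needed(logical_processors, mode):
    if not 1 <= logical_processors <= 16:
        raise ValueError("an NS16000 series system has at most 16 logical processors")
    complexes = math.ceil(logical_processors / 4)
    return complexes * BLADE_ELEMENTS_PER_COMPLEX[mode]

print(blade_elements_needed(16, "duplex"))   # 8, matching the duplex maximum above
print(blade_elements_needed(16, "triplex"))  # 12, matching the triplex maximum above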
7 Maintenance and Support Connectivity Local monitoring and maintenance of the Integrity NonStop NS16000 series system occurs over the dedicated service LAN. The dedicated service LAN provides connectivity between the system console and the maintenance infrastructure in the system hardware. Remote support is provided in conjunction with OSM, which runs on the system console and communicates with the chosen remote access solution. HP Insight Remote Support Advanced is now the go-forward remote support solution for NonStop systems, replacing the OSM Notification Director in both modem-based and HP Instant Support Enterprise Edition (ISEE) remote support solutions. For more information on Insight Remote Support Advanced, please refer to Insight Remote Support Advanced for NonStop in the external Service Information collection of NTL. Only components specified by HP can be connected to the dedicated LAN. No other access to the LAN is permitted. A maximum of eight NonStop systems can be connected to the dedicated service LAN. The dedicated service LAN uses a ProCurve Ethernet switch for connectivity between the p-switches and ServerNet switch boards for each IOAM or CLIM and the system console. An important part of the system maintenance architecture, the system console is a Windows Server unit purchased from HP to run maintenance and diagnostic software for NonStop NS16000 series systems. Through the system console, you can: Monitor system health and perform maintenance operations using the HP NonStop Open System Management (OSM) interface View manuals and service procedures Run HP Tandem Advanced Command Language (TACL) sessions using terminal-emulation software Install and manage system software using the Distributed Systems Management/Software Configuration Manager (DSM/SCM) Make remote requests to and receive responses from a system using remote operation software Dedicated Service LAN An Integrity NonStop NS16000 series system requires a dedicated LAN for system maintenance through OSM. Only components specified by HP can be connected to a dedicated LAN. No other access to the LAN is permitted. This subsection includes: Basic LAN Configuration (page 137) Fault-Tolerant Configuration (page 138) IP Addresses (page 140) Ethernet Cables (page 142) SWAN Concentrator Restriction (page 142) Dedicated Service LAN Links With One IOAM Enclosure (page 143) Dedicated Service LAN Links Using IP CLIMs (page 142) Dedicated Service LAN Links With IOAM Enclosure and NonStop S-Series I/O Enclosure (page 145) Dedicated Service LAN Links With NonStop S-Series I/O Enclosure (page 145) Initial Configuration for a Dedicated Service LAN (page 146) Additional Configuration for OSM (page 147) 136 Maintenance and Support Connectivity
Basic LAN Configuration
A basic dedicated service LAN that does not provide a fault-tolerant configuration requires connection of these components to the ProCurve maintenance switch installed in the modular cabinet:
One connection for each system console running OSM
One connection to the processor switch on the X fabric for OSM Low-Level Link to a down system
One connection to the processor switch on the Y fabric for OSM Low-Level Link to a down system
One connection to the maintenance interface (eth0) for each CLIM
One connection to the ilo interface for each CLIM
One connection to the IOAM 2 ServerNet switch board for OSM control of the I/O hardware
One connection to the IOAM 3 ServerNet switch board for OSM control of the I/O hardware
Connections to both the X and Y fabrics (for fault tolerance) for OSM system-up maintenance (any one of these connections is valid as long as there are at least two connections total):
Gigabit 4-port ServerNet adapters (G4SAs) installed in an IOAM enclosure
Ethernet 4-port ServerNet adapters (E4SAs), Fast Ethernet ServerNet adapters (FESAs), or Gigabit Ethernet ServerNet adapters (GESAs) installed in a NonStop S-series I/O enclosure with IOMF 2 CRUs
One connection to the UPS (optional) for power-fail monitoring
One connection to the Fibre Channel to SCSI converter (optional) for Fibre Channel tape
This illustration shows a basic LAN configuration with one maintenance switch:
Fault-Tolerant Configuration Your HP-authorized service provider configures the dedicated service LAN as described in the NonStop Dedicated Service LAN Installation and Configuration Guide. HP recommends that you use a fault-tolerant LAN configuration. A fault-tolerant configuration includes these connections to two maintenance switches: A system console to each maintenance switch One processor switch on the X fabric to the first maintenance switch One processor switch on the Y fabric to the second maintenance switch For every CLIM pair, connect the ilo and eth0 ports of the primary CLIM to one maintenance switch, and the ilo and eth0 ports of the backup CLIM to the second maintenance switch. The primary and backup CLIMs are defined, based on the CLIM-to-CLIM failover configuration. NOTE: For more information about CLIM-to-CLIM failover, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. One of the two IOAM enclosure ServerNet switch boards to each maintenance switch If CLIMs are used to configure the maintenance LAN, connect the CLIM that configures $ZTCP0 to one maintenance switch, and connect the other CLIM that configures $ZTCP1 to the second maintenance switch. 138 Maintenance and Support Connectivity
If G4SAs are used to configure the maintenance LAN, connect the G4SA that configures $ZTCP0 to one maintenance switch, and connect the other G4SA that configures $ZTCP1 to the second maintenance switch.
One E4SA, FESA, or GESA on the X fabric to the first maintenance switch
One E4SA, FESA, or GESA on the Y fabric to the second maintenance switch
One connection to the UPS (optional) for power-fail monitoring
One connection to the Fibre Channel to SCSI converter (optional) for Fibre Channel tape
This illustration shows a fault-tolerant LAN configuration with two maintenance switches:
DHCP, TFTP, and DNS Windows-Based Services
DHCP, TFTP, and DNS Windows-based services are required for NonStop NS16000 series systems with CLIMs. These services can reside either on the primary and backup system consoles or on a pair of CLuster I/O Modules (CLIMs). By default for commercial NonStop NS16000 series systems, HP ships these services on NonStop system consoles. You can move these services from the system consoles to CLIMs or from the CLIMs to system consoles. Procedures for moving these services are
located in the Service Information section of the Service Procedures collection of NTL. For details, see: Changing the DHCP, DNS, or BOOTP Server from CLIMs to System Consoles Changing the DHCP, DNS, or BOOTP Server from System Consoles to CLIMs CAUTION: You must have only two sources of these services in the same dedicated service LAN. If these services are installed on any other sources, they must be disabled. To determine the location of these services, see Locating and Troubleshooting DHCP, TFTP, and DNS Services on the NonStop Dedicated LAN. You cannot have these services divided between a CLIM and a system console. This mixed configuration is not supported. IP Addresses Integrity NonStop NS16000 series servers require Internet protocol (IP) addresses for these components that are connected to the dedicated service LAN: ServerNet switch boards in the p-switch ServerNet switch boards in the IOAM enclosure CLuster I/O Modules (CLIMs) FESAs, G4SAs, E4SAs, and GESAs Maintenance switch System consoles OSM Low-Level Link OSM Service Connection UPS (optional) Fibre Channel to SCSI converter (optional) These components have default IP addresses that are preconfigured at the factory. You must change these preconfigured IP addresses to addresses appropriate for the LAN environment: Component Primary system console (rack-mounted or stand-alone) Backup system console (rack-mounted only) Maintenance switch (ProCurve) (First switch) Maintenance switch (ProCurve) (Additional switches) Processor-switch CLIM ilos CLIM Maintenance Interfaces Location N/A N/A N/A N/A 100.2.14 100.3.14 CLIM at 100.2.4.1 CLIM at 100.2.4.2 CLIM at 100.2.4.3 CLIM at 100.2.4.4 CLIM at 100.2.5.1 Default IP Address 192.231.36.1 192.231.36.4 192.231.36.21 192.231.36.22 192.231.36.30 192.231.36.202 192.231.36.203 Assigned by DHCP server on the LAN 192.231.36.41 192.231.36.42 192.231.36.43 192.231.36.44 192.231.36.51 140 Maintenance and Support Connectivity
Component IOAM enclosure (ServerNet switch boards) First UPS (rack-mounted only) Additional UPSs (rack-mounted only) NonStop system console DHCP server settings: Location CLIM at 100.2.5.2 CLIM at 100.2.5.3 CLIM at 100.2.5.4 CLIM at 100.2.6.1 CLIM at 100.2.6.2 CLIM at 100.2.6.3 CLIM at 100.2.6.4 CLIM at 100.2.7.1 CLIM at 100.2.7.2 CLIM at 100.2.7.3 CLIM at 100.2.7.4 CLIM at 100.2.8.1 CLIM at 100.2.8.2 CLIM at 100.2.8.3 CLIM at 100.2.8.4 CLIM at 100.2.9.1 CLIM at 100.2.9.2 CLIM at 100.2.9.3 CLIM at 100.2.9.4 110.2.14 110.3.14 111.2.14 111.3.14 112.2.14 112.3.14 113.2.14 113.3.14 114.2.14 114.3.14 115.2.14 115.3.14 N/A N/A Primary system console starting IP address Primary system console ending IP address Primary system console subnet mask Backup system console starting IP address Backup system console ending IP address Backup system console subnet mask Default IP Address 192.231.36.52 192.231.36.53 192.231.36.54 192.231.36.61 192.231.36.62 192.231.36.63 192.231.36.64 192.231.36.71 192.231.36.72 192.231.36.73 192.231.36.74 192.231.36.81 192.231.36.82 192.231.36.83 192.231.36.84 192.231.36.91 192.231.36.92 192.231.36.93 192.231.36.94 192.231.36.222 192.231.36.223 192.231.36.224 192.231.36.225 192.231.36.226 192.231.36.227 192.231.36.228 192.231.36.229 192.168.36.230 192.231.36.231 192.231.36.232 192.231.36.233 192.231.36.31 192.231.36.32-192.231.36.38 192.231.36.101 192.231.36.150 255.255.255.0 192.231.36.151 192.231.36.200 255.255.255.0 Dedicated Service LAN 141
Component Location TCP/IP processes for OSM Service Connection: $ZTCP0 $ZTCP1 Default IP Address 192.231.36.10 255.255.255.0 subnet mask 192.231.36.11 255.255.255.0 subnet mask Ethernet Cables Ethernet connections for a dedicated service LAN require Category 6 (CAT 6) unshielded twisted-pair cables. Category 5e (CAT 5e) cable is also acceptable. SWAN Concentrator Restriction Isolate the dedicated service LAN from any ServerNet wide area network (SWAN) on a system. Isolate redundantly configured SWAN concentrator subnets from the dedicated service LAN. Do not connect SWANs on a subnet containing a DHCP. One possible fault-tolerant configuration is a pair of G4SAs (each in a different IOAM) with a dedicated service LAN connected to the A port on each G4SA. The B port is then available to support the SWAN. Dedicated Service LAN Links Using IP CLIMs You can implement up-system service LAN connectivity using IP CLIMs, if the system has at least two IP CLIMs. The values in this table show the identification for the CLIMs in a NonStop NS16000 series system and connected to the maintenance switch. In this table, an IP CLIM named N100291 is connected to the first port and PIC 9 of the p-switch in Group 100, module 2: CLIM Location 100.2.9.1 100.2.9.2 TCP/IP Stack $ZTCP0 $ZTCP1 IP Configuration IP: 192.231.36.10 Subnet: %hffffff00 Hostname: osmlanx IP: 192.231.36.11 Subnet: %hffffff00 Hostname: osmlany NOTE: For a fault-tolerant dedicated service LAN, two IP CLIMs are required, with each IP CLIM connected to a separate maintenance switch. Dedicated Service LAN Links Using G4SAs You can implement system-up service LAN connectivity using G4SAs or IP CLIMs. The values in this table show the identification for G4SAs in slot 5 of both modules of an IOAM enclosure and connected to the maintenance switch: GMS for G4SA Location in IOAME G4SA PIF G4SA PIF TCP/IP Stack IP Configuration 110.2.5 G11025.0.A L1102R $ZTCP0 IP: 192.231.36.10 142 Maintenance and Support Connectivity
GMS for G4SA Location in IOAME G4SA PIF G4SA PIF TCP/IP Stack IP Configuration Subnet: %hffffff00 Hostname: osmlanx 110.3.5 G11035.0.A L1103R $ZTCP1 IP: 192.231.36.11 Subnet: %hffffff00 Hostname: osmlany NOTE: For a fault-tolerant dedicated service LAN, two G4SAs are required, with each G4SA connected to a separate maintenance switch. These G4SA can reside in modules 2 and 3 of the same IOAM enclosure or in module 2 of one IOAM enclosure and module 3 of a second IOAM enclosure. When the G4SA provides connection to the dedicated service LAN, use the slower 10/100 Mbps PIF A rather than one of the high-speed 1000 Mbps Ethernet ports of PIF C or D. Dedicated Service LAN Links With One IOAM Enclosure This illustration shows the dedicated service LAN cables connected to the G4SAs in slot 5 of both modules of an IOAM enclosure and to the maintenance switch: The values in this table show the identification for the G4SAs in the preceding illustration: GMS for G4SA Location in IOAME G4SA PIF G4SA PIF TCP/IP Stack IP Configuration 110.2.5 G11025.0.A L1102R $ZTCP0 IP: 192.231.36.10 GW: 192.231.36.9 Subnet: %hffffff00 Hostname: osmlanx 110.3.5 G11035.0.A L1103R $ZTCP1 IP: 192.231.36.11 GW: 192.231.36.9 Subnet: Dedicated Service LAN 143
GMS for G4SA Location in IOAME G4SA PIF G4SA PIF TCP/IP Stack IP Configuration %hffffff00 Hostname: osmlany Dedicated Service LAN Links to Two IOAM Enclosures This illustration shows dedicated service LAN cables connected to G4SAs in two IOAM enclosures and to the maintenance switch: The values in this table show the identification for the G4SAs in the preceding illustration: GMS for G4SA Location in IOAME G4SA PIF G4SA PIF TCP/IP Stack IP Configuration 110.2.5 G11025.0.A L1102R $ZTCP0 IP: 192.231.36.10 GW: 192.231.36.9 Subnet: %hffffff00 Hostname: osmlanx 111.3.5 G11135.0.A L1113R $ZTCP1 IP: 192.231.36.11 GW: 192.231.36.9 Subnet: %hffffff00 Hostname: osmlany 144 Maintenance and Support Connectivity
Dedicated Service LAN Links With IOAM Enclosure and NonStop S-Series I/O Enclosure This illustration shows dedicated service LAN cables connected to a G4SA in an IOAM enclosure and at least one NonStop S-series Ethernet adapter (E4SA, FESA, or GESA) in a NonStop S-series I/O enclosure (module 12 in this example) and to the maintenance switch: In this example, the G4SA in module 2 of the IOAM enclosure connects to the X ServerNet fabric while the adapter in the NonStop S-series I/O enclosure connects to the Y ServerNet fabric using the dual-ported slot 54. For information on the NonStop S-series I/O enclosure, see the NonStop S-Series Planning and Configuration Guide. The values in this table show the identification for the adapters in the preceding illustration: GMS for G4SA Location in IOAME G4SA PIF G4SA PIF TCP/IP Stack IP Configuration 110.2.5 G11025.0.A L1102R $ZTCP0 IP: 192.231.36.10 GW: 192.231.36.9 Subnet: %hffffff00 Hostname: osmlanx 12.1.54 E1254.0.A L12C $ZTCP1 IP: 192.231.36.11 GW: 192.231.36.9 Subnet: %hffffff00 Hostname: osmlany Dedicated Service LAN Links With NonStop S-Series I/O Enclosure This illustration shows dedicated service LAN cables connected to two NonStop S-series Ethernet adapters (E4SA, FESA, or GESA) in a NonStop S-series I/O enclosure (module 12 in this example) and to the maintenance switch: Dedicated Service LAN 145
This configuration can be used in cases where a NonStop NS-series system does not have an IOAM enclosure and only the NonStop S-series I/O enclosure provides the system I/O connections and mass storage: GMS for G4SA Location in IOAME G4SA PIF G4SA PIF TCP/IP Stack IP Configuration 11.1.53 E1153.0.A L118 $ZTCP0 IP: 192.231.36.10 GW: 192.231.36.9 Subnet: %hffffff00 Hostname: osmlanx 11.1.54 E1154.0.A L11C $ZTCP1 IP: 192.231.36.11 GW: 192.231.36.9 Subnet: %hffffff00 Hostname: osmlany Initial Configuration for a Dedicated Service LAN New systems are shipped with an initial set of IP addresses configured. For a listing of these initial IP addresses, see IP Addresses (page 140). Factory-default IP addresses for the G4SA and E4SA adapters are in the LAN Configuration and Management Manual. IP addresses for SWAN concentrators are in the WAN Subsystem Configuration and Management Manual. HP recommends that you change these preconfigured IP addresses to addresses appropriate for your LAN environment. You must change the preconfigured IP addresses on: A backup system console if you want to connect it to a dedicated service LAN that already includes a primary system console or other system console Any system console if you want to connect it to a dedicated service LAN that already includes a primary system console The MSP and Ethernet port IP addresses of any NonStop S-series server if you want to connect it to a LAN that already includes another NonStop S-series server Keep track of all the IP addresses in your system so that no IP address is assigned twice. 146 Maintenance and Support Connectivity
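Because every component on the dedicated service LAN must have a unique address, it can help to keep the address plan in one place and check it for duplicates before applying changes. The following Python sketch is illustrative only (not an HP tool); the component names and addresses form a hypothetical example plan, not a factory configuration:

# Simple sketch (not an HP tool): keep track of the IP addresses assigned on the
# dedicated service LAN and flag any address that is assigned more than once.
from collections import Counter

def duplicate_ips(assignments):
    """assignments maps a component name to its IP address; returns any IPs used twice."""
    counts = Counter(assignments.values())
    return {
        ip: [name for name, addr in assignments.items() if addr == ip]
        for ip, count in counts.items() if count > 1
    }

# Hypothetical example plan with a deliberate clash between two components.
lan_plan = {
    "Primary system console": "192.231.36.1",
    "Backup system console": "192.231.36.4",
    "Maintenance switch 1": "192.231.36.21",
    "Maintenance switch 2": "192.231.36.22",
    "P-switch, X fabric": "192.231.36.202",
    "P-switch, Y fabric": "192.231.36.202",
}
print(duplicate_ips(lan_plan))  # reports the clashing address and both components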
Additional Configuration for OSM
If you are using the OSM Notification Director for remote support services, or require a faster connection to the OSM Service Connection, see Configuring Additional TCP/IP Processes for OSM Connectivity in the OSM Migration and Configuration Guide.
System Console
New system consoles are preconfigured with the required HP and third-party software. When upgrading to the latest RVU, you can install software upgrades from the HP NonStop System Console Installer DVD image.
NOTE: The NonStop system console must be configured with certain ports open. For more information, see the NonStop System Console Installer Guide.
Some system console hardware, including the Windows server unit, monitor, and keyboard, can be mounted in the cabinet. Other Windows servers are installed outside the cabinet and require separate provisions or furniture to hold the Windows server hardware.
System consoles communicate with NonStop NS16000 series systems over a dedicated service local area network (LAN) or a secure operations LAN. A dedicated service LAN is required for use of OSM Low-Level Link and Notification Director functionality, which includes configuring primary and backup dial-out points (referred to as the primary and backup system consoles, respectively). HP recommends that you also configure the backup dedicated service LAN with a backup system console.
Your system console configuration can be any of:
- One system console managing one system (not recommended)
- One system console managing multiple systems (not recommended)
- Primary and backup system consoles managing one system
- Primary and backup system consoles managing multiple systems
For information about connecting and configuring system consoles, service providers should refer to the NonStop NS16000 Series Hardware Installation Manual and the NonStop Dedicated Service LAN Installation and Configuration Guide.
To make sure that you have the latest software installed on the system consoles, see the OSM Migration and Configuration Guide.
System Console Configurations
Several system console configurations are possible:
- One System Console Managing One System (Setup Configuration) (page 148)
- One System Console Managing Multiple Systems (page 148)
- Primary and Backup System Consoles Managing One System (page 148)
- Primary and Backup System Consoles Managing Multiple Systems (page 149)
NOTE: The illustrations in this section are examples and are not intended for use in system installation. For connections, ask your HP service provider to refer to the NonStop NS16000 Series Hardware Installation Manual.
Dedicated Service LAN 147
One System Console Managing One System (Setup Configuration)
The one NonStop system console on the LAN must be configured as the primary system console. This configuration can be called the setup configuration and is used during initial setup and installation of the system console and the server. The setup configuration is an example of a secure, stand-alone network as shown in Basic LAN Configuration (page 137). A LAN cable connects the primary system console to the maintenance switch, and additional LAN cables connect the switches and Ethernet interfaces. The maintenance switch or an optional second maintenance switch allows you to later add a backup system console and additional system consoles.
NOTE: Because the system console and maintenance switch are single points of failure that could disrupt access to OSM, this configuration is not recommended for operations that require high availability or fault tolerance.
When you use this configuration, you do not need to change the preconfigured IP addresses.
One System Console Managing Multiple Systems
The one NonStop system console on the LAN must be configured as the primary system console. Because all servers are shipped with the same preconfigured IP addresses for the primary/backup p-switches, $ZTCP0, $ZTCP1, and Eth0 (CLIM), you must change these IP addresses for the second and subsequent servers before you can add them to the LAN.
Primary and Backup System Consoles Managing One System
This configuration is recommended. It is similar to the setup configuration, but for fault-tolerant redundancy, it includes a second maintenance switch, a backup system console, and a second modem (if a modem-based remote solution is used). The maintenance switches provide a dedicated LAN in which all systems use the same subnet. Fault-Tolerant Configuration (page 138) shows a fault-tolerant configuration without modems.
NOTE: A subnet is a network division within the TCP/IP model. Within a given network, each subnet is treated as a separate network. Outside that network, the subnets appear as part of a single network. The terms subnet and subnetwork are used interchangeably.
If a remote maintenance LAN connection is required, use the second network interface card (NIC) in the NonStop system console to connect to the operations LAN, and access the other devices in the maintenance LAN through Remote Desktop via the console.
Because this configuration uses only one subnet, you must:
- Enable Spanning Tree Protocol (STP) in switches or routers that are part of the operations LAN.
148 Maintenance and Support Connectivity
NOTE: Do not perform the next two bulleted items if your backup system console is shipped with a new NonStop NS16000 series system. In this case, HP has already configured these items for you.
- Change the preconfigured DHCP configuration of the backup system console before you add it to the LAN.
- Change the preconfigured IP address of the backup system console before you add it to the LAN.
CAUTION: Networks with more than one path between any two systems can cause loops that result in message duplication and broadcast storms that can bring down the network. If a second connection is used, refer to the documentation for the ProCurve maintenance switch and enable STP in the maintenance switches. STP ensures that only one path is active at any given moment between two systems on the network; in networks with two or more physical paths between two systems, it blocks all redundant paths.
Primary and Backup System Consoles Managing Multiple Systems
If you want to manage more than one system from a console (or from a fault-tolerant pair of consoles), you can daisy-chain the maintenance switches together. This configuration requires an IP address scheme to support it. Contact your HP service provider to design this configuration.
Dedicated Service LAN 149
A Cables
Cable Types, Connectors, Lengths, and Product IDs
TIP: Although a considerable distance can exist between the modular enclosures in the system, HP recommends placing all cabinets adjacent to each other and bolting them together, with the cable lengths between enclosures kept as short as possible.
Available cables and their lengths are listed below (format: Connection From | Cable Type | Connectors | Lengths in meters, with product ID in parentheses):
- NonStop Blade Element to LSU enclosure | MMF | LC-LC | 2 (M8900-02), 3 (M8900-03), 5 (M8900-05), 6 (M8900-06), 10 (M8900-10), 15 (M8900-15), 40 (M8900-40), 80 (M8900-80), 100 (M8900-100)
- NonStop Blade Element to NonStop Blade Element | MMF | MTP-MTP | 1.5 (M8925-01), 5 (M8925-05), 10 (M8925-10), 30 (M8925-30), 50 (M8925-50)
- P-switch to LSU enclosure, p-switch crosslink, networking CLIM, or IOAM enclosure | MMF | LC-LC | 2 (M8900-02), 3 (M8900-03), 5 (M8900-05), 6 (M8900-06), 10 (M8900-10), 15 (M8900-15), 40 (M8900-40), 80 (M8900-80), 100 (M8900-100), 125 (M8900-125)
- FCSA to Fibre Channel disk module, ESS, or FC switch | MMF | LC-LC | 2 (M8900-02), 3 (M8900-03), 5 (M8900-05), 6 (M8900-06), 10 (M8900-10), 15 (M8900-15), 40 (M8900-40), 80 (M8900-80), 100 (M8900-100), 125 (M8900-125), 200 (M8900-200), 250 (M8900-250)
- P-switch to cluster switch 6780 | SMF | LC-LC | 2 (M8921-02), 3 (M8900-03), 5 (M8900-05), 6 (M8900-06), 10 (M8900-10), 15 (M8900-15), 40 (M8900-40), 80 (M8900-80), 100 (M8900-100)
- P-switch to cluster switch 6770 | SMF | LC-SC | 2 (M8922-02), 5 (M8922-05), 10 (M8922-10), 25 (M8922-25), 40 (M8922-40), 80 (M8922-80)
- P-switch to NonStop S-series IOMF 2 | MMF | LC-SC | 10 (M8910-10), 20 (M8910-20), 50 (M8910-50), 100 (M8910-100)
- P-switch to CLIM | MMF | LC-LC | 2 (M8900-02), 3 (M8900-03), 5 (M8900-05), 6 (M8900-06), 10 (M8900-10), 15 (M8900-15), 40 (M8900-40), 80 (M8900-80), 100 (M8900-100), 125 (M8900-125)
- DL385 G5 Storage CLIM SAS HBA port to SAS tape (carrier-grade tape only) | N.A. | SFF-8470 to SFF-8470 | 2 (M8908-2), 4 (M8908-4)
- Internal power-on cable between p-switches and IOMF 2 CRU | Serial | RJ11 to RJ45 | 2 (542380-001), 10 (542381-001)
- P-switches on NonStop NS16000 series system to a ServerNet cluster (zone) with Model 6780 NonStop ServerNet switches | SMF | LC-LC | 2 (M8921-2), 5 (M8921-5), 10 (M8921-10), 25 (M8921-25), 40 (M8921-40), 80 (M8921-80), 100 (M8921-100)
- P-switches on NonStop NS16000 series system to a ServerNet cluster (zone) with Model 6770 NonStop ServerNet switches | SMF | LC-SC | 2 (M8922-2), 5 (M8922-5), 10 (M8922-10), 25 (M8922-25), 40 (M8922-40), 80 (M8922-80)
- Maintenance LAN interconnect | CAT 6 UTP (GESA external ports support CAT 5e or CAT 6 cables that you provide) | RJ-45 to RJ-45 | 0.6 (M8926-02), 1.2 (M8926-04), 1.5 (M8926-05), 1.8 (M8926-06), 2.1 (M8926-07), 2.4 (M8926-08), 2.7 (M8926-09), 3 (M8926-10), 4.6 (M8926-15), 7.6 (M8926-25)
- DL385 G5 Storage CLIM to M8380-25 (MSA70) SAS disk enclosure | Copper | SFF-8470 to SFF-8088 | 2 (M8905-02), 4 (M8905-04), 6 (M8905-06)
- DL380 G6 Storage CLIM to M8381-25 (D2700) SAS disk enclosure | Copper | SFF-8088 to SFF-8088 | 2 (M8906-02), 4 (M8906-04), 6 (M8906-06)
- M8380-25 (MSA70) SAS disk enclosure to M8380-25 (MSA70) SAS disk enclosure (daisy-chain) | Copper | SFF-8088 to SFF-8088 | 2 (M8906-02), 4 (M8906-04), 6 (M8906-06)
- Storage CLIM FC HBA to ESS or FC tape | MMF | LC-LC | 1 (M8900-01), 2 (M8900-02), 3 (M8900-03), 5 (M8900-05), 6 (M8900-06), 10 (M8900-10), 15 (M8900-15), 40 (M8900-40), 80 (M8900-80), 100 (M8900-100), 125 (M8900-125), 200 (M8900-200), 250 (M8900-250)
- FCSA in IOAM enclosure to ESS or FC switch | MMF | LC-LC | 1 (M8900-01), 2 (M8900-02), 3 (M8900-03), 5 (M8900-05), 6 (M8900-06), 10 (M8900-10), 15 (M8900-15), 40 (M8900-40), 80 (M8900-80), 100 (M8900-100), 125 (M8900-125), 200 (M8900-200), 250 (M8900-250)
- DL380 G6 IB CLIM HCA port to customer-supplied IB switch | Copper or fiber | QSFP from IB port on IB CLIM | This is a customer-supplied cable.
- ETH1 port on Storage CLIM with encryption to customer-supplied switch | CAT 6 UTP | RJ-45 to RJ-45 | This is a customer-supplied cable. NOTE: This connection type is only supported on Storage CLIMs with encryption.
- Maintenance LAN interconnect | CAT 5e UTP (GESA external ports support CAT 5e or CAT 6 cables that you provide) | RJ-45 to RJ-45 | 1.5 (M8926-05), 3 (M8926-10), 4.6 (M8926-15), 7.6 (M8926-25)
- Maintenance LAN interconnect | CAT 6 UTP (GESA external ports support CAT 5e or CAT 6 cables that you provide) | RJ-45 to RJ-45 | 0.6 (M8926-02), 1.2 (M8926-04), 1.5 (M8926-05), 1.8 (M8926-06), 2.1 (M8926-07), 2.4 (M8926-08), 2.7 (M8926-09), 3 (M8926-10), 4.6 (M8926-15), 7.6 (M8926-25)
NOTE: ServerNet cluster connections on NonStop NS16000 series systems follow the ServerNet cluster and cable length rules and restrictions. For more information, refer to these manuals:
- ServerNet Cluster Supplement for NS-Series Systems
- For 6770 switches and star topologies: ServerNet Cluster Manual
- For 6780 switches and the layered topology: ServerNet Cluster 6780 Planning and Installation Guide
NOTE: BladeCluster connections on NonStop NS16000 series systems follow the BladeCluster cable length rules and restrictions. For more information, refer to the BladeCluster Solutions Manual.
M8201R to Tape Device Cables
For SCSI cables connecting the M8201R Fibre Channel to SCSI router to tape drives, see the M8201R Fibre Channel to SCSI Router Installation and User's Guide.
Cable Management System
The cable management system (CMS) for Integrity NonStop NS16000 series systems contains cable trays and vertical cable guides and is designed specifically for the Integrity NonStop NS16000 series server components. It manages all fiber-optic cables, CAT 5e or CAT 6 cables, and, for some components, power cords. Using the CMS maintains the minimum 25 mm bend radius for the fiber-optic cables and provides strain relief for the fiber-optic cables and the power cords. For details on using the CMS, have your service provider refer to the NonStop NS16000 Series Hardware Installation Manual.
154 Cables
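As a planning aid, the minimal sketch below (illustrative only, not part of the guide) shows one way to pick the shortest stocked cable that covers a planned run. The length-to-product-ID pairs are a small subset copied from the p-switch-to-CLIM entry in the table earlier in this appendix; extend the mapping for other connection types as needed.

# Length in meters mapped to product ID (subset of the p-switch-to-CLIM entry above).
pswitch_to_clim = {
    2: "M8900-02", 3: "M8900-03", 5: "M8900-05", 6: "M8900-06",
    10: "M8900-10", 15: "M8900-15", 40: "M8900-40", 80: "M8900-80",
}

def pick_cable(run_m, catalog):
    # Choose the shortest stocked length that still covers the planned run.
    candidates = sorted(length for length in catalog if length >= run_m)
    if not candidates:
        raise ValueError(f"no stocked length covers a {run_m} m run")
    return candidates[0], catalog[candidates[0]]

print(pick_cable(4.2, pswitch_to_clim))   # prints (5, 'M8900-05')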
B Operations and Management Using OSM Applications
OSM server-based components are incorporated in a single OSM server-based SPR, T0682 (OSM Service Connection Suite), that is installed on Integrity NonStop NS16000 series servers running the HP NonStop operating system. For information on how to install, configure, and start OSM server-based processes and components, see the OSM Migration and Configuration Guide.
OSM client-based components are installed on new system console shipments, and upgrades can be installed through the NonStop System Management Tools installer on the HP NonStop System Console Installer DVD. This DVD also delivers all other client software required for managing and servicing NonStop NS16000 series servers. For installation instructions, see the NonStop System Console Installer Guide.
These are the OSM client components (Product ID | Component | Task Performed):
- T0632 | OSM Notification Director | No longer needed when using Insight Remote Support Advanced for remote support services, including dial-out (see the note below this list)
- T0633 | OSM Low-Level Link | Provides down-system support; provides support to configure IP CLIMs before they are operational in a system; provides CLIM software updates
- T0634 | OSM Console Tools | Includes Start menu shortcuts for easy access to the OSM Service Connection and OSM Event Viewer (browser-based OSM applications, which are not installed on the system console) and the following client-based OSM tools: OSM System Inventory Tool, Terminal Emulator File Converter, OSM Certificate Tool, and CLIM Boot Service Configuration Wizard
- T0634 | OSM System Inventory Tool | Retrieves hardware inventory from multiple systems
- T0634 | Terminal Emulator File Converter | Converts existing OSM Service Connection-related OutsideView (.cps) session files to MR-WIN6530 (.653) session files
NOTE: HP Insight Remote Support Advanced is the go-forward remote support solution for NonStop servers. It works in conjunction with OSM server software but replaces the OSM Notification Director in both modem-based and HP Instant Support Enterprise Edition (ISEE) remote support solutions; the OSM Notification Director is not required if Insight Remote Support Advanced is used. For more information, refer to Insight Remote Support Advanced for NonStop, located in the Service Information collection of NTL.
155
Using OSM for Down-System Support
In Integrity NonStop NS16000 series systems, the maintenance entity (ME) in the p-switches provides dedicated service LAN services via the OSM Low-Level Link for OS coldload, system management, and hardware configuration when the hardware is powered up but the OS is not running. LAN connections are direct from the maintenance PIC in slot 1 of the p-switch to the maintenance switch that is installed in the modular cabinet. This illustration shows the LAN connections for the OSM Low-Level Link:
AC Power Monitoring
For Integrity NonStop NS16000 series servers, you can use one of the following to provide continued system operation through a power failure:
- Optional HP R5000 UPS (with one or two ERMs for additional battery power)
- User-supplied UPS installed in each modular cabinet
- User-supplied site UPS
However, to take advantage of OSM power fail support, you must use the HP R5000 UPS. When used, it is connected to the system's dedicated service LAN via the maintenance switch, through which OSM monitors the power state.
If a user-supplied rack-mounted UPS or a site UPS is used rather than the HP-supported UPS models mentioned above, the system is not notified of the power outage. Operators are responsible for detecting power transients and outages and developing the appropriate actions, which might include a ride-through time based on the capacity of the site UPS and the power demands made on that UPS.
The R5000 UPS and ERMs installed in modular cabinets do not support any devices that are external to the cabinets. External devices can include tape drives, external disk drives, LAN routers, and SWAN concentrators. Any external peripheral devices that do not have UPS support will fail immediately at the onset of a power failure. Plan for UPS support of any external peripheral devices that must remain operational as system resources. This support can come from a site UPS or from individual UPS units as necessary.
OSM Power Fail Support
When properly configured, OSM provides important power fail support for Integrity NonStop NS16000 series servers. OSM detects power outages and helps you take appropriate actions if the outage lasts longer than the estimated capabilities of your UPS.
When OSM detects that one power rail is running on UPS and the other power rail has lost power, it logs an event and begins counting down the ride-through time configured for the system. The ride-through time, designed to avoid disruption if the power outage is short enough for a UPS to provide the needed power, is specified through the POWERFAIL_DELAY_TIME attribute of the SCF command ALTER SUBSYS (for more information, see Considerations for Ride-Through Time Configuration (page 157)).
156 Operations and Management Using OSM Applications
The actions that OSM takes next are directly tied to that specified ride-through time:
- If AC power is restored before the ride-through period ends, the ride-through countdown terminates and OSM does not initiate a controlled shutdown of I/O operations and processors.
- If AC power is not restored before the ride-through period ends, OSM initiates a controlled shutdown of I/O operations and processors by broadcasting a PFAIL_SHOUT message to all processors (the processor running OSM being the last one in the queue), shutting down the system's ServerNet routers and processors in a fashion designed to allow disk writes that are in transit through controllers and disks to complete.
CAUTION: Do not turn off the UPS as soon as the NonStop OS is down; the UPS continues to provide power until its supply is exhausted, and that time may be needed for disk controllers and disks to complete disk writes.
Unlike NonStop S-series systems, this system requires a system load after a power failure that results in system shutdown. For more information on recovering from power failures, see the NonStop Operations Guide for H-Series and J-Series RVUs.
Because the shutdown described above does not include powering off the system or stopping TMF or other applications, you are encouraged to create scripts to shut down database activity before the processors are shut down. With T0682 H02 ACC and later, OSM provides support for automatic execution of those scripts through two OSMCONF settings: SHUTDOWN_SCRIPT_NAME and SHUTDOWN_SCRIPT_TIME. Through these settings, you tell OSM to execute a shutdown script a specified number of seconds before the ride-through time is reached; OSM executes a shutdown script only if these settings are configured. For more information on implementing these OSMCONF settings, see the OSM Migration and Configuration Guide.
To configure OSM power fail support, you must perform these actions, located under the Power Supply units in either P-switches or IOAM modules in the OSM Service Connection:
- Perform the Configure a Power Source as UPS action to configure the power rail (either A or B) connected to the UPS.
- Perform the Configure a Power Source as AC action to configure the power rail (either B or A) connected to AC power.
You can then perform the Verify Power Fail Configuration action, located under the System object, to verify that power failure support has been properly configured and is in place for the system. For more information on locating or performing these actions, see the OSM Service Connection online help.
Considerations for Ride-Through Time Configuration
Ride-through time is specified through the POWERFAIL_DELAY_TIME attribute of the SCF command ALTER SUBSYS. For command syntax, see the SCF Reference Manual for the Kernel Subsystem.
The goal in configuring the ride-through time is to allow the maximum time for power to be restored while still allowing sufficient time for completion of disk writes for I/Os that passed to the disk controllers before the ServerNet was shut down. Allowing enough time for these tasks permits a relatively clean shutdown from which TMF recovery is less time-consuming and difficult than if all power failed and disk writes did not complete.
The maximum ride-through time for each system will vary, depending on system load, configuration, and the UPS capability. A rack-mounted UPS can often supply five minutes of power with some extra capacity for contingencies, provided the batteries are new and fully charged.
This five-minute figure is an estimate used for illustration in this discussion, not a guarantee for any specific configuration. You must ensure that the battery capacity for a fully powered system allows for at least two minutes after OSM initiates the orderly shutdown so that the disk cache can be flushed to nonvolatile media. Assuming the UPS has five minutes of power capacity, you would set the ride-through time to three minutes (UPS capacity of five minutes minus two minutes for OSM).
AC Power Monitoring 157
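The arithmetic above can be written out as a small sketch. The five-minute UPS runtime and two-minute shutdown reserve are the example figures from this discussion, not guarantees for any configuration; the result is the value you would supply through the POWERFAIL_DELAY_TIME attribute described earlier, and the 30-second script lead time is a hypothetical OSMCONF SHUTDOWN_SCRIPT_TIME value:

# Example figures from the discussion above; substitute measured values for your site.
ups_runtime_s = 5 * 60        # estimated UPS battery runtime for the loaded system
shutdown_reserve_s = 2 * 60   # minimum reserve OSM needs for shutdown and cache flush

ride_through_s = ups_runtime_s - shutdown_reserve_s
assert ride_through_s > 0, "UPS runtime is too small for a safe ride-through time"
print(f"ride-through time to configure: {ride_through_s} s ({ride_through_s // 60} minutes)")

# If an OSMCONF shutdown script is configured, OSM starts it the specified number
# of seconds (SHUTDOWN_SCRIPT_TIME) before the ride-through time is reached.
shutdown_script_time_s = 30   # hypothetical lead time
print(f"shutdown script would start {ride_through_s - shutdown_script_time_s} s "
      f"after the outage is detected")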
NOTE: OSM does not make dynamic computations based on the remaining capacity of the rack-mounted UPS. The ride-through time is statically configured in SCF for OSM use. For example, when power comes back before the initiated shutdown but then fails again shortly afterward, the UPS has been depleted by some amount and does not last for the full ride-through time until it is fully recharged. OSM does not account for multiple power failures that occur within the recharge time of the rack-mounted UPS.
Power can be extended by adding ERMs to the configuration, and the power available from the UPS alone can extend beyond five minutes, depending on power consumption. To extend the ride-through time beyond three minutes, use this manual and your UPS documentation to calculate the expected power consumption, measure the site power consumption, factor in the ERM (if present), and make adjustments. Also consider air conditioning failures during a real power failure, because increased ambient temperature typically causes the fans to run faster, which causes the system to draw more power. By allowing for the maximum power consumption and applying those figures to the UPS calculations provided in the UPS manuals, you can increase the ride-through time beyond three minutes.
Considerations for Site UPS Configurations
OSM cannot monitor a site UPS. The SCF-configured ride-through time on a NonStop NS16000 series system has no effect if only a site UPS is used. With a site UPS instead of a rack-mounted UPS, the customer must perform a manual system shutdown if the backup generators cannot be started.
It is also possible to have a rack-mounted UPS in addition to a site UPS. Because the site UPS can supply a whole computer room, or part of that room including required cooling, the site UPS can, from the perspective of OSM, supply the group 100 AC power. The group 100 UPS power configured in OSM, in this case, would still come from a rack-mounted UPS (one of the supported models).
AC Power-Fail States
These states occur when a power failure occurs and an optional HP R5000 UPS is installed in each cabinet within the system (System State | Description):
- NSK_RUNNING | The NonStop operating system is running normally.
- RIDE_THRU | OSM has detected a power failure and begins timing the outage. AC power returning terminates RIDE_THRU and puts the operating system back into the NSK_RUNNING state. At the end of the predetermined RIDE_THRU time, if AC has not returned, OSM executes a PFAIL_SHOUT that results in the system going to LOW_POWER.
- HALTED | Normal halt condition. Halted processors do not participate in power-fail handling. A normal power-on also puts the processors into the HALTED state.
- LOW_POWER | Halted-state services (HSS) informs the p-switch that it is in the LOW_POWER state and then waits until the p-switch removes optic power.
- POWER_OFF | Loss of optic power from the p-switches occurs, or the UPS batteries supplying the NonStop Blade Element modules are completely depleted. When power returns, the system is essentially in a cold-boot condition.
158 Operations and Management Using OSM Applications
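The transitions in the preceding table can be summarized as a small state-machine sketch. This is an illustration for planning and runbook discussions, not OSM code; the state names come from the table, while the event names are descriptive labels chosen here:

# Power-fail state transitions summarized from the table above (illustrative only).
transitions = {
    ("NSK_RUNNING", "power failure detected"): "RIDE_THRU",
    ("RIDE_THRU", "AC power restored"): "NSK_RUNNING",
    ("RIDE_THRU", "ride-through time expired"): "LOW_POWER",    # OSM issues PFAIL_SHOUT
    ("LOW_POWER", "optic power removed by p-switch"): "POWER_OFF",
    ("POWER_OFF", "power returns"): "HALTED",                   # cold-boot condition; system load required
}

def next_state(state, event):
    # Unlisted (state, event) pairs leave the state unchanged.
    return transitions.get((state, event), state)

state = "NSK_RUNNING"
for event in ("power failure detected", "ride-through time expired",
              "optic power removed by p-switch"):
    state = next_state(state, event)
    print(f"{event} -> {state}")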
C Default Startup Characteristics
NOTE: The configurations documented here are typical for most sites. Your system load paths might be different, depending upon how your system is configured. To determine the configuration of your system, refer to the system attributes in the OSM Service Connection. You can select this from within the System Load dialog box in the OSM Low-Level Link.
Each system ships with these default startup characteristics:
- $SYSTEM disks residing in one of these two locations:
  - In a Fibre Channel disk module connected to IOAM enclosure group 110, with the disks in these locations (Path | IOAM Group | IOAM Module | FCSA Slot | FCSA SAC | FCDM Shelf | FCDM Bay):
    Primary | 110 | 2 | 1 | 1 | 1 | 1
    Backup | 110 | 3 | 1 | 1 | 1 | 1
    Mirror | 110 | 3 | 1 | 2 | 1 | 1
    Mirror backup | 110 | 2 | 1 | 2 | 1 | 1
  - In a NonStop S-series I/O enclosure in group, module, slot 11.1.11
- Configured system load paths
- Enabled command interpreter input (CIIN) function
If the automatic system load is not successful, additional paths for loading are available to the boot task. If a load attempt using one load path fails, the system load task attempts another path and keeps trying until all possible paths have been used or the system load is successful. These 16 paths are available for loading and are listed in the order of their use by the system load task (Load Path | Description | Source Disk | Destination Processor | ServerNet Fabric):
1 | Primary | $SYSTEM-P | 0 | X
2 | Primary | $SYSTEM-P | 0 | Y
3 | Backup | $SYSTEM-P | 0 | X
4 | Backup | $SYSTEM-P | 0 | Y
5 | Mirror | $SYSTEM-M | 0 | X
6 | Mirror | $SYSTEM-M | 0 | Y
7 | Mirror backup | $SYSTEM-M | 0 | X
8 | Mirror backup | $SYSTEM-M | 0 | Y
9 | Primary | $SYSTEM-P | 1 | X
10 | Primary | $SYSTEM-P | 1 | Y
11 | Backup | $SYSTEM-P | 1 | X
12 | Backup | $SYSTEM-P | 1 | Y
13 | Mirror | $SYSTEM-M | 1 | X
14 | Mirror | $SYSTEM-M | 1 | Y
159
15 | Mirror backup | $SYSTEM-M | 1 | X
16 | Mirror backup | $SYSTEM-M | 1 | Y
This illustration shows the system load paths:
The command interpreter input file (CIIN) is automatically invoked after the first processor is loaded. The CIIN file shipped with new systems contains the TACL RELOAD * command, which loads the remaining processors.
For default configurations of the FCSAs, Fibre Channel disk modules, and load disks, see Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module (page 96). For system load procedures, see the NonStop NS16000 Series Hardware Installation Manual.
160 Default Startup Characteristics
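The ordering of the 16 load paths listed above follows a simple pattern: the ServerNet fabric alternates fastest (X, then Y), the source disk changes next, and the destination processor changes last. The short sketch below (illustrative only) regenerates the list in that order:

# Regenerate the 16 default load paths in the order shown in the table above.
sources = [("Primary", "$SYSTEM-P"), ("Backup", "$SYSTEM-P"),
           ("Mirror", "$SYSTEM-M"), ("Mirror backup", "$SYSTEM-M")]

load_paths = [(description, disk, processor, fabric)
              for processor in (0, 1)
              for description, disk in sources
              for fabric in ("X", "Y")]

for number, (description, disk, processor, fabric) in enumerate(load_paths, start=1):
    print(f"{number:2d}  {description:<13}  {disk}  processor {processor}  fabric {fabric}")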
D NonStop S-Series Systems: Connecting to or Migrating From
Topics described in this appendix are:
- Connecting to NonStop S-Series I/O Enclosures (page 161)
- Migrating From a NonStop S-Series System to a NonStop NS16000 Series System (page 162)
Connecting to NonStop S-Series I/O Enclosures
NOTE: For NonStop S-series I/O enclosure group numbers, refer to NonStop S-Series I/O Enclosure Group Numbers (page 29).
NonStop S-series I/O enclosures can be connected to Integrity NonStop NS16000 series systems to retain not only previously installed hardware but also data stored on disks mounted in the NonStop S-series I/O enclosures. This illustration shows connection of a NonStop S-series I/O enclosure to an Integrity NonStop NS16000 series system:
Connecting to NonStop S-Series I/O Enclosures 161
IOMF 2 CRU
Each p-switch (for the X or Y fabric) has up to six I/O PICs, with one I/O PIC required for each IOAM enclosure in the system. Each NonStop S-series I/O enclosure uses one port on an I/O PIC, so a maximum of 24 NonStop S-series I/O enclosures can be connected to an Integrity NonStop NS16000 series system if no IOAM enclosure is installed. The Integrity NonStop NS16000 series system is compatible with most disks and ServerNet adapters contained in currently installed NonStop S-series I/O enclosures equipped with IOMF 2 CRUs.
I/O multifunction 2 (IOMF 2) CRUs, each equipped with an MMF PIC, are required for connecting NonStop S-series I/O enclosures to Integrity NonStop NS16000 series systems. To retain disk drives and adapters currently installed in NonStop S-series system enclosures, you can convert these enclosures to NonStop S-series I/O enclosures by replacing the two PMF CRUs in each enclosure with IOMF 2 CRUs. See the conversion instructions in the Authorized Service Provider Hardware category of the Support and Service Library of NTL. For information on the IOMF 2 CRUs, see the NonStop S-Series Planning and Configuration Guide.
NonStop S-Series Disk Drives and ServerNet Adapters
Disk drives and ServerNet adapters (except SEB and MSEB CRUs and ServerNet/FX and ServerNet/FX 2 adapters) installed in NonStop S-series I/O enclosures equipped with IOMF 2 CRUs, as well as devices that are downstream of the enclosures, are compatible with the Integrity NonStop NS16000 series hardware, as long as the NonStop S-series I/O enclosure is properly connected to the p-switches. For information on the NonStop S-series disk drives and ServerNet adapters, see the NonStop S-Series Planning and Configuration Guide.
CAUTION: Do not attempt to use NonStop S-series SEB or MSEB CRUs in NonStop S-series I/O enclosures connected to Integrity NonStop NS16000 series systems. System failure will result.
Migrating From a NonStop S-Series System to a NonStop NS16000 Series System
Topics described in this section include:
- Migrating Applications (page 162)
- Migration Considerations (page 163)
- Migrating Hardware Products to Integrity NonStop NS16000 Series Servers (page 163)
- Other Manuals Containing Software Migration Information (page 163)
Migrating Applications
The H-Series Application Migration Guide is the primary source of information for migrating applications. It provides summaries, overviews, references to product-specific documentation where appropriate, and information about:
- Source code changes required for C/C++, COBOL, and pTAL programs
- User library changes
- Changes in the application development environment, including compilers, linkers, and debuggers
- Changes in native process architecture and process environment
- Changes required for independent products: HP NonStop Server for Java, HP NonStop CORBA, and HP NonStop Tuxedo
- How to get help with migration tasks, including assessment services and pilot (proof of concept) services offered by HP
162 NonStop S-Series Systems: Connecting to or Migrating From
Migration Considerations
SQL/MP objects that are present on disks installed in a NonStop S-series I/O enclosure are not immediately usable after the enclosure is connected to an Integrity NonStop NS16000 series system. The file labels and catalogs must be updated to reflect the new system name and number. You can use the SQLCI MODIFY command to update SQL/MP objects. Refer to the migration information contained in the SQL Supplement for H-Series RVUs.
G-series application programs that reside on a NonStop S-series I/O enclosure might require migration changes to run on an Integrity NonStop NS16000 series system with an H-series RVU. For more information, refer to the H-Series Application Migration Guide.
Migrating Hardware Products to Integrity NonStop NS16000 Series Servers
Connecting NonStop S-series hardware to an Integrity NonStop NS16000 series server is only one step in the overall migration. Software and application changes might be required to complete the migration. Any hardware migration should be planned as part of the overall application and software migration tasks. The next subsections refer you to the documentation for tasks involved in physically moving IOAM enclosures and NonStop S-series I/O enclosures to an Integrity NonStop NS16000 series server.
Moving IOAM Enclosures to NonStop NS16000 Series Servers
Moving an IOAM enclosure from a NonStop S-series server to an Integrity NonStop NS16000 series server is described in the NonStop NS16000 Series Hardware Installation Manual.
Reusing NonStop S-Series I/O Enclosures and Processor Enclosures
You can reuse NonStop S-series enclosures and most of the ServerNet adapters and disk drives they contain in your Integrity NonStop NS16000 series server. Moving an IOAM enclosure from a NonStop S-series server to an Integrity NonStop NS16000 series server is described in the NonStop NS16000 Series Hardware Installation Manual.
Other Manuals Containing Software Migration Information
You can also find information about software migration in these manuals:
- C/C++ Programmer's Guide
- COBOL Manual for TNS/E Programs
- pTAL Reference Manual
- SQL Supplement for H-Series RVUs
- NonStop Server for Java 4 Programmer's Reference
- Tuxedo 8.0 Supplement for H-series RVUs
- Migrating CORBA Applications to H-Series RVUs
- H06.xx Software Installation and Upgrade Guide
- H06.xx Release Version Update Compendium
- NonStop System Console Installer Guide
- H06.xx README
Migration Considerations 163
- Interactive Upgrade Guide 2
If you are moving a NonStop S-series I/O enclosure from a NonStop S-series system to an Integrity NonStop NS16000 series system and want to migrate the data online, you can perform a migratory revive if:
- Your data is mirrored.
- You have another NonStop S-series system or NonStop S-series I/O enclosure connected to the NonStop S-series system.
As an overview, the necessary steps are:
- Create a mirror of the data on the NonStop S-series I/O enclosure that is moving to the Integrity NonStop NS16000 series system.
- Disconnect the NonStop S-series I/O enclosure from the NonStop S-series system and connect it to the Integrity NonStop NS16000 series system.
- Change the mirror on the Integrity NonStop NS16000 series system to be the primary disk.
- Revive that primary disk with a mirror so that the Integrity NonStop NS16000 series system contains both the primary and the mirror disks.
164 NonStop S-Series Systems: Connecting to or Migrating From
Index Symbols $SYSTEM disk locations, 159 A AC current calculations, 62 AC power enclosure input specifications, 54 feed, top or bottom, 21 input, 38, 43, 48 power-fail monitoring, 156 power-fail states, 158 unstrapped PDU, 54 AC power feed bottom of cabinet, 39, 44, 49 modular three-phase, 48 monitored single-phase, 38 monitored three-phase, 43 top of cabinet, 38, 43, 49 with cabinet UPS, 39, 40, 44, 45, 50, 51 air conditioning, 34 air filters, 35 B branch circuit, 42, 47, 53 C cabinet, 23 cabinet dimensions, 57 cable connections LSUs to p-switches, 70 NonStop Blade Element to LSUs, 70 cable labeling, 66 cable length restrictions, 69 cable management system (CMS), 154 cable specifications, 68 cabling FCSA to FCDM, 96 FCSA to tape devices, 79 p-switch to NonStop S-series I/O enclosure, 90 p-switches to IOAM enclosure, 78 p-switches to networking CLIM, 76 p-switches to Storage CLIM, 77 processors 0 to 3, 71 processors 12 to 15, 74 processors 4 to 7, 72 processors 8 to 11, 73 restrictions, 77, 78 calculation, heat, 35, 60 calculation, weight, 35, 59 clearances, service, 57 CLIM IB CLIM ports, 121 IP CLIM Ethernet interfaces, 103 IP CLIM ports, 117 IP or Telco CLIM Ethernet connections, 103 CLIMs IB CLIM Ethernet or InfiniBand connections, 121 IB CLIM overview, 121 IP CLIM overview, 116 restrictions (Storage CLIM), 82 storage configurations, 83 Storage overview, 123 Telco CLIM overview, 119 Telco CLIM, Ethernet interfaces, 104 cluster cable product IDs, 150 complex processor ID, 108, 114 configuration considerations Fibre Channel devices, 94 IOAM enclosures, 92 minimum, typical, and maximum number of enclosures, 135 Configuration restrictions, Storage CLIM, 82 configuration, factory-installed hardware documentation, 31 Connections fault-tolerant LAN configuration, 138 IP or Telco CLIM, Ethernet, 103 LAN using G4SAs, 142 LAN using IP CLIMs, 142 SAS disk enclosure, 82 tape (SAS), 82 Console configuration, 147 controls, NonStop Blade Element front panel, 109 cooling assessment, 34 D daisy-chain disk configuration recommendations, 96 Default disk drive locations SAS disk enclosures, 82 default disk drive locations, 94 default startup characteristics, 159 dimensions enclosures, 58 modular cabinet, 57 service clearances, 57 disk drive configuration recommendations, 95 Disk drives default disk drive locations, SAS disk enclosures, 82 SAS disk enclosure, bay locations, 81 SAS disk enclosure, IO modules, 81 display IOAM switch boards, 126 p-switch, 113 documentation CLIM, 31 factory-installed hardware, 31 packet, 30 ServerNet adapter configuration, 31 software migration, 163 dust and microscopic particles, 35 165
E electrical disturbances, 33 electrical power loading, 55 emergency power off (EPO) switches HP 5000 UPS, 32 HP 5500 XR UPS, 32 Integrity NonStop NS16000 series servers, 32 NonStop S-series I/O enclosure, 32 enclosure combinations, 22 dimensions, 58 height in U, 57 minimum, typical, maximum number, 135 power loading, 55 types, 20 weight, 59 enclosure height in U, 57 enclosure location, 135 Enterprise Storage System (ESS), 133 environmental monitoring unit, 130 example system configurations, 62 duplex, 21, 62 example, IOAM and disk drive enclosure, 96 extended runtime module (ERM), 34, 131 F factory-installed hardware, documentation, 31 fan NonStop Blade Element, 107 OAM enclosure, 126 p-switch, 112 FC-AL configuration recommendations, 95 FCSA to FCDM cabling, 96 FCSA to tape cabling, 79 FCSA, configuration recommendations, 95 fiber-optic cable specifications, 68 fiber-optic cables, 66 Fibre Channel arbitrated loop (FC-AL), 94, 130 Fibre Channel device considerations, 94 Fibre Channel devices, 92 Fibre Channel disk module, 92, 130 Fibre Channel ServerNet adapter (FCSA), 128 flooring, 35 Forms CLIM, 31 forms ServerNet adapter configuration, 31 front panel, NonStop Blade Element buttons, 109 indicator LEDs, 109 FRU AC power assembly, LSU, 110 Fibre Channel disk module, 130 I/O interface board, 107 IOAM enclosure fan, 126 logic board, LSU, 110 LSU optic adapter, 110 memory board, 107 NonStop Blade Element fan, 107 NonStop Blade Element optic adapter, 107 p-switch fan, 112 power supply, 107, 112 processor board, 107 reintegration board, 107 G G4SA, 129 network connections, 103, 104 service LAN PIF, 143 Gigabit Ethernet 4-port ServerNet adapter see G4SA GMS Fibre Channel disk module, 28 IOAM enclosure, 27 LSU, 25 p-switch, 26 processor, 24 grounding, 33, 54 group, 23 group, NonStop S-series I/O enclosure, 29 group-module-slot (GMS), 23 H hardware configurations examples, 62 heat calculation, 35, 60 height in U, enclosures, 57 hot spots, 34 I I/O interface board, NonStop Blade Element, 107 IB CLIM, 121 see also CLIMs IB switch, 121 indicator LEDs LSU front panel, 111 NonStop Blade Element front panel, 109 p-switch, 114 Informatica, 121 input power, 38, 43, 48 inrush current, 33 Integrity NonStop NS16000 series characteristics, 19 internal cable product IDs, 150 internal interconnect cabling, 67 IOAM configuration considerations, 92 enclosure, 126 FCSA, 128 FRUs, 126 G4SA, 129 IOMF 2 CRU, 162 IP addresses components connected to LAN, 140 IP CLIM, 116 see also CLIMs service LAN, 142 166 Index
L labeling, optic cables, 66 LAN fault-tolerant maintenance, 137 non-fault-tolerant maintenance, 137 service, G4SA PIF, 143 service, IP CLIM, 142 LCD IOAM switch boards, 126 p-switch, 113 load operating system paths, 159 Low Latency Solution, 121 LSU description, 23, 110 FRUs, 110 function and features, 110 indicator LEDs, 111 logic boards, 111 M M8201R Fibre Channel to SCSI router, 80 maintenance PIC, 112 maintenance switch, 130 manuals software migration, 163 memory board, FRU, 107 metallic particulate contamination, 35 migrating applications, 162 IOAM enclosures from NonStop S-series systems, 163 NonStop S-series hardware, 163 migration considerations and process overview, 163 mirror and primary disk drive location recommendations, 95 Models, system, 19 modular cabinet physical specifications, 58 weight, 59 N naming conventions, 105 NonStop Blade Element, 23, 107 NonStop NS16000 Series characteristics, 19 NonStop NS16000 series characteristics, 19 NonStop S-series I/O enclosure, 134 NonStop S-series I/O enclosures, 161 NS16000 series characteristics, 19 NS16000 Series, characteristics, 19 O operating system load paths, 159 operational space, 36 optic adapter LSU, 110 NonStop Blade Element, 107 NonStop Blade Element J connectors, 108 OSM, 147, 155, 156 OSM Console Tools, 155 OSM Low-Level Link, 155 OSM Notification Director, 155 OSM System Inventory Tool, 155 OutsideView, converting files, 155 P p-switch cabling to IOMA enclosure, 78 cabling to networking CLIM, 76 cabling to NonStop S-series I/O enclosure, 90 cabling to Storage CLIM, 77 description, 112 display, 113 FRUs, 112 functions, 113 particulates, metallic, 35 paths, operating system load, 159 PDU AC power feed, 38, 43, 48 receptacles, 37 single-phase modular (non-monitored), 47 single-phase monitored, 37 strapping configurations, 54 three-phase modular, 48 three-phase monitored, 42, 43 port, 23 Ports, opening on console, 147 power and thermal calculations, 62 power consumption, 33 power distribution units (PDUs), 21, 38, 43, 48 power feed, top or bottom, 32, 38, 43, 48 power input, 38, 43, 48 power quality, 33 power receptacles, PDU, 37 power supply IOAM enclosure, 126 NonStop Blade Element, 107 p-switch, 112 power-fail monitoring, 156 states, 158 primary and mirror disk drive location recommendations, 95 processor board, FRU, 107 complex, 23 default numbering, 25 interconnect cabling, 70 processor ID, 108, 114 ProCurve maintenance switch, 130 Q quad MMF PIC, 112 167
R R5000 UPS, 131 rack, 23 rack offset, 23, 24 raised flooring, 35 receive and unpack, 36 receptacles, PDU, 37 reintegration board, 107 related manuals software migration, 163 restrictions cable length, 69 cabling NonStop S-series I/O enclosure, 91 Fibre Channel device configuration, 95 p-switch cabling, 77, 78 S safety ground/protective earth, 33, 54 SAS disk enclosure bay locations, 81 connecting, 82 front and back view, 81 SAS Tape connecting, 82 ServerNet interconnection cabling, 67 optic cabling, 66 processor connections, 114 switch board, IOAM enclosure, 126 switch board, p-switch, 112 service clearances, 57 service LAN, 136 Site power using site UPS for, 158 Site preparation, guidelines, 32 slot, position, 23 SMF PIC, 112 specification branch circuits, 42, 47, 53 cabinet physical, 58 cable, 68 enclosure dimensions, 58 heat, 60 nonoperating temperature, humidity, altitude, 62 operating temperature, humidity, altitude, 61 weight, 59 startup characteristics, default, 159 Storage CLIM, 123 see also CLIMs HBA slots, 124 overview, 123 Storage CLIM, illustration of ports and HBAs, 80 Subnet Manager software, 121 SWAN concentrator restriction, 142 system configurations, examples, 62 System console, 132 description, 136 system console configurations, 147 system disk location, 159 T tech doc, factory installed hardware, 31 Telco CLIM, 119 see also CLIMs Terminal Emulator File Converter, 155 terminology, 23 Three-Phase Modular INTL PDU input characteristics, 53 output characteristics, 53 Three-Phase Modular NA/JPN PDU input characteristics, 52 output characteristics, 52 Three-Phase Modular NA/JPN PDU extension bar, 53 U U height, enclosures, 57 uninterruptible power supply (UPS), 33, 131 UPS considerations, 158 HP R5000, 33, 54 HP R5500 XR, 33, 54, 131 input rating, 54 user-supplied rackmounted, 34 user-supplied site, 34 W weight calculation, 35, 59 worksheet heat calculation, 60 weight calculation, 59 Z zinc, cadmium, or tin particulates, 35 168 Index