Sun Blade(TM) 6000 Modular System Technical Training
Jacques Bessoudo, Technical Marketing, Systems Group
Sun Microsystems Inc.
Sun Blade(TM) Modular Systems
Extending the Systems Portfolio
Sun Blade Modular System
(Charts: More Memory - DIMMs per CPU; More I/O - Gbps per blade; More Threads - cores per CPU; comparing Sun against HP, IBM and Dell)
More Open, More Efficiency, More Applications...
Industry's Most Open Blade Platform
> Easy integration into existing data centers; avoid vendor lock-in
> Leverage existing open-standard technology and ecosystems
> No proprietary I/O device or management tool required
> Simplify your business: use N1SM or your existing 3rd-party tools
> OpenSolaris(TM) and OpenSPARC(TM)
> Transparent management; independent, industry-standard I/O
Why Sun Blades?
> CPU architecture choice
> Future-proof modular architecture
> Transparent management
> Independent I/O
> High-efficiency cooling
(Diagram: modular computing - Compute, I/O, Storage, Cooling, Power and Management)
Continuing To Push The Limit - Game-Changing Portfolio
> Sun Constellation System: ultra-dense blade platform, ultra-dense switching solution, ultra-dense storage solution, software
> Eco-efficient building blocks
> Developer environment and management tools
Sun SM Customer Ready Architected Systems
> HPC Cluster, including Sun Blade Scalable Unit
> Visualization System
> Scalable Storage Cluster
Very large deployments
> Tokyo Tech
> TACC (under construction)
> KISTI (under construction)
Delivering seamless scalability from 1 TF to 1.7 PF: most open, highest performing, most serviceable, fastest deployment, most robust, most reliable, most available
The New Era of HPC - Innovations in HPC Meet the Commercial World
Today HPC customers want more:
> Radical Efficiency: performance, power, cooling, space, cost
> Super Scalability: paving the way to petaflops
> Open Systems: open interfaces and industry-standard components
> Industrial Robustness: high availability and reliability
> Production Ready: time to results or production
Opportunity
Massive-scale high-performance computing that requires extremely low latency
Introducing The Sun Constellation System
An open-systems supercomputer designed for petascale, packaged as an integrated product
> Massive Scalability: optimized compute, storage, networking and software technologies and services
> Dramatically Reduced Complexity: integrated connectivity and management to reduce start-up, development and operational complexity
> Breakthrough Economics: technical innovation resulting in fewer components and high-efficiency systems in a tightly integrated solution
Sun Constellation System - Open Petascale Architecture
Delivering the most scalable supercomputer
The most scalable computing cluster
> 700 ns latency (DDR); up to 1.7 PetaFLOPS; up to 10 PB
> 20% smaller footprint than the competition
Open industry standards
> Solaris, Linux, OpenMPI, open InfiniBand interfaces and management
> x64 computing architecture
> InfiniBand DDR interconnect
The highest-density compute cluster
> Core switch supports 3,456 nodes
> Custom rack supports 48 server modules
> Sun Fire X4500 storage cluster with 480 TB per rack
The easiest to deploy and manage
> Provides a 6:1 reduction in physical ports and cables
> Eliminates 100s of discrete switching elements
Flexible Configurations - Scale to Meet Computing Needs
> 1 core switch: 3,456 servers, 0.4 PFLOPS
> 2 core switches: 6,912 servers, 0.9 PFLOPS
> 3 core switches: 10,368 servers, 1.3 PFLOPS
> 4 core switches: 13,824 servers, 1.7 PFLOPS
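As a quick sanity check of the figures above, the configurations scale linearly: each core switch adds 3,456 servers, and peak PFLOPS grows proportionally toward the quoted 1.7 PFLOPS at four switches (the slide's per-configuration values are rounded). A minimal sketch, under that linear-scaling assumption:

```python
# Sanity check of the quoted scaling, assuming servers scale linearly at
# 3,456 per core switch and peak PFLOPS scales linearly to the quoted
# 1.7 PFLOPS at 4 switches (the quoted per-step values are rounded).
SERVERS_PER_SWITCH = 3456
PEAK_PFLOPS_AT_4 = 1.7

def config(switches):
    servers = switches * SERVERS_PER_SWITCH
    pflops = PEAK_PFLOPS_AT_4 * switches / 4
    return servers, pflops

for n in range(1, 5):
    servers, pflops = config(n)
    print(f"{n} core switch(es): {servers} servers, ~{pflops:.2f} PFLOPS")
```

The server counts reproduce the slide exactly; the PFLOPS values land within rounding of the quoted 0.4 / 0.9 / 1.3 / 1.7.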
Sun Constellation System - Open Petascale Architecture
Radical simplicity: building out a 3,456-node HPC supercomputer
Competitors' clusters (compute racks, leaf switches, core switches, cabling infrastructure):
> 300 switching elements
> 6,912 cables
> 92 racks
Constellation System open supercomputer (alternative open-standards fabric, reduced cabling):
> 1 switching element (300:1 reduction)
> 1,152 cables (6:1 reduction)
> 74 racks (20% smaller footprint)
Sun Constellation System - Open Petascale Architecture
Eco-efficient building blocks:
Compute - ultra-dense blade platform
> Fastest processors: SPARC, AMD Opteron, Intel Xeon
> Highest compute density
> Fastest host channel adaptor
Networking - ultra-dense switch solution
> 3,456-port InfiniBand switch
> Unrivaled cable simplification
> Most economical InfiniBand cost/port
Storage - ultra-dense storage solution
> Most economical and scalable parallel file system building block
> Up to 48 TB in 4 RU
> Direct cabling to the IB switch
Software - comprehensive software stack
> Integrated developer tools
> Integrated Grid Engine infrastructure
> Provisioning, monitoring, patching; simplified inventory management
> Linux
Sun Blade 6000 Modular System - 10 Server Module Capacity
Compute: 10 server modules; 40 Opteron cores, 80 Intel cores or 80 SPARC cores
I/O: 1.42 Tb/s I/O per chassis; 20 hot-plug PCIe ExpressModules(TM) for granular blade I/O configuration; 2 PCIe Network Express Modules(TM) for multi-blade I/O configuration
Availability: hot-swap fans; hot-swap power supplies; redundant power-grid connection capability
Management: SNMP, SSH, CLI
Form factor: 10 rack units
Sun Internal and Authorized Partners Only
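The 1.42 Tb/s chassis figure follows directly from the per-blade midplane bandwidth quoted later in the deck, summed across a full chassis:

```python
# 1.42 Tb/s chassis I/O = 10 server modules x 142 Gbps per blade.
BLADES_PER_CHASSIS = 10
PER_BLADE_GBPS = 142  # per-blade total from the midplane bandwidth table

chassis_tbps = BLADES_PER_CHASSIS * PER_BLADE_GBPS / 1000
print(chassis_tbps)  # 1.42
```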
Chassis Front View
> 10 server modules per chassis
> The chassis is 10 RU
System power
> 1+1 hot-swap power supplies
> 6,000 W (provides headroom to power future technology)
> Two plugs and cords per power supply
Chassis Rear View
> 20 x PCI Express ExpressModules (two per server module)
> 2 x PCI Express Network Express Modules (NEM)
> 1 x Chassis Monitoring Module
Cooling fans
> 6 x Rear Fan Modules
> Hot-swap
> Redundant (N+1)
4 x power inlets with cable holders
> Two inlets per power supply
> Cable holders prevent accidental loss of power
Chassis Air Flow
Air flow in the chassis is front to rear, with two separate air flows:
> One is powered by the Front Fan Modules within the power supplies
> The second is powered by the Rear Fan Modules, which cools the server modules
Power Supplies
Two hot-swap redundant power supplies per chassis
> A single power supply can power the entire chassis
> Each requires two power inlets
> Each is rated at 6,000 [W]
> Power supply output is 12 [V]
> Each holds a Front Fan Module
Front Fan Modules
> Redundant fan modules, 100 [CFM] each
> Fans fit in the Power Supply Modules
> Can be replaced by pulling the power supply
> Used to cool the power supplies, the PCIe ExpressModules, the Network Express Modules and the CMM
Rear Fan Modules
> N+1 hot-swap
> Accessible from the rear of the chassis
> Used to pull air through the server modules
> Redundant in-line fans
> Easy-removal handle
> Fault indicator
> Total of 800 [CFM]
Power Inlets
> Two power cords are required per power supply
> Input power is rated at 6,000 W per power supply in a 1+1 redundant configuration
> Each power inlet requires connection to a 200-240 [V], 16-20 [A] outlet
> Power inlets have a metal retainer that prevents accidental removal of the power cord
Power Connections
Redundant and hot-swap
> Grid redundant 1+1
> Total PSU rating: 6,000 [W]
> High-efficiency power supplies: approximately 90%
Each power cord is cabled to a power supply unit
> Two power cords are needed per power supply
> Four power cords are needed for full 1+1 redundancy
Power cord types
> AMER: L6-20P to IEC 320 C19, 2.5 m, Sun p/n 180-2005-01
> EPAC: IEC 309 to IEC 320 C19, 2.5 m, Sun p/n 180-2004-01
Power input: 220-240 [V] @ 16-20 [A]
Ordering Configurations
Each base chassis includes:
> Enclosure
> 2 x power supplies
> All fan modules
> Chassis Monitoring Module
One to four chassis per 42 RU Sun Rack
Rack sold separately
> Requires Power Distribution Units
Sun Blade 6048 Chassis
Compute: 4 x shelves per chassis; 48 x Sun Blade 6000 series server modules (Sun UltraSPARC, Intel Xeon, AMD Opteron)
I/O: 96 x hot-plug PCIe ExpressModules; 8 x hot-plug PCIe Network Express Modules; 12 x 1 Gbit Ethernet connections per NEM
Availability: hot-swap redundant fans (eight Rear Fan Modules and two Front Fan Modules per shelf); hot-swap power supply modules (two 9,000 W power supplies per shelf, eight per chassis)
Management: 4 x Chassis Monitoring Modules; Solaris, Linux or Windows OS, VMware
Density: 48 servers; 192 sockets / 768 cores
Ultra-Dense Blade Platform
Delivering the most efficient and eco-friendly node architecture
The first blade platform designed for extreme density and performance
> 6 TFLOPS, 768 cores per chassis / 42U
> 50% more compute power than HP c-Class; 71% more compute power than IBM BladeCenter H
> 4 InfiniBand leaf-switch Network Express Modules; lowest cost per port with the ultra-dense switch solution
Pay-as-you-grow platform ideal for fast-growing businesses
> Choose among SPARC, AMD Opteron and Intel Xeon CPU technologies
Runs general-purpose software
> Custom compiles and tuning are not required
Realize economies-of-scale savings in power and cooling
The Chassis Is the Rack
Up to 48 server modules; up to 6 TFLOPS per rack
(Callouts, front and back: Chassis Monitoring Module (CMM) and Power Interface Module; up to 24 PCIe ExpressModules (EM); 4 shelves of 1 to 12 server modules; N+N Power Supply Modules; up to 2 PCIe Network Express Modules (NEM); 8 Fan Modules)
Sun Blade 6048 Shelf Detail - Front
Shelf operator panel
> Shelf status
> Power and Locator LEDs
Two hot-swap Power Supply Modules (PSM)
> Each PSM includes a Front Fan Module
> Each PSM requires three 16 A @ 200 V inlets
> 3 x 3,000 W = 9,000 W to power a shelf
Twelve Sun Blade 6000 series modular servers
Sun Blade 6048 Shelf Detail - Rear
> Six power inlets
> Twenty-four PCIe ExpressModules (two per server module)
> One Chassis Monitoring Module
> One or two PCIe Network Express Modules (12 x 1 GbE pass-through; the IB NEM takes up two NEM slots)
> Eight dual in-line Rear Fan Modules
Power Supplies
Two hot-swap redundant power supplies per shelf
> A single power supply can power the entire shelf
> Each requires three power inlets
> Each is rated at 9,000 [W]
> Power supply output is 12 [V]
> Each holds a Front Fan Module
Front Fan Modules
> Redundant fan modules, 135 [CFM] each
> Fans fit in the Power Supply Modules
> Can be replaced by pulling the power supply
> Used to cool the power supplies, the PCIe ExpressModules, the Network Express Modules and the CMM
Rear Fan Modules
> N+1 hot-swap
> Accessible from the rear of the chassis
> Used to pull air through the server modules
> Redundant in-line fans
> Easy-removal handle
> Fault indicator
> Total of 1,050 [CFM]
Power Connections
Redundant and hot-swap
> Grid redundant 1+1
> Total PSU rating: 9,000 [W]
> High-efficiency power supplies
Each power cord is cabled to a power supply unit
> Three power cords are needed per power supply
> Six power cords are needed for full 1+1 redundancy
Power cord types
> AMER: L6-20P to IEC 320 C19, 2.5 m, Sun p/n 180-2005-01
> EPAC: IEC 309 to IEC 320 C19, 2.5 m, Sun p/n 180-2004-01
Power input: 220-240 [V] @ 16-20 [A]
Gigabit Ethernet Pass-Through NEM
> 12-port NEM
> Single-width NEM; up to two per shelf
The Gigabit Ethernet pass-through NEM provides connectivity to external network switches, allowing server-module communication with the outside world.
(Diagram: midplane connectors for Blade 0 through Blade 11 mapped to external network ports)
Sun Blade 6048 InfiniBand Switched NEM
> Double-width NEM; takes up both NEM slots of a shelf
> Twelve dual-port Host Channel Adapters (HCA), each on a PCIe x8 link from a server module
> Two 4x InfiniBand uplinks per server module through the HCA connection
> Two 24-port, 384 Gbps InfiniBand switches; on each, 12 x4 ports are used for the HCA connections and 4 x12 ports are used for the uplinks
> Eight 12x InfiniBand uplinks total through the switches
> Twelve 1 GbE pass-through uplinks
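The port counts above balance exactly. A small sketch of the budget for one of the NEM's two 24-port switches, assuming (as the slide implies) one 4x port per dual-port HCA per switch and three 4x ports per 12x uplink:

```python
# Port-budget check for one 24-port IB switch on the NEM: 12 4x ports
# face the twelve dual-port HCAs (one port per HCA per switch), and the
# remaining 12 are bundled into 4 12x uplinks (three 4x ports each).
SWITCH_PORTS_4X = 24
hca_facing_ports = 12
uplinks_12x = 4
PORTS_PER_12X = 3

used = hca_facing_ports + uplinks_12x * PORTS_PER_12X
print(used)             # 24: the switch is fully subscribed
print(2 * uplinks_12x)  # 8 x 12x uplinks across both switches
```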
Sun Blade 6000 Family Modules
Passive midplane
Server modules
> Sun Blade T6320 and T6300 Server Modules (UltraSPARC T2(R) and UltraSPARC T1(R), respectively)
> Sun Blade X6220 Server Module (AMD Next Generation Opteron 2000 series)
> Sun Blade X6250 Server Module (Intel Xeon 5100 and 5300 series)
PCI Express Network Express Module (NEM)
PCI Express ExpressModule (EM)
Chassis Monitoring Module (CMM)
Passive Midplane
To every server module, the midplane provides:
> To the Network Express Modules: two PCI Express x8 interfaces or two XAUI interfaces (GbE, FC or IB); Serial Attached SCSI links (4 total); Gigabit Ethernet links (2 total)
> To the PCI ExpressModules: two PCI Express x8 interfaces
> Service Processor Ethernet link and I2C network
Each NEM provides, per server module:
> One PCI Express x8 interface or XAUI interface (GbE, FC or IB)
> Serial Attached SCSI links (2 total)
> Gigabit Ethernet link (1 total)
> CMM Ethernet link and I2C network
(Board callouts: NEM connectors and server module I/O connectors; PCIe ExpressModule connectors to server modules; rear NEM, EM and Rear Fan Module connectors; CMM connector; server module power connectors; front-panel LED control board; front server module and power supply connectors; unidentified connectors are fan connectors)
Passive Midplane Data Interfaces
(Diagram, per server module: PCIe x8 links to the two ExpressModules; a PCIe x8 or XAUI link, Gigabit Ethernet and SAS to each NEM; Service Processor Ethernet)
Passive Midplane - Bandwidth per Server Module

  Interface                      T6300 / X6220      X6250 + REM, PCIe FEM   X6250 + REM, other fabric FEM
  PCIe links to ExpressModules   2 x 32 Gbps (x8)   2 x 32 Gbps (x8)        2 x 32 Gbps (x8)
  PCIe links to NEMs             2 x 32 Gbps (x8)   2 x 16 Gbps (x4)        2 x XAUI (variable)
  Gigabit Ethernet links         2 x 1 Gbps         2 x 1 Gbps              2 x 1 Gbps
  SAS links                      4 x 3 Gbps         4 x 3 Gbps              4 x 3 Gbps
  Service Processor links        1                  1                       1
  Total server module bandwidth  142 Gbps           110 Gbps                78+ Gbps
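The totals in the table can be re-derived from the per-interface rates: two EM links, two NEM links, two 1 Gbps Ethernet links and four 3 Gbps SAS links per server module.

```python
# Re-deriving the per-blade totals (Gbps) from the interface counts.
def blade_total_gbps(em_link=32, nem_link=32, gbe_links=2, sas_links=4):
    return 2 * em_link + 2 * nem_link + gbe_links * 1 + sas_links * 3

print(blade_total_gbps())             # T6300 / X6220: 142
print(blade_total_gbps(nem_link=16))  # X6250, PCIe FEM: 110
print(blade_total_gbps(nem_link=0))   # X6250, XAUI FEM (XAUI excluded): 78, hence "78+"
```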
UltraSPARC Server Modules
Sun Blade T6320 Server Module
Single-socket enterprise-class data center compute engine
Compute: 1 UltraSPARC T2 processor with 4, 6 or 8 cores @ 1.2 or 1.4 GHz and up to 64 threads; 16 x FB-DIMM slots (64 GB memory using 4 GB DIMMs)
I/O: 142 Gbps of throughput using I/O modules; 2 x hot-plug PCIe ExpressModules; 2 x hot-plug PCIe or 10 GbE Network Express Modules; 1 x Gbit Ethernet connection per NEM; 2 x SAS storage links per NEM; 4 x SAS or SATA 2.5" disk drives
Availability: hot-swap disks; RAID 0 or 1 with the basic REM; optional BBWC REM with RAID 0, 1, 5, 6, 10, 50, 60; no fans or PSU on the blade
Management: ILOM 2.0 remote management; Solaris(TM) operating system
Density: 1 socket / 8 cores / 64 threads per RU
Quick Specs
Single-socket UltraSPARC T2 based server module
16 FB-DIMMs (up to 64 GB RAM with 4 GB DIMMs)
> 128 GB at RR+1Q (with 8 GB DIMMs)
4 PCI Express slots
> Two PCIe ExpressModule x8 slots
> Two x4 PCIe or 10 GbE Network Express Module interfaces
2 Intel GigE ports onboard, available through the Network Express Modules
Storage
> Four SAS links to the midplane for external storage expansion
> Four internal SAS or SATA links for 2.5" small-form-factor hard drives
2 disk controller options
> Basic RAID 0, 1 RAID Expansion Module based on the LSI SAS1068E controller (Jackal)
> Advanced BBWC RAID 0, 1, 5, 10, 50, 60 using the StorageTek storage controller (Coyote)
Hot-swappable and redundant fans and PSUs
ILOM 2.0 Service Processor with legacy ALOM command-line capability
Sun Blade T6320 Server Module - Block Diagram
(Summary: the UltraSPARC T2 (Niagara 2) processor connects to FB-DIMM memory at 667 MHz; a Fabric Expansion Module (FEM) carries two PCIe x4 or 10 GbE paths toward the NEMs; a PCIe x8 link feeds a PEX8548 PCIe switch, which provides the two x8 (32 Gbps) ExpressModule links, x4 (16 Gbps) NEM links and the Intel Ophir dual Gbit Ethernet controller; the RAID Expansion Module drives the SAS HDDs and the SAS links to the midplane; a PCIe-to-PCI bridge serves USB 2.0, the front-panel adapter and ATI graphics (VGA output); the Motorola MPC885-based ILOM 2 Service Processor connects to the CMM over 10/100 Mbps management Ethernet.)
Sun Blade T6320
(Board callouts: ILOM 2.0 Service Processor connector; UltraSPARC T2 processor; PCIe FEM connector; XAUI FEM connector; PCIe or XAUI interfaces, four SAS links and two 1 GbE ports; four SAS/SATA hard drives; PCIe bridge; ATI VGA controller; 16 x FB-DIMM slots; Fault Remind button; REM connector)
Sun Blade T6320
(Callouts: ILOM 2.0 Service Processor card; Fabric Expansion Module card; RAID Expansion Module card)
RAID Expansion Module (REM)
> All standard configurations ship with the Jackal REM
> Based on the LSI SAS1068E storage controller
> Hardware RAID capabilities: RAID 0, 1 or 10
> Management via raidctl (Solaris), plus Linux, Windows and BIOS utilities
Sun Blade T6300 Server Module
Single-socket enterprise-class data center compute engine
Compute: 1 UltraSPARC T1 processor with 6 or 8 cores @ 1.0, 1.2 or 1.4 GHz and up to 32 threads; 32 GB memory, 8 x DIMM slots
I/O: 142 Gbps of throughput using I/O modules; 2 x hot-plug PCIe ExpressModules; 2 x hot-plug PCIe Network Express Modules; 1 x Gbit Ethernet connection per NEM; 2 x SAS storage links per NEM; 4 x SAS or SATA 2.5" disk drives
Availability: hot-swap disks; RAID 0 or 1 built-in; no fans or PSU on the blade
Management: ALOM remote management; Solaris(TM) operating system
Density: 1 socket / 8 cores / 32 threads per RU
Sun Blade T6300 Server Module Features
Single UltraSPARC T1 processor with 6 or 8 cores
> 1.0 GHz, 1.2 GHz or 1.4 GHz speed bins
High memory density
> 8 DIMM slots for a total of 32 GB of RAM per blade
> 1 GB, 2 GB and 4 GB DDR2 DIMMs
> All four memory controllers in the processor are used
Large I/O capacity
> Four PCI Express interfaces
> 2 x PCI Express ExpressModules (x8)
> 2 x PCI Express Network Express Module (NEM) interfaces (x8)
> Dual on-board Gigabit Ethernet interfaces available via NEM
> Four on-board SAS links for external expansion via NEM
Sun Blade T6300 Server Module Features
Versatile storage
> 4 hot-swap small-form-factor (2.5") SAS or SATA hard drives
> RAID 0 or 1 support (LSI SAS1068E)
Solaris 10 Update 3
Advanced Lights Out Management (ALOM) Service Processor is a standard feature
> CLI management (telnet, ssh)
> Sun N1(TM) System Manager support
USB 2.0 and serial ports are available through the front-panel connector
Sun Blade T6300 Block Diagram
(Summary: the UltraSPARC T1 processor uses DDR2-533 memory running at 400 MHz (10.7 GB/s) and connects over JBUS to two Fire JBUS-to-PCIe bridge chips; these provide the PCIe x8 (32 Gbps) ExpressModule links and, through PCIe bridges, the x8 and x4 (16 Gbps) NEM links, the Intel Ophir dual Gbit Ethernet controller and the LSI SAS1068E controller for the HDDs and midplane SAS links; the Motorola MPC885-based ALOM Service Processor (with JUNTA FPGA) provides RJ-45 serial (ALOM) and DB-9 serial (POSIX) ports, USB 2.0 through the front-panel adapter, and 10/100 Mbps management Ethernet to the CMM.)
Sun Blade T6300 Server Module Device Map
(Board callouts: CPU with two banks of 4 DIMM sockets, DDR2-533 @ 400 MHz; Fire chip JBUS-to-PCI-Express bridge; PCI Express bridges; integrated hardware RAID controller for internal and external storage; four internal SAS or SATA hard drives; four PCIe x8 lanes, four SAS interfaces and two Gigabit Ethernet ports to the midplane; per-blade ALOM management)
x64 Server Modules
Sun Blade X6250 Server Module
Dual-socket enterprise-class data center compute engine
Compute: Intel Xeon(TM) 5100 or 5300 series processors; 64 GB memory, 16 x DIMM slots
I/O: 110 Gbps of throughput; 2 x hot-plug PCIe ExpressModules; 2 x hot-plug PCIe Network Express Modules; 1 x Gbit Ethernet connection per NEM; 2 x SAS storage links per NEM (with REM); 4 x SAS (with REM) or SATA 2.5" disk drives; optional Fabric Expansion Module (FEM) and RAID Expansion Module (REM)
Availability: hot-swap disks; REM provides RAID 0, 1, 5 and 10 with built-in write cache and battery backup; no fans or PSUs on the blade
Management: Embedded LOM Service Processor; IPMI 2.0; remote KVM, floppy/CD-ROM; Solaris, Linux or Windows OS, VMware
Density: 2 sockets / 8 cores / 8 threads per RU
Sun Blade X6250 Server Module Features
64-bit processor technology
> 2 Intel Xeon 5300 series processors (L5310, E5320, E5345, X5355)
> 2 Intel Xeon 5100 series dual-core processors (E5160, ATO only)
> L series = 50 W, E series = 80 W, X series = 120 W
High memory density
> 16 memory slots for a total of 64 GB of RAM per blade
> 1 GB, 2 GB and 4 GB FB-DIMMs
Large I/O capacity
> Four PCI Express interfaces
> 2 x PCI Express ExpressModules (x8)
> 2 x PCI Express Network Express Module (NEM) interfaces, available using the Fabric Expansion Module (FEM), which splits the x8 interface into two x4 interfaces; the FEM will become available when PCI Express NEMs become available
> Dual on-board Gigabit Ethernet interfaces available via NEM
Sun Blade X6250 Server Module Features
Versatile storage
> 4 internal hot-swap small-form-factor (2.5") SAS or SATA hard drives
> SAS requires the RAID Expansion Module (REM), which is included in default configurations but is an option on XATO orders
> The REM provides RAID 0, 1, 5 and 10 with write cache and battery backup
Operating system flexibility
> Solaris, Linux, Windows, VMware
Embedded LOM Service Processor is a standard feature
> IPMI 2.0, HTTP(S) interface, CLI management, remote KVMS over Ethernet, SNMP
USB 2.0, video and serial (Service Processor) ports are available through the front-panel connector
Sun Blade X6250 Block Diagram
(Summary: two Intel 5100 or 5300 series EM64T processors connect over 5.3 GB/s front-side buses to the Intel 5000 MCH, with FB-DIMM 667 memory channels at 10.5 GB/s; the MCH provides the two PCIe x8 (32 Gbps) ExpressModule links, a PCIe x8 link to the Fabric Expansion Module (two x4 PCIe or XAUI paths to the NEMs) and a PCIe x8 link to the RAID Expansion Module (SAS hardware RAID controller for the SAS/SATA HDDs and midplane SAS links); the ESB2 I/O bridge supplies dual Gbit Ethernet, Compact Flash (IDE), SATA, USB 2.0, VGA (HD-15) and serial (DB-9) through the Super I/O and front-panel adapter; the AST2000 Service Processor connects to the CMM over 10/100 Mbps management Ethernet.)
Sun Blade X6250 Server Module Device Map
(Board callouts: Fabric Expansion Module (FEM) connector; northbridge and PCIe expansion hub; two CPUs, each with 8 FB-DIMM 667 sockets; four internal SAS or SATA hard drives; four PCIe x8 lanes, four SAS interfaces and two Gigabit Ethernet ports to the midplane; per-blade Embedded LOM Service Processor; CompactFlash card-reader boot device; integrated hardware RAID controller for internal and, in the future through the NEM, external storage)
Sun Blade X6450 Server Module
Quad-socket enterprise-class data center compute engine
Compute: 2 or 4 Intel Xeon(TM) 7000 sequence processors; 24 x FB-DIMM slots (96 GB using 4 GB DIMMs)
I/O: 110 Gbps of throughput; 2 x hot-plug PCIe ExpressModules; 2 x hot-plug PCIe Network Express Modules; 1 x Gbit Ethernet connection per NEM; 2 x SAS storage links per NEM (with REM); optional Fabric Expansion Module (FEM); RAID Expansion Module for external storage (REM); CompactFlash card for boot or storage
Availability: no disks; no fans or PSU on the blade
Management: Embedded LOM Service Processor; IPMI 2.0; remote KVM, floppy/CD-ROM; ILOM 1Q after RR; Solaris, Linux or Windows OS, VMware
Density: 4 sockets / 16 cores / 16 threads per RU
Sun Blade X6450 Quick Specs
4-socket Intel Xeon 7000 sequence based server module
> 4 Intel Xeon 7200/7300 series (dual- or quad-core) processors, 50 W or 80 W only
> Intel chipset (Clarksboro MCH northbridge + ESB2 southbridge)
> 24 FB-DIMMs (up to 96 GB RAM using 4 GB DIMMs)
4 PCI Express slots (two PCIe ExpressModule x8 slots and two PCIe Network Express Module x4 interfaces via a FEM)
2 Intel GigE ports onboard, available through the Network Express Modules
CompactFlash slot as an optional internal storage or boot device
Optional RAID Expansion Module (REM) for external storage connectivity
> Provides four SAS links to the midplane for external storage expansion
Hot-swappable and redundant fans and PSUs
Embedded LOM Service Processor [ILOM available 1Q after RR]
Sun Blade X6450 Block Diagram
(Summary: four Intel EM64T processors connect to the Clarksboro northbridge over 8.5 GB/s front-side buses, with FB-DIMM 667 memory channels at 5.3 GB/s; the northbridge provides the two PCIe x8 (32 Gbps) ExpressModule links, a PCIe x8 link to the Fabric Expansion Module (two x4 PCIe or XAUI paths to the NEMs) and a PCIe x4 link to the optional RAID Expansion Module (SAS hardware RAID controller for the midplane SAS links); the ESB2 I/O bridge supplies dual Gbit Ethernet, CompactFlash (IDE), USB 2.0, VGA (HD-15) and serial (DB-9 or RJ-45) through the Super I/O and front-panel adapter; the ILOM 2.0 Service Processor connects to the CMM over 10/100 Mbps management Ethernet.)
Sun Blade X6450 Server Module
(Board callouts: 24 FB-DIMM slots, up to 96 GB of RAM; FEM connector; four Intel 7000 processors, dual- or quad-core; four PCIe interfaces, four SAS links and two 1 GbE ports to the midplane; Intel 7000 MCH (Clarksboro) northbridge; CompactFlash storage; REM connector; Fault Remind button; ESB2 I/O bridge; battery location for the REM)
FB-DIMM Memory Population Rules
> The Memory Controller Hub (MCH) provides four FB-DIMM memory channels
> The first DIMM slot of each channel is identified by the white ejector handles
> Populate DIMMs using all channels: follow ABCD, then 0123 (A0, B0, C0, D0, then A1, B1, C1, D1, etc.)
> Populate larger DIMM sizes closer to the MCH: 4 GB first, 2 GB second and 1 GB third (4 GB in A0, B0, C0, D0, then 2 GB in A1, B1, C1, D1, etc.)
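The ordering rule above can be sketched in a few lines: fill the first slot of every channel (A through D) before moving to the next slot number, placing the largest DIMMs in the lowest-numbered slots (closest to the MCH).

```python
# Generate the FB-DIMM population order described above: channels ABCD
# first, then the next slot number, and so on.
def population_order(slots_per_channel, channels="ABCD"):
    return [f"{ch}{slot}" for slot in range(slots_per_channel)
            for ch in channels]

# The X6450's 24 FB-DIMM slots are 4 channels x 6 slots:
order = population_order(6)
print(order[:8])  # ['A0', 'B0', 'C0', 'D0', 'A1', 'B1', 'C1', 'D1']
```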
RAID Expansion Module (REM)
> All standard configurations ship with the REM; optional add-on card for XATO orders only
> If no REM is used, the server may boot from the CompactFlash slot or the SATA interfaces derived from the Enterprise South Bridge 2
Hardware RAID capabilities
> 128 MB of cache
> Battery-backed, Li-Ion (72 hours when fully charged)
> RAID 0, 1, 5 or 10
Adaptec Storage Manager software is used to monitor the storage solution
Embedded LOM Service Processor
Remote management of the server module
Accessible over a serial line through the front-panel dongle
Network access is available through the CMM network interface and provides:
> On-board CLI management over SSH
> IPMI and SNMP management from a management console
> Full-featured web GUI management

Example login session:

  Sun Microsystems Embedded Lights Out Manager
  Copyright 2006 Sun Microsystems, Inc. All rights reserved.
  Hostname: SUNSP001636F16730
  IP address: 10.6.163.66
  MAC address: 00:16:36:F1:67:30

  -> version
  SM CLP Version v1.0.0
  SM ME Addressing Version v1.0.0

  -> show /
  Targets: SP SYS CH
  Properties:
  Target Commands: show cd
  ->
Embedded LOM Service Processor
ipmitool management and web GUI management:

  bash-3.00$ ipmitool -H 10.6.163.66 -U root sdr
  Password:
  CPU 0 Temp       92 degrees C   ok
  CPU 1 Temp       92 degrees C   ok
  VRD 0 Temp       23 degrees C   ok
  VRD 1 Temp       22 degrees C   ok
  DIMM 0 Temp      disabled       ns
  DIMM 1 Temp      disabled       ns
  P_VCCP0          1.23 Volts     ok
  P_VCCP1          1.25 Volts     ok
  P1V2_VTT         1.20 Volts     ok
  P1V5_MCH         1.49 Volts     ok
  P2V5             2.51 Volts     ok
  P1V8_B0          1.81 Volts     ok
  P1V2_NIC         1.22 Volts     ok
  ...
  System Event     Not Readable   ns
  Critical INT     Not Readable   ns
  Button           Not Readable   ns
  Boot Error       Not Readable   ns
  Watchdog         Not Readable   ns
  bash-3.00$
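Output like the `ipmitool sdr` listing above is easy to script against. A minimal sketch, assuming the three columns (sensor name, reading, status) are separated by runs of two or more spaces, since the sensor names themselves contain single spaces:

```python
# Parse ipmitool-style "name  reading  status" sensor rows into a dict.
import re

sample = """\
CPU 0 Temp       92 degrees C   ok
P_VCCP0          1.23 Volts     ok
DIMM 0 Temp      disabled       ns
"""

def parse_sdr(text):
    rows = {}
    for line in text.splitlines():
        # Split on runs of 2+ spaces so "CPU 0 Temp" stays one field.
        fields = re.split(r"\s{2,}", line.strip())
        if len(fields) == 3:
            name, reading, status = fields
            rows[name] = (reading, status)
    return rows

sensors = parse_sdr(sample)
print(sensors["CPU 0 Temp"])  # ('92 degrees C', 'ok')
```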
Embedded LOM Service Processor - Java Remote Console
> Menus provide quick access to storage setup and I/O functions, as well as configuration of Alt-, function- or Ctrl-based hot-keys
> The viewing window presents all POST and BIOS messages and shows interaction with the server platform
> Network-link and storage icons indicate whether these functions are active for the Java Remote Console tab
Sun Blade X6220 Server Module
2-socket enterprise-class data center compute engine
Compute: 2 AMD Next Generation Opteron sockets (dual-core only); 64 GB memory, 16 x DIMM slots
I/O: 142 Gbps of throughput using I/O modules; 2 x hot-plug PCIe ExpressModules; 2 x hot-plug PCIe Network Express Modules; 1 x Gbit Ethernet connection per NEM; 2 x SAS storage links per NEM; 4 x SAS or SATA 2.5" disk drives
Availability: hot-swap disks; RAID 0 or 1 built-in; no fans or PSUs on the blade
Management: ILOM Service Processor featuring IPMI 2.0; remote KVM, floppy/CD-ROM; Solaris, Linux or Windows
Density: 2 sockets / 4 cores / 4 threads per RU
Sun Blade X6220 Server Module Features
64-bit processor technology
> 2 AMD Next Generation Opteron 2000 series processors
> 2212, 2218, 2220; ATO: {2212, 2218, 2220} (95 W), {2222} (120 W)
High memory density
> 16 memory slots for a total of 64 GB of RAM per blade
> 1 GB, 2 GB and 4 GB DDR2 DIMMs at 667 MHz
> When all DIMMs are populated, the memory speed is reduced to 533 MHz
Large I/O capacity
> Four PCI Express interfaces
> 2 x PCI Express ExpressModules
> 2 x PCI Express Network Express Module (NEM) interfaces
> Dual on-board Gigabit Ethernet interfaces, derived from the NVIDIA(TM) chipset, available via NEM
> Four on-board SAS links for external expansion via NEM
Sun Blade X6220 Server Module Features
Versatile storage
> 4 internal hot-swap small-form-factor (2.5") SAS or SATA hard drives
> 4 SAS links available through the NEM for future expansion capabilities
> On-board RAID 0 or 1 support via the LSI SAS1068E
Operating system flexibility
> Solaris, Linux, Windows, VMware
ILOM Service Processor is a standard feature
> IPMI 2.0, HTTP(S) interface, CLI management, remote KVMS over Ethernet, SNMP
USB 2.0, video and Service Processor serial ports are available through the front-panel connector
Sun Blade X6220 Block Diagram
(Summary: two Next Generation AMD Opteron 2000 series processors each have 10.7 GB/s DDR2-667 memory and 8 GB/s HyperTransport links to the NVIDIA nForce4 CK8-04 and IO-04 bridges; these provide the two PCIe x8 (32 Gbps) ExpressModule links, the PCIe x8 NEM links, dual Gbit Ethernet, and a PCIe x4 link to the LSI SAS1068e controller for the HDDs and midplane SAS links; the CK8-04 also supplies USB 2.0 (three ports plus remote KMS), CompactFlash (IDE), Super I/O serial (DB-9) and ATI RageXL VGA (HD-15) with video-over-LAN redirect; the Motorola MPC8275-based Service Processor connects to the CMM over 10/100 Mbps management Ethernet.)
Sun Blade X6220 Server Module Device Map
(Board callouts: two CPUs, each with 8 DDR2-667 DIMM sockets; NVIDIA nForce CK8-04 and IO-04 PCI Express I/O bridges; integrated hardware RAID controller for internal and, in the future through the NEM, external storage; four internal SAS or SATA hard drives; four PCIe x8 lanes, four SAS interfaces and two Gigabit Ethernet ports to the midplane; CompactFlash card-reader boot device; per-blade ILOM management)
Sun Blade X6220 ILOM Service Processor
Remote management of the server module
Accessible over a serial line through the front-panel dongle
Network access is available through the CMM network interface and provides:
> On-board CLI management over SSH
> IPMI and SNMP management from a management console
> Full-featured web GUI management

Example login session:

  Sun(TM) Integrated Lights Out Manager
  Version 1.0
  Copyright 2005 Sun Microsystems, Inc. All rights reserved.
  Warning: password is set to factory default.

  -> version
  SP firmware version: 1.0
  SP firmware build number: 9306
  SP firmware date: Tue Feb 28 16:01:24 PST 2006
  SP filesystem version: 0.1.13

  -> show /
  Targets: SP SYS
  Properties:
  Commands: cd show
  ->
Sun Blade X6220 ILOM Service Processor
ipmitool management and web GUI management:

  bash-3.00$ /opt/ipmitool/bin/ipmitool -H 10.6.163.71 -U root -P changeme sdr
  sys.id            0x02           ok
  sys.intsw         0x00           ok
  sys.psfail        0x01           ok
  sys.tempfail      0x01           ok
  sys.fanfail       0x01           ok
  mb.t_amb          27 degrees C   ok
  mb.v_bat          3.12 Volts     ok
  mb.v_+3v3stby     3.27 Volts     ok
  mb.v_+3v3         3.34 Volts     ok
  mb.v_+5v          4.99 Volts     ok
  ...
  ft1.fm2.f0.speed  11000 RPM      ok
  ft0.fm0.f1.speed  10000 RPM      ok
  ft0.fm1.f1.speed  10000 RPM      ok
  ft0.fm2.f1.speed  10000 RPM      ok
  ft1.fm0.f1.speed  11000 RPM      ok
  ft1.fm1.f1.speed  11000 RPM      ok
  ft1.fm2.f1.speed  11000 RPM      ok
  bash-3.00$
Sun Blade X6220 ILOM service processor
Java Remote Console:
> Tabs provide multiple ILOM sessions through one Java Remote Console instance
> Menus provide quick access to storage setup and I/O functions, as well as creation of new ILOM sessions
> The viewing window presents all POST and BIOS messages and shows interaction with the server platform
> Keyboard, Mouse, CDROM and Floppy icons indicate whether each of these functions is active for the currently active Java Remote Console tab
73
Components common to Server Modules 74
Fault Remind Button
To diagnose a Blade Server Module, it must be removed from the chassis
> This means power is no longer available to the Server Module
A push-button system holds a charge to identify failed components within the Module:
> CPUs
> DIMMs
75
Server Module Hard Drives - SAS
Serial Attached SCSI (SAS)
2.5" compact form factor
Server grade:
> Hot swappable
> High performance: 10,000 RPM, faster than 10k RPM SCSI drives
> High density: 73 or 146 GB
> Higher densities offered when available
> Enterprise-class reliability: 1.6M hours MTBF
76
Server Module Hard Drives - SATA
Serial ATA (SATA)
2.5" compact form factor
Server grade:
> Hot swappable
> 5400 RPM
> High density: 80 GB
> Higher densities offered when available
> More than 500k hours MTBF
77
Server Module Hard Drive Controller
LSI SAS1068e(TM) - X6220 and T6300/T6320 ONLY
> 8 SAS ports, each port capable of 3.0 Gbit/s SAS link rates
> Total available SAS bandwidth: 24 Gbit/s
> Available through PCIe: 16 Gbit/s
> LSI Integrated Hardware RAID (low cost RAID solution that offloads the CPU)
> Based on Fusion-MPT architecture
> Four ports are used for the internal hard drives (internal storage)
> Four ports are used for the external storage connections through NEM 0 and NEM 1
[Diagram: the LSI SAS1068e's eight Tx/Rx port pairs, four to internal SAS devices and four to NEM 0 / NEM 1]
78
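The bandwidth figures above follow from simple per-port arithmetic. A minimal check, using the numbers from the slide (the variable names are my own, not Sun's):

```shell
#!/bin/sh
# Back-of-envelope check of the SAS1068e bandwidth figures from the slide.
ports=8
gbit_per_link=3                                    # 3.0 Gbit/s per SAS link
total=$((ports * gbit_per_link))                   # aggregate SAS bandwidth
echo "Total SAS bandwidth:  ${total} Gbit/s"       # 24 Gbit/s, as stated
echo "Internal drives:      $((4 * gbit_per_link)) Gbit/s (4 ports)"
echo "External via NEMs:    $((4 * gbit_per_link)) Gbit/s (4 ports)"
```

Note the PCIe side (16 Gbit/s) is less than the 24 Gbit/s SAS aggregate, so the host link, not the drive ports, is the ceiling when all eight ports are busy.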
Server Module Adapter (Octopus cable)

Connector   x64 Servers                St. Paul
DB-9        Service Processor serial   POSIX serial
HD-DB15     VGA output                 N/A
RJ-45       N/A                        Service Processor (ALOM) serial
USB         USB 2.0                    USB 2.0
79
Network Express Module Constellation Modular System unique form factor Connects via midplane to all 10 blades in chassis Provides bulk or aggregated I/O options Appears to blades as standard PCI-Express adapter May implement an exposed or hidden management model, depending on function Provides termination for Server Module storage and network interfaces 80
Gigabit Ethernet Pass-through NEM
The Gigabit Ethernet pass-through NEM provides connectivity to external network switches, which allows Server Module communications to the outside
[Diagram: midplane connectors for Blades 0-9 passed straight through to the external network ports]
81
PCI Express ExpressModule (EM)
New PCI-SIG standard form factor
> 32 Gb/s I/O per module (x8 PCIe lanes)
2 EMs for each Server Module
> 20 EMs per chassis
Highly serviceable
> Hot-pluggable
> Fully customer replaceable unit (CRU) without opening chassis or blade
Industry-standard I/O
> More I/O options, no vendor lock-in
82
PCI Express ExpressModule
Industry standard hot-plug enclosure for safe/reliable replacement and handling
Rear connectors for easy connectivity to external I/O
Standard PCIe silicon
[Photo: ExpressModule shown with the protective cover removed]
83
PCI Express ExpressModules
EMs available from Sun:
> Dual-port GbE EM (Intel NICs)
> Dual-port 4Gbps FibreChannel EM (Qlogic(TM))
> Dual-port 4X InfiniBand EM (Mellanox(TM))
Other available EMs:
> Quad-port GbE EM
> 10GbE EM
> Combo card: 2 x 1GbE + 2 x 4Gb FC
84
InfiniBand ExpressModule Dual port 4x InfiniBand EM Produced by Mellanox, productized by NSN Based on MT25208 InfiniHost III Ex PCI Express x8 lane width 256MB of onboard memory Drivers will be available for Solaris, Win2k3 and Linux 85
FibreChannel ExpressModule
Dual port 4Gbps FibreChannel EM
Produced by Qlogic, productized by DMG
Based on the Qlogic ISP2432 FC-to-PCIe controller
PCI Express x4 lane width
Drivers available for Solaris, Win2k3 and Linux
86
GigabitEthernet ExpressModule Dual port 1GbEthernet (Cu) EM Produced by NSN Based on the Intel 82571EB Gigabit Ethernet controller (ophir) Same device available on the NEM PCI Express x4 lane width Drivers available for Solaris, Win2k3 and Linux 87
List of available PCIe ExpressModules

ExpressModule                           Vendor               Part Number
4Gb FC Dual Port                        Qlogic               SG-XPCIE2FC-QB4-Z
1GbE Dual Port, Copper                                       X7282A-Z
1GbE Dual Port, Fiber                                        X7283A-Z
4X IB SDR 256MB Dual Port               Mellanox             X1288A-Z
4Gb FC Dual Port                        Emulex               SG-XPCIE2FC-EB4-Z
12Gb SAS Dual Port                      LSI Logic            SG-XPCIE8SAS-EB-Z
12Gb SAS Single Port RAID               Intel SRL            SGXPCIESAS-R-BLD-Z
1GbE Quad Port, Copper                  Intel                X7284A-Z
1GbE Quad Port, Copper                  Neptune              X7287A-Z
10GbE Dual Port, Fiber                  Neptune              X1028A-Z
4X IB DDR Nomem Single Port             Mellanox (Hermon)    X1290A
88
Constellation Modular System Chassis Monitoring Module (CMM)
Chassis/enclosure monitoring and management
Comprehensive management features:
> LDAP authentication
> Network access
> Secure protocols (SSH2)
> Serial access
All functions available through the SSH CLI
Easy integration into Sun and 3rd-party management infrastructure
Consistent management with rack-mount servers
89
Chassis Monitoring Module
[Photo: CMM with the Serial Interface, Network Management ports, Status LEDs, Service Processor Module and CMM Switch Module labeled]
90
Chassis Monitoring Module LED Control
[Diagram: the CMM Service Processor connects through an unmanaged BCM5321 switch and the passive midplane to the ten blade slots, NEM 0 and NEM 1, with Gigabit Ethernet Uplinks 0 and 1 to the outside]
91
Management overview
CMM provides an SSH/CLI interface for management
Shows:
> Presence of server modules and Network Express Modules
> Overall status of the chassis components (fans, LEDs, PSUs)
Enables initial configuration of the server modules' service processor network information

DMTF SMASH example:

-> show
/
 Targets:
   CH
   CMM
 Properties:
 Commands:
   cd
   show
->
92
Initial setup example through CMM
BL0 service processor network setup using the CLI:

-> cd /CH/BL0/SP/network
/CH/BL0/SP/network

-> set pendingipdiscovery=static
Set 'pendingipdiscovery' to 'static'
-> set pendingipaddress=10.6.160.18
Set 'pendingipaddress' to '10.6.160.18'
-> set pendingipgateway=10.6.163.254
Set 'pendingipgateway' to '10.6.163.254'
-> set pendingipnetmask=255.255.252.0
Set 'pendingipnetmask' to '255.255.252.0'
-> set commitpending=true
Set 'commitpending' to 'true'

Verify settings:

-> show /CH/BL0/SP/network
/CH/BL0/SP/network
 Targets:
 Properties:
   type = Network Configuration
   commitpending = (Cannot show property)
   ipaddress = 10.6.160.18
   ipdiscovery = unknown
   ipgateway = 10.6.163.254
   ipnetmask = 255.255.252.0
   macaddress = 00:00:00:00:00:00
   pendingipaddress = 10.6.160.18
   pendingipdiscovery = unknown
   pendingipgateway = 10.6.163.254
   pendingipnetmask = 255.255.252.0
 Commands:
   cd
   set
   show
->
93
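When many blades need static addresses, typing this sequence by hand gets tedious. A minimal sketch of a generator for the same CLI sequence follows; the function name `ilom_net_setup` is my own, the CLI commands are the ones from the slide, and the commented-out ssh delivery to a hypothetical CMM hostname is an assumption to adapt to your environment.

```shell
#!/bin/sh
# Sketch: emit the ILOM CLI sequence from the slide for one blade slot.
blade=BL0                  # slot to configure
ip=10.6.160.18             # example values from the slide
gateway=10.6.163.254
netmask=255.255.252.0

# Print the command sequence; pipe it wherever your CMM session expects it.
ilom_net_setup() {
    cat <<EOF
cd /CH/${blade}/SP/network
set pendingipdiscovery=static
set pendingipaddress=${ip}
set pendingipgateway=${gateway}
set pendingipnetmask=${netmask}
set commitpending=true
show /CH/${blade}/SP/network
EOF
}

ilom_net_setup
# ilom_net_setup | ssh root@cmm-host    # hypothetical CMM hostname
```

Looping `blade` over BL0..BL9 with per-slot addresses would configure a whole chassis from one script.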
Backup slides follow 94
Chassis Summary

                                      SB6000 Chassis   SB8000 Chassis          SB8000P Chassis
Server Module Slots                   10               10                      10
PCI-E ExpressModule Slots             20               20                      0
PCI-E Network Express Module Slots    2                4                       2
Power
  Rating                              6000 [W]         9000 [W]                9000 [W]
  Power Supplies                      2                6                       4
  Power Inlets                        4                6                       4
  Power Redundancy                    N+N (1+1)        N+N (3+3)               N+1 (3+1)
Fans                                  18               27                      22
  Front                               6                9                       4
  Back                                12               18                      18
Remote Management
  CMM ILOM Redundancy                 No               Yes                     Yes
  CMM Network Redundancy              No               Yes                     Yes
  CMM Network Ports                   2                2 Standard, 4 Optional  2 Standard, 4 Optional
  CMM Serial Ports                    1                1 Standard, 2 Optional  1 Standard, 2 Optional
Density
  Rack units                          10               19                      14
  Chassis per 42 RU rack              4                2                       3
  Maximum power consumption per rack  24000 [W]        18000 [W]               27000 [W]
  Cores per rack                      320              160                     240
95
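The density rows follow directly from the per-chassis figures. A quick sanity check for the SB6000 column (variable names are mine; 10 blades per chassis and 8 cores maximum per blade come from the tables):

```shell
#!/bin/sh
# Derive the SB6000 density rows from the per-chassis numbers above.
ru_per_chassis=10
chassis_per_rack=$((42 / ru_per_chassis))        # 42 RU rack -> 4 chassis
watts_per_rack=$((chassis_per_rack * 6000))      # 6000 W rating per chassis
cores_per_rack=$((chassis_per_rack * 10 * 8))    # 10 blades x 8 cores max
echo "${chassis_per_rack} chassis, ${watts_per_rack} W, ${cores_per_rack} cores per rack"
```

The same arithmetic reproduces the SB8000 (19 RU, 2 per rack, 18000 W, 160 cores) and SB8000P (14 RU, 3 per rack, 27000 W, 240 cores) columns.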
Blade Summary
SB6000 Chassis: X6220, T6300, X6250. SB8000/SB8000P Chassis: X8400, X8420.

Server Module             X6220                    T6300                    X6250                    X8400                    X8420
CPU Sockets               2                        1                        2                        4                        4
CPU Type                  AMD Opteron 2000 Series  UltraSPARC T1            Intel 5300 Series        AMD Opteron 800 Series   AMD Opteron 8000 Series
Maximum Cores             4                        8                        8                        8                        8
Maximum Threads           4                        32                       8                        8                        8
Memory type               DDR2 @667                DDR2 @400                FBDIMMs @667             DDR @400                 DDR2 @667
DIMM sockets              16                       8                        16                       16                       16
I/O interfaces
  PCI-E ExpressModules    2                        2                        2                        2                        2
  NEM PCI-E               2 (x8)                   2 (x8)                   2 (x4 if FEM present)    2 (x8) + 2 (x4)          2 (x8) + 2 (x4)
  Gigabit Ethernet        2                        2                        2                        0                        0
  SAS                     4                        4                        4 (if REM present)       0                        0
Storage controller        LSI SAS1068              LSI SAS1068              Intel/Adaptec            LSI SAS1064              LSI SAS1064
Battery backed cache      No                       No                       Yes, 128MB               No                       No
Hardware RAID             0, 1                     0, 1                     0, 1, 5, 10              0, 1                     0, 1
On-board HDD              4 x SAS or SATA          4 x SAS or SATA          4 x SAS or SATA          2 x SAS or SATA          2 x SAS or SATA
Service Processor Type    ILOM                     ALOM                     Embedded LOM             ILOM                     ILOM
On-board or Mezzanine     Mezzanine                Mezzanine                On-board                 On-board                 On-board
Front panel connectors    (needs dongle)           (needs dongle)           (needs dongle)
  RJ-45                   N/A                      ALOM Serial              N/A                      N/A                      N/A
  DB-9                    ILOM Serial              POSIX Serial             Emb. LOM Serial          ILOM Serial              ILOM Serial
  VGA                     Platform VGA             N/A                      Platform VGA             Platform VGA             Platform VGA
  USB                     USB 2.0                  USB 2.0                  USB 2.0                  USB 2.0                  USB 2.0
96