High-Performance Computing Clusters




7401 Round Pond Road, North Syracuse, NY 13212
Ph: 800.227.3432  Fx: 315.433.0945
www.nexlink.com

What Is a Cluster?
There are several types of clusters, and the only constant is that clusters keep changing. Here are some examples of common clusters:

High-availability (HA) Clusters
High-availability clusters (a.k.a. failover clusters) are used in industries where downtime is not an option. Toll roads, subways, financial institutions, and 911 call centers, to name a few, are all entities with high-availability requirements. HA clusters generally come in a redundant two-node configuration, with the redundant node taking over when the primary node fails. Seneca Data offers the NEC Express5800 to meet this need. For more details on high-availability / fault-tolerant clusters, visit www.senecadata.com/products/server_nec.aspx.

[Diagram: a conventional system (CPU, chipset, memory, PCI I/O, disk) compared with the NEC Express5800 fault-tolerant system: dual-module redundancy with mirrored disks, multi-path I/O with isolation, an FT crossbar joining I/O and processing subsystems A and B, and lockstep CPUs. Benefits called out: no single point of failure, zero switchover time, single software image.]

Load-balancing Clusters
Load-balancing clusters (a.k.a. server farms) operate by having the entire workload come through one or more load-balancing front ends, which then distribute computing tasks to a collection of back-end servers. A commonly used free software package for Linux-based load-balancing clusters is available at www.linuxvirtualserver.org. Most Seneca Data Nexlink load-balancing clusters use Linux.

[Diagram: a network load-balancing cluster with virtual IP 111.111.111.10 distributing requests across three web servers (111.111.111.1, 111.111.111.2, 111.111.111.3), each serving HTML, ASP.NET, COM, and ASP, backed by a central database.]
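The Linux Virtual Server software mentioned above performs this distribution in the kernel; purely as an illustration of the round-robin idea (it is not part of the original datasheet), the sketch below shows a Python front end that hands each incoming connection to the next back-end web server in turn. The back-end addresses mirror the example diagram; the listening port and the single-threaded request/response handling are simplifying assumptions.

    # lb_sketch.py -- illustrative round-robin front end (not LVS itself).
    # Assumptions: back ends answer simple request/response traffic and the
    # front end listens on port 8080; addresses come from the example diagram.
    import itertools
    import socket

    BACKENDS = [("111.111.111.1", 80),   # web server 1
                ("111.111.111.2", 80),   # web server 2
                ("111.111.111.3", 80)]   # web server 3
    next_backend = itertools.cycle(BACKENDS)

    def handle(client: socket.socket) -> None:
        """Forward one request to the next back end and relay the reply."""
        with socket.create_connection(next(next_backend)) as backend:
            backend.sendall(client.recv(65536))
            # Relay the response until the back end closes its side.
            while chunk := backend.recv(65536):
                client.sendall(chunk)

    def serve(listen_addr=("0.0.0.0", 8080)) -> None:
        """Accept connections on the cluster's front-end (virtual IP) address."""
        with socket.create_server(listen_addr) as frontend:
            while True:
                client, _ = frontend.accept()
                with client:
                    handle(client)

    if __name__ == "__main__":
        serve()

A production front end such as LVS also tracks back-end health and can balance by load rather than strict rotation, but the dispatch pattern is the same.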

High-performance Computing (HPC) Clusters
HPC clusters provide increased performance by splitting a computational task across many homogeneous nodes in a cluster and working on it in parallel. HPC clusters are most commonly used in scientific computing, where the end user designs programs specifically to exploit the parallelism these clusters provide. Seneca Data works with a number of government, higher education, and enterprise customers to design and build HPC clusters.

[Image: virtual 3-D model rendering]

Grid Computing Clusters
Grid clusters are similar to HPC clusters; the key difference is that grids connect collections of computers which do not fully trust each other, and hence operate more like a computing utility than like a single computer. In addition, grids typically support more heterogeneous collections of systems than are commonly supported in clusters.

Microsoft Compute Clusters
Windows Compute Cluster Server 2003 is a cluster of servers that includes a single head node and one or more compute nodes. The head node controls and mediates all access to the cluster's resources and is the single point of management, deployment, and job scheduling for the compute cluster. Windows Compute Cluster Server 2003 uses the existing corporate Active Directory infrastructure for security, account management, and overall operations management with tools such as Microsoft Operations Manager 2005 and Microsoft Systems Management Server 2003.

[Diagram: a typical Windows Compute Cluster Server 2003 network. Workstations, Active Directory, file, MOM, and mail servers sit on the public (corporate) network; the head node bridges to the compute nodes over a private MS-MPI interconnect.]

Configuring Windows Compute Cluster Server 2003 involves installing the operating system on the head node, joining it to an existing Active Directory domain, and then installing the Compute Cluster Pack. If you are using Remote Installation Services (RIS) to deploy compute nodes automatically, RIS is installed and configured as part of the To Do List after installation is complete. When the Compute Cluster Pack installation finishes, it displays a To Do List page showing the remaining steps needed to complete the configuration of your cluster: defining the network topology, configuring RIS with the Configure RIS Wizard, adding compute nodes to the cluster, and configuring cluster users and administrators.
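The parallelism described above is normally expressed through a message-passing library; the Windows Compute Cluster Server diagram shows MS-MPI in that role. As an illustration only, and assuming the mpi4py binding rather than anything shipped with the product, the sketch below splits a numerical integration of pi across all processes in a job and lets the root rank combine the partial sums, which is the basic decomposition pattern HPC cluster programs follow.

    # pi_mpi.py -- estimate pi by splitting the integration range across ranks.
    # Assumption: an MPI runtime and the mpi4py package exist on every node.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()      # this process's id within the cluster job
    size = comm.Get_size()      # total number of processes across the nodes

    n = 10_000_000              # number of integration intervals
    h = 1.0 / n

    # Each rank handles every size-th interval: a simple homogeneous split.
    local_sum = 0.0
    for i in range(rank, n, size):
        x = h * (i + 0.5)
        local_sum += 4.0 / (1.0 + x * x)

    # The root rank (typically on or near the head node) combines the results.
    pi = comm.reduce(local_sum * h, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"pi ~= {pi:.10f} using {size} processes")

Launched through an MPI launcher (for example, mpiexec -n 16 python pi_mpi.py) or submitted via the cluster's job scheduler, the same code scales with the number of compute nodes without modification.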

Nexlink High-performance Computing (HPC) Cluster Offering
Nexlink HPC clusters, manufactured by Seneca Data, are tailor-made to customer requirements, offer maximum system performance, and are on the cutting edge of innovation and design. As a custom HPC cluster manufacturer, Seneca Data delivers solutions tailored for diverse market segments and industries by offering:
- Industrial cluster hardware engineering and manufacturing in our 40,000 sq. ft., ISO 9001:2000 compliant facility
- Stylish marketing, branding, and labeling design
- Simplified ordering, build-up, testing, qualification, and staging
- Comprehensive delivery, setup, and post-purchase service options

Cluster Hardware Engineering
Seneca Data sales engineers can specify any or all hardware and software requirements, such as:

Systems Hardware Components
- Intel or AMD processors
- Graphics adapter / GPU requirements
- 1U to 8U system chassis
- Interconnectivity: 1, 2, or 3 NIC ports; InfiniBand HCA

Comprehensive System Software Requirements
- Security considerations: port manipulation, services enabling/disabling
- Node-naming conventions
- Storage mounting
- Network file system administration
- Access considerations: user access control
(A small provisioning sketch illustrating node naming and shared-storage mounts appears after the rack lists below.)

Storage Considerations
- Storage requirements per head node: short term and long term
- Storage requirements per compute node

Rack Hardware Configuration
- Rack height requirements (30U to 46U racks)
- Rack depth requirements
- Rack power requirements (single, dual, triple, or quad power)
- Rack color: front, back, and sides
- Rack labeling: front, back, and sides

Rack Components
- Power distribution units per rack
- Cover plates needed
- Cabling
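As a purely hypothetical illustration of two items from the system software requirements above (node-naming conventions and storage mounting), the sketch below applies a naming scheme to a set of compute nodes and generates the NFS entries a deployment tool could push to each node's /etc/fstab. The head-node name, prefix, and export paths are invented for the example, not taken from the datasheet.

    # provision_sketch.py -- hypothetical node naming and shared-storage mounts.
    HEAD_NODE = "hpc-head01"          # assumed head-node hostname
    NODE_PREFIX = "hpc-n"             # naming convention: hpc-n001, hpc-n002, ...
    SHARED_MOUNTS = {                 # mount point -> NFS export on the head node
        "/home":    f"{HEAD_NODE}:/export/home",     # user directories
        "/scratch": f"{HEAD_NODE}:/export/scratch",  # short-term job storage
    }

    def node_names(count: int) -> list[str]:
        """Apply the naming convention to 'count' compute nodes."""
        return [f"{NODE_PREFIX}{i:03d}" for i in range(1, count + 1)]

    def fstab_lines() -> list[str]:
        """NFS mount entries a deployment tool could append to /etc/fstab."""
        return [f"{export}  {mountpoint}  nfs  defaults,_netdev  0 0"
                for mountpoint, export in SHARED_MOUNTS.items()]

    if __name__ == "__main__":
        print("compute nodes:", ", ".join(node_names(8)))
        print("\n".join(fstab_lines()))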

Rack Management Options
- Out-of-band management: IPMI, SRENA
- Intelligent power
- KVM/IP

Marketing / Branding / Labeling Design Specifications
Seneca Data can assist with:
- Marketing-related documents: training guides, user manuals, tech specs, and white papers
- Pre-sales collaboration
- Branding: boxes, back plate, front bezel, silk-screening, and documentation

Pre-order Build-up / Testing / Qualification and Staging of Clusters
By nature, HPC clusters demand the latest advances in hardware to accommodate future expandability. Seneca Data helps mitigate technology concerns by pre-building, testing, and qualifying the designated hardware. Upon completion, staging services are provided until the end user's environment is ready to receive the hardware. These services include, but are not limited to:
- Operating system benchmarks on the specific hardware
- Driver optimization on all hardware
- Management and utilities suite installation
- Customer login, testing, and pre-inspection prior to delivery
- Staging prior to delivery
- Engineer onsite during installation

Typical Cluster-Related Sales Engineering Considerations

Site Location Considerations
- Is the site building built and completed?
- Does the site have adequate power?
- Does the site have adequate air conditioning?
- Is the cluster going onto the first floor of the building?
- Does the building have an elevator?
- Is the elevator door wider than 48"?
- Does the site have a raised floor with easy access for cabling?
- Can the cluster be rolled to its final location in the building?
- Will the cluster be moved to another site later?
- Will the cluster be phased out after a certain date?
- Will the cluster be in a limited-access space?
- Will the cluster be in a closed-access space?

Rack Cabinet Related Considerations
- Is there a height restriction on the cabinet?
- Is there a depth restriction on the cabinet?
- Should the back door of the cabinet be a perforated (screen) door?
- Should the front door of the cabinet be glass?
- Should the front door of the cabinet be labeled?
- Should the back door of the cabinet be labeled?
- Does the customer need additional open space per rack?
- Should the cabinet be a color other than black?

System Hardware Related Considerations
- How many total compute nodes?
- Does the customer prefer Intel processors?
- Does the customer prefer AMD processors?
- Does the customer need a graphics processing unit (GPU)?
- Does the customer prefer SAS drives?
- Does the customer prefer SCSI drives?
- Does the customer prefer SATA drives?
- Does the customer need 1, 2, or 3 NICs (not counting InfiniBand)?

Security Related Considerations
- Does the customer want SSH enabled or disabled?
- Does the customer want Telnet enabled or disabled?
- Does the customer want ports disabled?
- What naming convention is needed on the nodes?

Storage Related Considerations
- Does the customer want a storage array with the cluster?

Management Related Considerations
- Does the customer want serial out-of-band management?
- Does the customer want KVM/IP management?
- Does the customer want intelligent power (cold-reboot ability)?

Conclusion
Seneca Data is an established manufacturer of compute clusters. Our expertise in design, manufacturing, and logistics makes us an ideal partner for single-build projects or contract manufacturing engagements. For more information about Seneca Data and our cluster offering, visit us at www.senecadata.com or www.nexlink.com.

Sources: Wikipedia, Microsoft Compute Clusters, NEC Express5800, iStarUSA, Red Hat, Intel.

Intel, the Intel logo, Xeon, and Xeon Inside are trademarks or registered trademarks of Intel Corporation in the U.S. and other countries.