UCS M-Series Modular Servers

UCS M-Series Modular Servers: The Next Wave of UCS Innovation
Marian Klas, Cisco Systems, June 2015

Cisco UCS - Powering Applications at Every Scale
- Edge-scale computing: seamlessly extend the data center to the edge
- Data center core: customers need power and operational simplicity
- Cloud-scale computing: a more efficient way to power cloud-scale applications

Cloud-Scale Inverts Computing Architecture
- Core enterprise workloads (SCM, ERP/financial, legacy CRM, email): many applications consolidated onto a single server through a hypervisor
- Cloud-scale workloads (online content, gaming, mobile, IoT, e-commerce): a single application distributed across many servers

Cisco System Link Technology
Extending the UCS fabric inside the server: compute nodes connect to shared infrastructure.

Why UCS M-Series - Expanding the Infrastructure Portfolio
- UCS M-Series was designed to complement the compute infrastructure requirements of the data center.
- The goal of the M-Series is to offer smaller compute nodes that meet the needs of scale-out applications while taking advantage of the management infrastructure and converged innovation of UCS.
- By disaggregating the server components, UCS M-Series enables a component life-cycle management strategy rather than a server-by-server strategy.
- M-Series is a platform for flexible and rapid deployment of compute, storage, and networking resources.

UCS M-Series Overview
- Brand-new modular compute platform: a product-line extension to the currently shipping B-Series and C-Series
- Powered by the next-generation UCS ASIC technology used in the VIC
- Focus areas: density, power efficiency, modularity, and ease of management (UCS Manager)
- Chassis: 2 RU; front: 8 cartridge slots; rear: 2 x 40 Gb uplinks and 2 x 1400 W power supplies
- Compute cartridge: Intel Xeon E3 (4 cores), 32 GB memory
- Storage: 4 x SSD bays, with chassis capacity scaling from 480 GB SATA to 6.4 TB SAS
- Aggregate capacity per chassis: 16 servers, 64 cores, 512 GB memory, 6.4 TB storage
- FCS in Q4 CY2014

UCS M-Series Overview - Typical Rack Deployment
- Each chassis (2 RU, 8 cartridges, 2 x 1400 W power supplies) uplinks at 2 x 40 Gb to UCS 62xx Fabric Interconnects (10 Gb ports)
- Rack capacity: 20 chassis, 320 servers, 1280 cores, 10 TB memory, 128 TB storage, 40 x 40 Gb ports (the arithmetic is checked in the sketch below)
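
These rack-level figures follow directly from the per-chassis numbers on the preceding slides; a quick back-of-the-envelope check, using Python only as a calculator:

```python
# Rack-level capacity derived from the per-chassis figures on the slide.
chassis_per_rack = 20
servers_per_chassis = 16      # 8 cartridges x 2 servers
cores_per_server = 4          # Intel Xeon E3
mem_gb_per_server = 32
storage_tb_per_chassis = 6.4  # 4 SSDs at up to 1.6 TB each
uplinks_per_chassis = 2       # 40 Gb QSFP ports

servers = chassis_per_rack * servers_per_chassis        # 320 servers
cores = servers * cores_per_server                      # 1280 cores
mem_tb = servers * mem_gb_per_server / 1024             # ~10 TB memory
storage_tb = chassis_per_rack * storage_tb_per_chassis  # 128 TB storage
ports_40g = chassis_per_rack * uplinks_per_chassis      # 40 x 40 Gb ports
print(servers, cores, round(mem_tb, 1), storage_tb, ports_40g)
```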

UCS M-Series Specifics
- Chassis: 2 RU; aggregate capacity per chassis: 16 servers, 64 cores, 512 GB memory
- CPU: Intel Xeon E3 1275L v3, 1240L v3, or 1220L v3
- Memory: 8 GB UDIMMs, 32 GB max per cartridge
- Disks: 2 or 4 SSDs; SATA (240 GB, 480 GB, 960 GB) or SAS (400 GB, 800 GB, 1.6 TB)
- RAID controller: Cisco 12 Gb Modular RAID Controller with 2 GB flash-backed write cache (FBWC)
- Network: 2 x 40 Gbps
- Power: 2 x 1400 W

System Architecture Diagram
Shared resources (2 x 40 Gb uplinks, the storage controller with 4 x SSD drives, redundant power supplies, and the CMC) connect across the midplane to the compute resources: eight cartridges.

UCS M4308 Chassis
- 2U rack-mount chassis (3.5 in. H x 30.5 in. L x 17.5 in. W)
- 3rd-generation UCS VIC ASIC (System Link Technology)
- 1 Chassis Management Controller (CMC) that manages chassis resources
- 8 cartridge slots (x4 PCIe lanes per slot); slots and ASIC are adaptable for future use
- Four 2.5 in. SFF drive bays (SSD only)
- 1 internal x8 PCIe Gen3 connection to the Cisco 12 Gb SAS RAID card
- 1 internal x8 PCIe Gen3 slot, half height, half width (future use)
- 6 hot-swappable fans, accessed by removing the top rear cover
- 2 external Ethernet management ports and 1 external serial console port (connected to the CMC for out-of-band troubleshooting)
- 2 AC power module bays (1+1 redundant)
- 2 QSFP 40 Gbps ports (data and management, for both servers and chassis)

UCS M142 Cartridge
- The UCS M142 cartridge contains 2 distinct Intel Xeon E3 servers
- Each server is independently manageable and has its own memory, CPU, and management controller (CIMC)
- The cartridge connects to the midplane for access to power, network, storage, and management
- x2 PCIe Gen3 lanes connect each server's host PCIe interface to the System Link Technology for access to storage and network
- 15.76 Gbps of I/O bandwidth per server
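
The per-server bandwidth figure lines up with raw PCIe arithmetic; a quick check, assuming standard Gen3 signaling (8 GT/s per lane with 128b/130b encoding), where the small difference from the quoted 15.76 Gbps is rounding:

```python
# Check of the per-server I/O bandwidth: PCIe Gen3 runs at 8 GT/s per lane
# with 128b/130b encoding, and each M142 server gets x2 lanes.
gt_per_lane = 8.0            # Gen3 raw signaling rate (GT/s)
encoding = 128.0 / 130.0     # 128b/130b line-code efficiency
lanes_per_server = 2

per_server_gbps = gt_per_lane * encoding * lanes_per_server
print(f"{per_server_gbps:.2f} Gbps")  # ~15.75 Gbps, essentially the slide's 15.76 figure
```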

System Link Technology

System Link Technology Overview
- System Link Technology is built on proven Cisco Virtual Interface Card (VIC) technologies
- VIC technologies use the standard PCIe architecture to present an endpoint device to the compute resources
- VIC technology is a key component of the UCS converged infrastructure
- In the M-Series platform, this technology has been extended to provide access to PCIe resources local to the chassis, such as storage
(Diagram: operating systems seeing varying numbers of Ethernet devices, e.g. eth0-eth1 on one host and eth0-eth3 on another.)

System Link Technology Overview (continued)
- Traditional PCIe cards require additional PCIe slots for additional resources
- System Link Technology provides a mechanism to connect to the PCIe architecture of the host, presenting unique PCIe devices to each server
- Each device created on the ASIC is a unique PCIe physical function
(Diagram: a conventional PCIe bus with x8 Gen3 slots, contrasted with the System Link ASIC's x32 PCIe Gen3 lanes distributed across the compute resources.)

Cisco Innovation - The Converged Network Adapter (M71KR-Q/E)
- The initial converged network adapter from Cisco combined two physical PCIe devices on one card: an Intel Ethernet adapter and a QLogic HBA
- The ASIC provided a simulated PCIe switch through which each device was connected to the OS, so the devices used native drivers and PCIe communications
- The ASIC also provided the mechanism to transport FC traffic in FCoE frames to the Fabric Interconnect
- The number of PCIe devices was limited by the hardware installed behind the ASIC

Cisco Innovation - The Cisco VIC (M81KR, 1280, and 1380)
- The Cisco VIC extended the first converged network adapters but created two new PCIe device types: the vNIC and the vHBA
- Each device is a PCIe physical function presented to the operating system as a unique PCIe endpoint
- A driver was created for each device: the enic driver for the vNIC and the fnic driver for the vHBA
- The ASIC allows the creation of many devices (e.g. 128, 256, or 1024), limited only by the ASIC's capabilities

System Link Technology - 3rd Generation of Innovation
- System Link Technology provides the same capability as a VIC: configuring PCIe devices for use by the server
- The difference is that System Link is an ASIC within the chassis, not a PCIe card
- The ASIC is core to the M-Series platform and provides access to I/O resources
- The ASIC connects devices to the compute resources through the system midplane
- System Link provides the ability to access shared network and storage resources
(Diagram: SCSI commands flowing from a server to a virtual drive.)

System Link Technology - 3rd Generation of Innovation (continued)
The same ASIC is used in the 3rd-generation VIC; M-Series takes advantage of additional features, including:
- A Gen3 PCIe root complex for connectivity to chassis PCIe cards (e.g. storage)
- 32 Gen3 PCIe lanes connected to the cartridge CPUs
- 2 x 40 Gbps QSFP uplinks
- Scale to 1024 PCIe devices created on the ASIC (e.g. vNICs)
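
The lane counts quoted across these slides are mutually consistent; a one-line check, taking the slot and cartridge counts from the chassis and cartridge slides:

```python
# The ASIC's 32 Gen3 lanes, spread over 8 cartridge slots with 2 servers per
# cartridge, reproduce the x4-per-slot and x2-per-server figures quoted earlier.
total_lanes, slots, servers_per_cartridge = 32, 8, 2
lanes_per_slot = total_lanes // slots                        # x4 per cartridge slot
lanes_per_server = lanes_per_slot // servers_per_cartridge   # x2 per server
print(lanes_per_slot, lanes_per_server)
```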

Introduction of the snic
- The SCSI NIC (snic) is the PCIe device that provides access to the storage components of the UCS M-Series chassis
- The snic presents to the operating system as a PCIe-connected local storage controller
- Communication between the operating system and the drive uses standard SCSI commands
(Diagram: SCSI commands from the host to a virtual drive presented as LUN0.)

Chassis Storage Components
The chassis storage components consist of the Cisco Modular 12 Gb SAS RAID controller with 2 GB flash-backed write cache, the midplane, and 4 SSD drives.
- The same controller is used in the C-Series M4 servers
- The controller supports RAID 0, 1, 5, 6, 10, 50, and 60
- Up to 4 SSDs are supported in the chassis drive bays
- 6 Gb or 12 Gb SAS or SATA drives are supported
- There is NO support for spinning media: SSD only (power, heat, performance)
- All drives are hot-swappable
- RAID rebuild is supported for failed drives

Storage Controller - Drive Groups
- A RAID configuration groups drives together to form a RAID volume
- Drive groups define the operation of the physical drives (RAID level, write-back characteristics, stripe size)
- Depending on the RAID level, a group can be as small as 1 drive (RAID 0) or, in the case of M-Series, up to 4 drives (RAID 0, 1, 5, 10)
- An M-Series chassis supports multiple drive groups
- Drive groups are configured through a new policy setting in UCS Manager
(Diagram: the four chassis drives arranged into example RAID 0 drive groups.)

Storage Controller - Virtual Drives
- After creating a drive group on the controller, one must then create a virtual drive, which is presented as a LUN (drive) to the operating system
- It is common (but not required) to use the entire space available in the drive group
- If you do not use all the available space, additional virtual drives can be created on the drive group, each presented as a LUN
- In UCS M-Series, virtual drives are created on a drive group and assigned to a specific server within the chassis as local storage (see the sketch below)
(Diagram: 100 GB and 200 GB virtual drives carved from a 1.6 TB RAID 1 drive group and two 800 GB RAID 0 drive groups.)
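
To make the drive-group/virtual-drive relationship concrete, here is a minimal sketch in Python. The class and method names are hypothetical, not a Cisco API; it only models the capacity accounting the slide describes:

```python
# Illustrative model (hypothetical names): a drive group owns a fixed usable
# capacity, and virtual drives are carved out of it until space is exhausted.
from dataclasses import dataclass, field

@dataclass
class DriveGroup:
    name: str
    raid_level: str
    capacity_gb: int
    virtual_drives: dict = field(default_factory=dict)

    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.virtual_drives.values())

    def create_virtual_drive(self, name: str, size_gb: int) -> None:
        # A virtual drive may be any size that fits in the remaining space.
        if size_gb > self.free_gb():
            raise ValueError(f"{name}: only {self.free_gb()} GB free in {self.name}")
        self.virtual_drives[name] = size_gb

# Example mirroring the diagram: a RAID 1 group over two 1.6 TB SAS SSDs
# yields 1.6 TB usable; carve two LUNs and leave room for more.
dg1 = DriveGroup("DG1", "RAID1", capacity_gb=1600)
dg1.create_virtual_drive("vd5", 200)
dg1.create_virtual_drive("vd29", 100)
print(dg1.free_gb())  # 1300 GB still available for additional virtual drives
```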

Storage Capabilities
The Cisco Modular Storage Controller supports:
- 16 drive groups
- 64 virtual drives across all drive groups at initial ship
- Up to 64 virtual drives in a single drive group
UCS M-Series supports:
- 2 virtual drives per server (service profile)
- Service profile mobility within the chassis
Maximum flexibility:
- Virtual drives can be different sizes within a drive group
- Drive groups can have different numbers of virtual drives
- All virtual drives in a drive group share the same read/write and stripe-size settings
Drive groups configured as fault-tolerant RAID groups support:
- UCSM reporting of degraded virtual drives
- UCSM reporting of failed drives
- Hot swap from the back of the chassis
- Automatic RAID rebuild, with rebuild status reported by UCS Manager
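
The initial-ship limits above can be restated as a tiny validation sketch (hypothetical names, Python as pseudocode) so a proposed layout can be checked against them:

```python
# Initial-ship storage limits, taken from the slide above.
MAX_DRIVE_GROUPS = 16
MAX_VIRTUAL_DRIVES_TOTAL = 64
MAX_LUNS_PER_SERVER = 2

def validate(drive_groups, luns_per_server):
    """drive_groups: {group_name: number_of_virtual_drives};
    luns_per_server: {server_name: lun_count}."""
    assert len(drive_groups) <= MAX_DRIVE_GROUPS
    assert sum(drive_groups.values()) <= MAX_VIRTUAL_DRIVES_TOTAL
    assert all(n <= MAX_LUNS_PER_SERVER for n in luns_per_server.values())

validate({"DG1": 2, "DG2": 1}, {"server-1": 2, "server-2": 1})  # passes
```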

Mapping Disk Resources to M-Series Servers
- From the perspective of the server in the cartridge, the storage controller and the virtual drive are local resources
- Through policies and service profiles, UCS Manager creates the snic in the System Link Technology as a PCIe endpoint for the server
- Through the same policies and service profiles, UCS Manager maps the virtual drive from the RAID disk group to the server through the configured snic
- Each virtual drive configured in the system is applied to only one server and appears as a local drive to that server
- The operating system and snic send SCSI commands directly to the virtual drive through the PCIe architecture in the System Link Technology
- The end result is SCSI drives attached to the servers as LOCAL storage (e.g. /dev/sda and /dev/sdb, or C:\ and D:\)
(Diagram: virtual drives such as a 200 GB vd5 and a 100 GB vd29, mapped from RAID drive groups to individual servers.)

Mapping Network Resources to M-Series Servers
- The System Link Technology provides the network interface connectivity for all of the servers
- Virtual NICs (vNICs) are created for each server and are mapped to the appropriate fabric through the service profile in UCS Manager
- Servers can have up to 4 vNICs; the operating system sees each vNIC as a 10 Gbps Ethernet interface
- The interfaces can be rate-limited and provide QoS marking in hardware, and are 802.1Q-capable
- Fabric failover is supported: in the event of a failure, traffic is automatically moved to the second fabric (see the sketch below)
(Diagram: eth0 and eth1 on each server's host PCIe interface, mapped to Fabric Interconnect A and Fabric Interconnect B.)
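
A small illustrative model of that per-server network shape; the names are hypothetical (this is not a Cisco API), but the fields mirror the attributes the slide describes:

```python
# Hypothetical model of an M-Series server's vNICs: up to four interfaces,
# each pinned to a fabric with optional failover, a hardware rate limit,
# and an 802.1p class-of-service marking.
from dataclasses import dataclass

MAX_VNICS_PER_SERVER = 4   # M-Series limit at initial ship

@dataclass
class Vnic:
    name: str             # e.g. "eth0"
    fabric: str           # "A" or "B": primary fabric interconnect
    failover: bool        # move traffic to the other fabric on failure
    rate_limit_mbps: int  # hardware rate limit (0 = line rate; OS sees 10 Gbps)
    cos: int              # 802.1p class of service marked in hardware

def make_vnics(specs):
    if len(specs) > MAX_VNICS_PER_SERVER:
        raise ValueError("M-Series servers support at most 4 vNICs")
    return [Vnic(*s) for s in specs]

# Two vNICs split across fabrics, both with fabric failover enabled:
vnics = make_vnics([("eth0", "A", True, 0, 0), ("eth1", "B", True, 0, 0)])
```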

Networking Capabilities
- The System Link ASIC supports 1024 virtual devices; the scale for a particular M-Series chassis depends on the number of uplinks to the Fabric Interconnect
- All network forwarding is provided by the Fabric Interconnects; there is no forwarding local to the chassis
- At initial ship, System Link supports Ethernet traffic only; M-Series devices can connect to external storage volumes over NFS, CIFS, HTTPS, or iSCSI (FCoE is to be supported in a future release)
- iSCSI boot is planned for initial ship; the OS support matrix is TBD

Connectivity and Management

Role of the Fabric Interconnect and UCS Manager
- UCS M-Series operates only in UCS managed mode; UCS Manager is required to configure network and storage resources
- The QSFP cable connects to 4 ports on the Fabric Interconnect, which are port-channeled to provide 40 Gbps of bandwidth per uplink
- All switching is local to the Fabric Interconnects
- System Link Technology uses the pre-standard 802.1BR VN-TAG protocol to support NIV for the server interfaces
- Each vNIC has a unique virtual Ethernet interface on the Fabric Interconnect
- Statistics and counters are centralized by UCS Manager for each interface

UCS Manager - Disk Group Configuration
- The ability to create multiple drive groups on a chassis introduces a new Disk Group policy in UCS Manager
- The Disk Group policy defines the resources required to configure a drive group on the controller
- The policy also defines the operating parameters of that drive group, such as RAID level, stripe size, and write-back policy
- Creating a policy DOES NOT create a drive group on a chassis

UCS Manager - Virtual Drive Configuration
- Disk Group policies only define the resources on which the virtual drives used by the servers are created
- To define the virtual drives that a server will create and use, UCS Manager introduces the concept of a Storage Profile
- The Storage Profile defines the size of each virtual drive and the type of disk group on which it should be created
- Storage Profiles do not create drives on the chassis; they simply define a policy

Service Profile - Applying Policies to Physical Servers
- Service profiles consume the Disk Group policy and Storage Profile, which are selected when the service profile is built
- When a service profile is applied to a physical server, UCS Manager checks the chassis for the resources required by the drive group(s) selected for each local LUN in the Storage Profile
- If a matching drive group exists or can be created, UCS Manager determines whether there is space available on that group; if so, it configures the resources
- If the resources do not exist at either step, the service profile association fails (the sketch below walks through this decision)
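
A sketch of that association logic in Python, reusing the hypothetical DriveGroup class from the earlier virtual-drive sketch. The real UCS Manager checks (RAID sizing, disk matching) are far richer; this only traces the decision order described above:

```python
# Trace of the association check: for each local LUN in the storage profile,
# find a matching drive group, or create one from unconfigured disks, then
# verify free space. Any miss fails the whole association.
def associate(drive_groups, spare_disk_gb, storage_profile):
    """drive_groups: list of DriveGroup (earlier sketch);
    spare_disk_gb: list of capacities of disks not yet in any group;
    storage_profile: list of (lun_name, size_gb, raid_level) requests."""
    for lun_name, size_gb, raid_level in storage_profile:
        # Step 1: does a matching drive group with enough free space exist?
        group = next((g for g in drive_groups
                      if g.raid_level == raid_level and g.free_gb() >= size_gb),
                     None)
        # Step 2: if not, can one be created from unconfigured disks?
        if group is None and sum(spare_disk_gb) >= size_gb:
            group = DriveGroup(f"auto-{lun_name}", raid_level, sum(spare_disk_gb))
            drive_groups.append(group)
            spare_disk_gb.clear()   # crude stand-in for real RAID sizing rules
        if group is None:
            raise RuntimeError(f"association failed: no resources for {lun_name}")
        group.create_virtual_drive(lun_name, size_gb)  # configure the resource
```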

Service Profile Mobility
- When a virtual drive (local LUN) is created for a service profile, it is physically located on the drives of the chassis where the profile was first applied
- Service profiles can be moved to any system in the domain, BUT the local LUN and its data remain on the chassis where the profile was first associated
- Service profile mobility between chassis will NOT move the data LUN to the new chassis
(Diagram: a UCS service profile template providing unified device management through network, storage, and server policies.)

UCS Manager for M-Series
- Version 2.5 (M release) at FCS: M-Series only, Ethernet only
- Support for 2 LUNs per host and 4 vNICs per host
- Manageability by UCS Central introduced with version 1.3
- The merge of the B-, C-, and M-Series into a single build will be UCSM 3.1
- FCoE support is targeted for the UCSM 3.1 release

Management Scalability
- Cloud-scale applications rely on large numbers of servers; the UCS management tools provide the ability to manage these servers at scale
- Each server has its own dedicated integrated management controller, and each chassis has its own management controller
- System Link Technology uses the converged infrastructure to provide management connectivity from UCS Manager to each server
- UCS Manager supports 20 UCS M-Series chassis in a single domain, for management of up to 320 M-Series servers
- UCS Central 1.3 introduces support for M-Series servers to enable further scaling

Recap - UCS M-Series Modular Servers
- True server disaggregation: compact 2 RU chassis with 8 compute cartridges
- Based on Cisco Virtual Interface Card technology: the 3rd-generation ASIC creates a local fabric for the compute nodes
- Lightweight compute cartridge: two independent Intel Xeon E3 servers, with no adapters or HDDs in the cartridge
- Shared local resources: four shared SSDs in the chassis, shared dual 40 Gb connectivity, shared management, and shared network and storage resources
- Compute density: 16 Intel Xeon E3 compute nodes in a 2 RU chassis; each cartridge holds two independent compute nodes

UCS M-Series Business Outcomes
Reduced CapEx and OpEx
- Optimal performance per watt and per watt per dollar; reduced space, power, and cooling
- Modular design: independently refresh individual server components; longevity of the shared infrastructure
- Cost amortization beyond sheet metal, power, and fan supplies: includes local disks, the I/O assembly, and interface cards
Faster time to market
- Ease of deployment: service profiles accelerate initial provisioning and ongoing maintenance; fewer components to install, provision, and maintain
- Right-size your infrastructure with workload-optimized compute elements
Deliver SLAs with ease
- Reduced business impact of component failures
- Common management framework: no need for dedicated staff or retraining; leverage established IT workflows
Maximize your IT investment
- Establish CoS and QoS; easily align infrastructure to optimize cost and revenue models
- Mix and match value and performance

Thank you.